STANDARD HANDBOOK OF ELECTRONIC ENGINEERING

PART 1

PRINCIPLES AND TECHNIQUES

Section 1. Information, Communication, Noise, and Interference
Section 2. Systems Engineering and Systems Management
Section 3. Reliability
Section 4. Computer-Assisted Digital System Design

On the CD-ROM:
Basic Phenomena
Mathematics, Formulas, Definitions, and Theorems
Circuit Principles


SECTION 1

INFORMATION, COMMUNICATION, NOISE, AND INTERFERENCE

The telephone profoundly changed our methods of communication, thanks to Alexander Graham Bell and other pioneers (Bell, incidentally, declined to have a telephone in his home!). Communication has been at the heart of the information age. Electronic communication deals with transmitters and receivers of electromagnetic waves. Even digital communications systems rely on this phenomenon.
This section of the handbook covers information sources, codes and coding, communication channels, error correction, continuous and band-limited channels, digital data transmission and pulse modulation, and noise and interference.

C.A.

In This Section:

CHAPTER 1.1 COMMUNICATION SYSTEMS
  CONCEPTS
  SELF-INFORMATION AND ENTROPY
  ENTROPY OF DISCRETE RANDOM VARIABLES
  MUTUAL INFORMATION AND JOINT ENTROPY

CHAPTER 1.2 INFORMATION SOURCES, CODES, AND CHANNELS
  MESSAGE SOURCES
  MARKOV INFORMATION SOURCE
  NOISELESS CODING
  NOISELESS-CODING THEOREM
  CONSTRUCTION OF NOISELESS CODES
  CHANNEL CAPACITY
  DECISION SCHEMES
  THE NOISY-CODING THEOREM
  ERROR-CORRECTING CODES
  PARITY-CHECK CODES
  OTHER ERROR-DETECTING AND ERROR-CORRECTING CODES
  CONTINUOUS-AMPLITUDE CHANNELS
  MAXIMIZATION OF ENTROPY OF CONTINUOUS DISTRIBUTIONS
  GAUSSIAN SIGNALS AND CHANNELS
  BAND-LIMITED TRANSMISSION AND THE SAMPLING THEOREM

CHAPTER 1.3 MODULATION
  MODULATION THEORY
  ELEMENTS OF SIGNAL THEORY
  DURATION AND BANDWIDTH-UNCERTAINTY RELATIONSHIPS
  CONTINUOUS MODULATION
  LINEAR, OR AMPLITUDE, MODULATION
  DOUBLE-SIDEBAND AMPLITUDE MODULATION (DSBAM)
  DOUBLE-SIDEBAND AMPLITUDE MODULATION, SUPPRESSED CARRIER
  VESTIGIAL-SIDEBAND AMPLITUDE MODULATION (VSBAM)
  SINGLE-SIDEBAND AMPLITUDE MODULATION (SSBAM)
  BANDWIDTH AND POWER RELATIONSHIPS FOR AM
  ANGLE (FREQUENCY AND PHASE) MODULATION

CHAPTER 1.4 DIGITAL DATA TRANSMISSION AND PULSE MODULATION
  DIGITAL TRANSMISSION
  PULSE-AMPLITUDE MODULATION (PAM)
  QUANTIZING AND QUANTIZING ERROR
  SIGNAL ENCODING
  BASEBAND DIGITAL-DATA TRANSMISSIONS
  PULSE-CODE MODULATION (PCM)
  SPREAD-SPECTRUM SYSTEMS

CHAPTER 1.5 NOISE AND INTERFERENCE
  GENERAL
  RANDOM PROCESSES
  CLASSIFICATION OF RANDOM PROCESSES
  ARTIFICIAL NOISE

Section Bibliography:

Of Historical Significance

Davenport, W. B., Jr., and W. L. Root, "An Introduction to the Theory of Random Signals and Noise," McGraw-Hill, 1958. (Reprint edition published by IEEE Press, 1987.)
Middleton, D., "Introduction to Statistical Communication Theory," McGraw-Hill, 1960. (Reprint edition published by IEEE Press, 1996.)
Sloane, N. J. A., and A. D. Wyner (eds.), "Claude Elwood Shannon: Collected Papers," IEEE Press, 1993.

General

Carlson, A. B., et al., "Communications Systems," 4th ed., McGraw-Hill, 2001.
Gibson, J. D., "Principles of Digital and Analog Communications," 2nd ed., Macmillan, 1993.
Haykin, S., "Communication Systems," 4th ed., Wiley, 2000.
Papoulis, A., and S. U. Pillai, "Probability, Random Variables, and Stochastic Processes," 4th ed., McGraw-Hill, 2002.
Thomas, J. B., "An Introduction to Communication Theory and Systems," Springer-Verlag, 1987.
Ziemer, R. E., and W. H. Tranter, "Principles of Communications: Systems, Modulation, and Noise," 5th ed., Wiley, 2001.

Information Theory

Blahut, R. E., "Principles and Practice of Information Theory," Addison-Wesley, 1987.
Cover, T. M., and J. A. Thomas, "Elements of Information Theory," Wiley, 1991.
Gallager, R., "Information Theory and Reliable Communication," Wiley, 1968.

Coding Theory

Blahut, R. E., "Theory and Practice of Error Control Codes," Addison-Wesley, 1983.
Clark, G. C., Jr., and J. B. Cain, "Error-Correction Coding for Digital Communications," Plenum Press, 1981.
Lin, S., and D. J. Costello, "Error Control Coding," Prentice-Hall, 1983.

Digital Data Transmission

Barry, J. R., D. G. Messerschmitt, and E. A. Lee, "Digital Communications," 3rd ed., Kluwer, 2003.
Proakis, J. G., "Digital Communications," 4th ed., McGraw-Hill, 2000.


CHAPTER 1.1

COMMUNICATION SYSTEMS

Geoffrey C. Orsak, H. Vincent Poor, John B. Thomas

CONCEPTS

The principal problem in most communication systems is the transmission of information in the form of messages or data from an originating information source S to a destination or receiver D. The method of transmission is frequently by means of electric signals under the control of the sender. These signals are transmitted via a channel C, as shown in Fig. 1.1.1. The set of messages sent by the source will be denoted by {U}. If the channel were such that each member of {U} were received exactly, there would be no communication problem. However, because of channel limitations and noise, a corrupted version {U*} of {U} is received at the information destination.
It is generally desired that the distorting effects of channel imperfections and noise be minimized and that the number of messages sent over the channel in a given time be maximized. These two requirements are interacting, since, in general, increasing the rate of message transmission increases the distortion or error. However, some forms of message are better suited for transmission over a given channel than others, in that they can be transmitted faster or with less error. Thus it may be desirable to modify the message set {U} by a suitable encoder E to produce a new message set {A} more suitable for a given channel. Then a decoder E⁻¹ will be required at the destination to recover {U*} from the distorted set {A*}. A typical block diagram of the resulting system is shown in Fig. 1.1.2.

SELF-INFORMATION AND ENTROPY

Information theory is concerned with the quantification of the communications process. It is based on probabilistic modeling of the objects involved. In the model communication system given in Fig. 1.1.1, we assume that each member of the message set {U} is expressible by means of some combination of a finite set of symbols called an alphabet. Let this source alphabet be denoted by the set {X} with elements x1, x2, . . . , xM, where M is the size of the alphabet. The notation p(xi), i = 1, 2, . . . , M, will be used for the probability of occurrence of the ith symbol xi. In general the set of numbers {p(xi)} can be assigned arbitrarily provided that

$$p(x_i) \ge 0 \qquad i = 1, 2, \ldots, M \tag{1}$$

and

$$\sum_{i=1}^{M} p(x_i) = 1 \tag{2}$$


FIGURE 1.1.1 Basic communication system.

A measure of the amount of information contained in the ith symbol xi can be defined based solely on the probability p(xi). In particular, the self-information I(xi) of the ith symbol xi is defined as

$$I(x_i) = \log\frac{1}{p(x_i)} = -\log p(x_i) \tag{3}$$

This quantity is a decreasing function of p(xi), with the endpoint values of infinity for the impossible event and zero for the certain event. It follows directly from Eq. (3) that I(xi) is a discrete random variable, i.e., a real-valued function defined on the elements xi of a probability space. Of the various statistical properties of this random variable I(xi), the most important is the expected value, or mean, given by

$$E\{I(x_i)\} = H(X) = \sum_{i=1}^{M} p(x_i)\,I(x_i) = -\sum_{i=1}^{M} p(x_i)\log p(x_i) \tag{4}$$

This quantity H(X) is called the entropy of the distribution p(xi). If p(xi) is interpreted as the probability of the ith state of a system in phase space, then this expression is identical to the entropy of statistical mechanics and thermodynamics. Furthermore, the relationship is more than a mathematical similarity. In statistical mechanics, entropy is a measure of the disorder of a system; in information theory, it is a measure of the uncertainty associated with a message source. In the definitions of self-information and entropy, the choice of the base for the logarithm is arbitrary, but of course each choice results in a different system of units for the information measures. The most common bases used are base 2, base e (the natural logarithm), and base 10. When base 2 is used, the unit of I(⋅) is called the binary digit or bit, which is a very familiar unit of information content. When base e is used, the unit is the nat; this base is often used because of its convenient analytical properties in integration, differentiation, and the like. The base 10 is encountered only rarely; the unit is the Hartley.
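As a concrete illustration of these definitions and units, the following Python fragment (a minimal sketch added here for clarity; it is not part of the handbook text) evaluates the self-information of Eq. (3) and the entropy of Eq. (4) for a small four-symbol alphabet in bits, nats, and Hartleys.

import math

def self_information(p, base=2):
    """Self-information I(x) = -log p(x) of Eq. (3)."""
    return -math.log(p, base)

def entropy(probs, base=2):
    """Entropy H(X) of Eq. (4); terms with p = 0 contribute nothing."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]               # a 4-symbol source alphabet
print([self_information(p) for p in probs])     # [1.0, 2.0, 3.0, 3.0] bits
print(entropy(probs, base=2))                   # 1.75 bits per symbol
print(entropy(probs, base=math.e))              # the same entropy in nats
print(entropy(probs, base=10))                  # and in Hartleys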

ENTROPY OF DISCRETE RANDOM VARIABLES

The more elementary properties of the entropy of a discrete random variable can be illustrated with a simple example. Consider the binary case, where M = 2, so that the alphabet consists of the symbols 0 and 1 with probabilities p and 1 − p, respectively. It follows from Eq. (4) that

$$H_1(X) = -[p\log_2 p + (1-p)\log_2(1-p)] \quad \text{(bits)} \tag{5}$$

FIGURE 1.1.2 Communication system with encoding and decoding.


Equation (5) can be plotted as a function of p, as shown in Fig. 1.1.3, and has the following interesting properties:

1. H1(X) ≥ 0.
2. H1(X) is zero only for p = 0 and p = 1.
3. H1(X) is a maximum at p = 1 − p = 1/2.

FIGURE 1.1.3 Entropy in the binary case.

More generally, it can be shown that the entropy H(X) has the following properties for the general case of an alphabet of size M:

1. H(X) ≥ 0. (6)
2. H(X) = 0 if and only if all of the probabilities are zero except for one, which must be unity. (7)
3. H(X) ≤ logb M. (8)
4. H(X) = logb M if and only if all the probabilities are equal so that p(xi) = 1/M for all i. (9)

MUTUAL INFORMATION AND JOINT ENTROPY

The usual communication problem concerns the transfer of information from a source S through a channel C to a destination D, as shown in Fig. 1.1.1. The source has available for forming messages an alphabet X of size M. A particular symbol xi is selected from the M possible symbols and is sent over the channel C. It is the limitations of the channel that produce the need for a study of information theory.
The information destination has available an alphabet Y of size N. For each symbol xi sent from the source, a symbol yj is selected at the destination. Two probabilities serve to describe the "state of knowledge" at the destination. Prior to the reception of a communication, the state of knowledge of the destination about the symbol xi is the a priori probability p(xi) that xi would be selected for transmission. After reception and selection of the symbol yj, the state of knowledge concerning xi is the conditional probability p(xi|yj), which will be called the a posteriori probability of xi. It is the probability that xi was sent given that yj was received.
Ideally this a posteriori probability for each given yj should be unity for one xi and zero for all other xi. In this case an observer at the destination is able to determine exactly which symbol xi has been sent after the reception of each symbol yj. Thus the uncertainty that existed previously, expressed by the a priori probability distribution of xi, has been removed completely by reception. In the general case it is not possible to remove all the uncertainty, and the best that can be hoped for is that it has been decreased. Thus the a posteriori probability p(xi|yj) is distributed over a number of xi but should be different from p(xi). If the two probabilities are the same, then no uncertainty has been removed by transmission and no information has been transferred.
Based on this discussion and on other considerations that will become clearer later, the quantity I(xi; yj) is defined as the information gained about xi by the reception of yj, where

$$I(x_i; y_j) = \log_b\frac{p(x_i \mid y_j)}{p(x_i)} \tag{10}$$

This measure has a number of reasonable and desirable properties.

Property 1. The information measure I(xi; yj) is symmetric in xi and yj; that is,

$$I(x_i; y_j) = I(y_j; x_i) \tag{11}$$

Property 2. The mutual information I(xi; yj) is a maximum when p(xi|yj) = 1, that is, when the reception of yj completely removes the uncertainty concerning xi:

$$I(x_i; y_j) \le -\log p(x_i) = I(x_i) \tag{12}$$


Property 3. If two communications yj and zk concerning the same message xi are received successively, and if the observer at the destination takes the a posteriori probability of the first as the a priori probability of the second, then the total information gained about xi is the sum of the gains from both communications:

$$I(x_i; y_j, z_k) = I(x_i; y_j) + I(x_i; z_k \mid y_j) \tag{13}$$

Property 4. If two communications yj and yk concerning two independent messages xi and xm are received, the total information gain is the sum of the two information gains considered separately:

$$I(x_i, x_m; y_j, y_k) = I(x_i; y_j) + I(x_m; y_k) \tag{14}$$

These four properties of mutual information are intuitively satisfying and desirable. Moreover, if one begins by requiring these properties, it is easily shown that the logarithmic definition of Eq. (10) is the simplest form that can be obtained.
The definition of mutual information given by Eq. (10) suffers from one major disadvantage. When errors are present, an observer will not be able to calculate the information gain even after the reception of all the symbols relating to a given source symbol, since the same series of received symbols may represent several different source symbols. Thus, the observer is unable to say which source symbol has been sent and at best can only compute the information gain with respect to each possible source symbol. In many cases it would be more desirable to have a quantity that is independent of the particular symbols. A number of quantities of this nature will be obtained in the remainder of this section.
The mutual information I(xi; yj) is a random variable just as was the self-information I(xi); however, two probability spaces X and Y are involved now, and several ensemble averages are possible. The average mutual information I(X; Y) is defined as a statistical average of I(xi; yj) with respect to the joint probability p(xi, yj); that is,

$$I(X;Y) = E_{XY}\{I(x_i; y_j)\} = \sum_i\sum_j p(x_i, y_j)\log\frac{p(x_i \mid y_j)}{p(x_i)} \tag{15}$$

This new function I(X; Y) is the first information measure defined that does not depend on the individual symbols xi or yj. Thus, it is a property of the whole communication system and will turn out to be only the first in a series of similar quantities used as a basis for the characterization of communication systems. This quantity I(X; Y) has a number of useful properties. It is nonnegative; it is zero if and only if the ensembles X and Y are statistically independent; and it is symmetric in X and Y, so that I(X; Y) = I(Y; X).
A source entropy H(X) was given by Eq. (4). It is obvious that a similar quantity, the destination entropy H(Y), can be defined analogously by

$$H(Y) = -\sum_{j=1}^{N} p(y_j)\log p(y_j) \tag{16}$$

This quantity will, of course, have all the properties developed for H(X). In the same way the joint or system entropy H(X, Y) can be defined by

$$H(X,Y) = -\sum_{i=1}^{M}\sum_{j=1}^{N} p(x_i, y_j)\log p(x_i, y_j) \tag{17}$$

If X and Y are statistically independent, so that p(xi, yj) = p(xi)p(yj) for all i and j, then Eq. (17) can be written as

$$H(X,Y) = H(X) + H(Y) \tag{18}$$

On the other hand, if X and Y are not independent, Eq. (17) becomes

$$H(X,Y) = H(X) + H(Y \mid X) = H(Y) + H(X \mid Y) \tag{19}$$


where H(Y X ) and H(X Y ) are conditional entropies given by M

N

H (Y X ) = − ∑ ∑ p( xi , y j ) log p( y j xi )

(20)

i =1 j =1

and by M

N

H ( X Y ) = − ∑ ∑ p( xi , y j ) log p( xi y j ) i =1 j =1

(21)

These conditional entropies each satisfies an important inequality 0 ≤ H(Y H) ≤ H(Y )

(22)

0 ≤ H(X Y) ≤ H(X)

(23)

and

It follows from these last two expressions that Eq. (15) can be expanded to yield

$$I(X;Y) = -H(X,Y) + H(X) + H(Y) \ge 0 \tag{24}$$

This equation can be rewritten in the two equivalent forms

$$I(X;Y) = H(Y) - H(Y \mid X) \ge 0 \tag{25}$$

or

$$I(X;Y) = H(X) - H(X \mid Y) \ge 0 \tag{26}$$

It is also clear, say from Eq. (24), that H(X, Y) satisfies the inequality

$$H(X,Y) \le H(X) + H(Y) \tag{27}$$

Thus, the joint entropy of two ensembles X and Y is a maximum when the ensembles are independent.
At this point it may be appropriate to comment on the meaning of the two conditional entropies H(Y|X) and H(X|Y). Let us refer first to Eq. (26). This equation expresses the fact that the average information gained about a message, when a communication is completed, is equal to the average source information less the average uncertainty that still remains about the message. From another point of view, the quantity H(X|Y) is the average additional information needed at the destination after reception to completely specify the message sent. Thus, H(X|Y) represents the information lost in the channel. It is frequently called the equivocation.
Let us now consider Eq. (25). This equation indicates that the information transmitted consists of the difference between the destination entropy and that part of the destination entropy that is not information about the source; thus the term H(Y|X) can be considered a noise entropy added in the channel.
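To make these relationships concrete, the following Python sketch (added here as an illustration; the joint distribution is arbitrary) computes the entropies of Eqs. (4), (16), and (17) for a two-symbol system and verifies the expansions of Eqs. (24) through (26).

import math

def H(probs):
    """Entropy of a probability list, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# joint distribution p(x_i, y_j) for a 2x2 system (illustrative values)
p_xy = [[0.4, 0.1],
        [0.1, 0.4]]

p_x = [sum(row) for row in p_xy]                 # marginal p(x_i)
p_y = [sum(col) for col in zip(*p_xy)]           # marginal p(y_j)

H_X  = H(p_x)
H_Y  = H(p_y)
H_XY = H([p for row in p_xy for p in row])       # joint entropy, Eq. (17)

I = H_X + H_Y - H_XY                             # Eq. (24)
print(I)                                         # about 0.278 bits
print(abs(I - (H_Y - (H_XY - H_X))) < 1e-12)     # Eq. (25): noise entropy H(Y|X)
print(abs(I - (H_X - (H_XY - H_Y))) < 1e-12)     # Eq. (26): equivocation H(X|Y)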


CHAPTER 1.2

INFORMATION SOURCES, CODES, AND CHANNELS

Geoffrey C. Orsak, H. Vincent Poor, John B. Thomas

MESSAGE SOURCES

As shown in Fig. 1.1.1, an information source can be considered as emitting a given message ui from the set {U} of possible messages. In general, each message ui will be represented by a sequence of symbols xj from the source alphabet {X}, since the number of possible messages will usually exceed the size M of the source alphabet. Thus sequences of symbols replace the original messages ui, which need not be considered further. When the source alphabet {X} is of finite size M, the source will be called a finite discrete source. The problems of concern now are the interrelationships existing between symbols in the generated sequences and the classification of sources according to these interrelationships.
A random or stochastic process x_t, t ∈ T, can be defined as an indexed set of random variables, where T is the parameter set of the process. If the set T is a sequence, then x_t is a stochastic process with discrete parameter (also called a random sequence or series). One way to look at the output of a finite discrete source is that it is a discrete-parameter stochastic process, with each possible given sequence one of the ensemble members or realizations of the process. Thus the study of information sources can be reduced to a study of random processes.
The simplest case to consider is the memoryless source, where the successive symbols obey the same fixed probability law, so that the one distribution p(xi) determines the appearance of each indexed symbol. Such a source is called stationary.
Let us consider sequences of length n, each member of the sequence being a realization of the random variable xi with fixed probability distribution p(xi). Since there are M possible realizations of the random variable and n terms in the sequence, there must be M^n distinct sequences possible of length n. Let the random variable Xi in the jth position be denoted by Xij, so that the sequence set (the message set) can be represented by

$$\{U\} = X^n = \{X_{i1}, X_{i2}, \ldots, X_{in}\} \qquad i = 1, 2, \ldots, M \tag{1}$$

The symbol X^n is sometimes used to represent this sequence set and is called the nth extension of the memoryless source X. The probability of occurrence of a given message ui is just the product of the probabilities of occurrence of the individual terms in the sequence, so that

$$p\{u_i\} = p(x_{i1})\,p(x_{i2})\cdots p(x_{in}) \tag{2}$$

Now the entropy for the extended source X^n is

$$H(X^n) = -\sum_{X^n} p\{u_i\}\log p\{u_i\} = nH(X) \tag{3}$$


as expected. Note that, if base 2 logarithms are used, then H(X) has units of bits per symbol, n is symbols per sequence, and H(X^n) is in units of bits per sequence. For a memoryless source, all sequence averages of information measures are obtained by multiplying the corresponding symbol average by the number of symbols in the sequence.
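A quick numerical check of Eq. (3) (an added illustration, not from the handbook): enumerate all M^n sequences of a memoryless source and compare the entropy of the extension with nH(X).

import math
from itertools import product

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]        # symbol probabilities, M = 3
n = 4                        # sequence length

# probability of each of the M**n sequences of the nth extension, Eq. (2)
seq_probs = [math.prod(c) for c in product(p, repeat=n)]

print(H(seq_probs))          # entropy of the extension, H(X^n) = 6.0 bits
print(n * H(p))              # equals n * H(X), per Eq. (3)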

MARKOV INFORMATION SOURCE

The memoryless source is not a general enough model in most cases. A constructive way to generalize this model is to assume that the occurrence of a given symbol depends on some number m of immediately preceding symbols. Thus the information source can be considered to produce an mth-order Markov chain and is called an mth-order Markov source.
For an mth-order Markov source, the m symbols preceding a given symbol position are called the state sj of the source at that symbol position. If there are M possible symbols xi, then the mth-order Markov source will have M^m = q possible states sj making up the state set

$$S = \{s_1, s_2, \ldots, s_q\} \qquad q = M^m \tag{4}$$

At a given time corresponding to one symbol position the source will be in a given state sj. There will exist a probability p(sk|sj) = pjk that the source will move into another state sk with the emission of the next symbol. The set of all such conditional probabilities is expressed by the transition matrix T, where

$$T = [p_{jk}] = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1q} \\ p_{21} & p_{22} & \cdots & p_{2q} \\ \vdots & \vdots & & \vdots \\ p_{q1} & p_{q2} & \cdots & p_{qq} \end{bmatrix} \tag{5}$$

A Markov matrix or stochastic matrix is any square matrix with nonnegative elements such that the row sums are unity. It is clear that T is such a matrix, since

$$\sum_{j=1}^{q} p_{ij} = \sum_{j=1}^{q} p(s_j \mid s_i) = 1 \qquad i = 1, 2, \ldots, q \tag{6}$$

Conversely, any stochastic matrix is a possible transition matrix for a Markov source of order m, where q = M^m is equal to the number of rows or columns of the matrix.
A Markov chain is completely specified by its transition matrix T and by an initial distribution vector p giving the probability distribution for the first state occurring. For the memoryless source, the transition matrix reduces to a stochastic matrix where all the rows are identical and are each equal to the initial distribution vector p, which is in turn equal to the vector giving the source alphabet a priori probabilities. Thus, in this case, we have

$$p_{jk} = p(s_k \mid s_j) = p(s_k) = p(x_k) \qquad k = 1, 2, \ldots, M \tag{7}$$

For each state si of the source an entropy H(si) can be defined by

$$H(s_i) = -\sum_{j=1}^{q} p(s_j \mid s_i)\log p(s_j \mid s_i) = -\sum_{k=1}^{M} p(x_k \mid s_i)\log p(x_k \mid s_i) \tag{8}$$

The source entropy H(S) in information units per symbol is the expected value of H(si); that is,

$$H(S) = -\sum_{i=1}^{q}\sum_{j=1}^{q} p(s_i)\,p(s_j \mid s_i)\log p(s_j \mid s_i) = -\sum_{i=1}^{q}\sum_{k=1}^{M} p(s_i)\,p(x_k \mid s_i)\log p(x_k \mid s_i) \tag{9}$$


where p(si) = pi is the stationary state probability and is the ith element of the vector P defined by

$$P = [p_1\; p_2\;\cdots\; p_q] \tag{10}$$

It is easy to show, as in Eq. (8), that the source entropy cannot exceed log M, where M is the size of the source alphabet {X}. For a given source, the ratio of the actual entropy H(S) to the maximum value it can have with the same alphabet is called the relative entropy of the source. The redundancy η of the source is defined as the positive difference between unity and this relative entropy:

$$\eta = 1 - \frac{H(S)}{\log M} \tag{11}$$

The quantity log M is sometimes called the capacity of the alphabet.
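The following Python sketch (an added illustration; the transition matrix is arbitrary) finds the stationary state probabilities of a first-order binary Markov source by iteration and evaluates the source entropy H(S) of Eq. (9) and the redundancy of Eq. (11).

import math

# first-order Markov source over symbols {0, 1}; states are symbols, q = M = 2
T = [[0.9, 0.1],     # p(s_j | s_i): each row sums to 1
     [0.4, 0.6]]

# stationary distribution: iterate p <- pT until it settles
p = [0.5, 0.5]
for _ in range(200):
    p = [sum(p[i] * T[i][j] for i in range(2)) for j in range(2)]

# source entropy, Eq. (9), and redundancy, Eq. (11), base-2 logs
H_S = -sum(p[i] * T[i][j] * math.log2(T[i][j])
           for i in range(2) for j in range(2) if T[i][j] > 0)
eta = 1 - H_S / math.log2(2)    # log M with M = 2

print(p)      # stationary probabilities [0.8, 0.2]
print(H_S)    # about 0.57 bits per symbol
print(eta)    # redundancy of the source, about 0.43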

NOISELESS CODING

The preceding discussion has emphasized the information source and its properties. We now begin to consider the properties of the communication channel of Fig. 1.1.1. In general, an arbitrary channel will not accept and transmit the sequence of xi's emitted from an arbitrary source. Instead the channel will accept a sequence of some other elements ai chosen from a code alphabet A of size D, where

$$A = \{a_1, a_2, \ldots, a_D\} \tag{12}$$

with D generally smaller than M. The elements ai of the code alphabet are frequently called code elements or code characters, while a given sequence of ai's may be called a code word.
The situation is now describable in terms of Fig. 1.1.2, where an encoder E has been added between the source and channel. The process of coding, or encoding, the source consists of associating with each source symbol xi a given code word, which is just a given sequence of ai's. Thus the source emits a sequence of xi's chosen from the source alphabet X, and the encoder emits a sequence of ai's chosen from the code alphabet A. It will be assumed in all subsequent discussions that the code words are distinct, i.e., that each code word corresponds to only one source symbol.
Even though each code word is required to be distinct, sequences of code words may not have this property. An example is code A of Table 1.2.1, where a source of size 4 has been encoded in binary code with characters 0 and 1. In code A the code words are distinct, but sequences of code words are not. It is clear that such a code is not uniquely decipherable. On the other hand, a given sequence of code words taken from code B will correspond to a distinct sequence of source symbols. An examination of code B shows that in no case is a code word formed by adding characters to another word. In other words, no code word is a prefix of another. It is clear that this is a sufficient (but not necessary) condition for a code to be uniquely decipherable. That it is not necessary can be seen from an examination of codes C and D of Table 1.2.1. These codes are uniquely decipherable even though many of the code words are prefixes of other words. In these cases any sequence of code words can be decoded by subdividing the sequence of 0s and 1s to the left of every 0 for code C and to the right of every 0 for code D. The character 0 is the first (or last) character of every code word and acts as a comma; therefore this type of code is called a comma code.

TABLE 1.2.1 Four Binary Coding Schemes

Source symbol    Code A    Code B    Code C    Code D
x1               0         0         0         0
x2               1         10        01        10
x3               00        110       011       110
x4               11        111       0111      1110

Note: Code A is not uniquely decipherable; codes B, C, and D are uniquely decipherable; codes B and D are instantaneous codes; and codes C and D are comma codes.


In general the channel will require a finite amount of time to transmit each code character. The code words should be as short as possible in order to maximize information transfer per unit time. The average length L of a code is given by

$$L = \sum_{i=1}^{M} n_i\,p(x_i) \tag{13}$$

where ni is the length (number of code characters) of the code word for the source symbol xi and p(xi) is the probability of occurrence of xi. Although the average code length cannot be computed unless the set {p(xi)} is given, it is obvious that codes C and D of Table 1.2.1 will have a greater average length than code B unless p(x4) = 0. Comma codes are not optimal with respect to minimum average length.
Let us encode the sequence x3x1x3x2 into codes B, C, and D of Table 1.2.1, as shown below:

Code B: 110011010
Code C: 011001101
Code D: 110011010

Codes B and D are fundamentally different from code C in that codes B and D can be decoded word by word without examining subsequent code characters, while code C cannot be so treated. Codes B and D are called instantaneous codes, while code C is noninstantaneous. The instantaneous codes have the property (previously mentioned) that no code word is a prefix of another code word.
The aim of noiseless coding is to produce codes with the two properties of (1) unique decipherability and (2) minimum average length L for a given source S with alphabet X and probability set {p(xi)}. Codes which have both these properties will be called optimal. It can be shown that if, for a given source S, a code is optimal among instantaneous codes, then it is optimal among all uniquely decipherable codes. Thus it is sufficient to consider instantaneous codes. A necessary property of optimal codes is that source symbols with higher probabilities have shorter code words; i.e.,

$$p(x_i) > p(x_j) \Rightarrow n_i \le n_j \tag{14}$$

The encoding procedure consists of the assignment of a code word to each of the M source symbols. The code word for the source symbol xi will be of length ni; that is, it will consist of ni code elements chosen from the code alphabet of size D. It can be shown that a necessary and sufficient condition for the construction of a uniquely decipherable code is the Kraft inequality

$$\sum_{i=1}^{M} D^{-n_i} \le 1 \tag{15}$$
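As an added illustration (not part of the handbook), the following Python fragment checks both the Kraft inequality of Eq. (15) and the prefix condition for the four codes of Table 1.2.1.

def kraft_sum(code, D=2):
    """Left-hand side of the Kraft inequality, Eq. (15)."""
    return sum(D ** -len(word) for word in code)

def is_prefix_free(code):
    """True if no code word is a prefix of another (instantaneous code)."""
    return not any(a != b and b.startswith(a) for a in code for b in code)

codes = {"A": ["0", "1", "00", "11"],
         "B": ["0", "10", "110", "111"],
         "C": ["0", "01", "011", "0111"],
         "D": ["0", "10", "110", "1110"]}

for name, code in codes.items():
    print(name, kraft_sum(code), is_prefix_free(code))
# A: Kraft sum 1.5 > 1, so no uniquely decipherable code has these lengths
# B: sum 1.0 and prefix-free; C: sum 0.9375, decipherable but not prefix-free
# D: sum 0.9375 and prefix-free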

NOISELESS-CODING THEOREM

It follows from Eq. (15) that the average code length L, given by Eq. (13), satisfies the inequality

$$L \ge \frac{H(X)}{\log D} \tag{16}$$

Equality (and minimum code length) occurs if and only if the source-symbol probabilities obey

$$p(x_i) = D^{-n_i} \qquad i = 1, 2, \ldots, M \tag{17}$$

A code where this equality applies is called absolutely optimal. Since an integer number of code elements must be used for each code word, the equality in Eq. (16) does not usually hold; however, by using one more code element, the average code length L can be bounded from above to give

$$\frac{H(X)}{\log D} \le L \le \frac{H(X)}{\log D} + 1 \tag{18}$$

This last relationship is frequently called the noiseless-coding theorem.


CONSTRUCTION OF NOISELESS CODES

The easiest case to consider occurs when an absolutely optimal code exists, i.e., when the source-symbol probabilities satisfy Eq. (17). Note that code B of Table 1.2.1 is absolutely optimal if p(x1) = 1/2, p(x2) = 1/4, and p(x3) = p(x4) = 1/8. In such cases, a procedure for realizing the code for arbitrary code-alphabet size (D ≥ 2) is easily constructed as follows:

1. Arrange the M source symbols in order of decreasing probability.
2. Arrange the D code elements in an arbitrary but fixed order, i.e., a1, a2, . . . , aD.
3. Divide the set of symbols xi into D groups with equal probabilities of 1/D each. This division is always possible if Eq. (17) is satisfied.
4. Assign the element a1 as the first digit for symbols in the first group, a2 for the second, and ai for the ith group.
5. After the first division each of the resulting groups contains a number of symbols equal to D raised to some integral power if Eq. (17) is satisfied. Thus, a typical group, say group i, contains D^ki symbols, where ki is an integer (which may be zero). This group of symbols can be further subdivided ki times into D parts of equal probabilities. Each division decides one additional code digit in the sequence.

A typical symbol xi is isolated after q divisions. If it belongs to the i1 group after the first division, the i2 group after the second division, and so forth, then the code word for xi will be ai1 ai2 . . . aiq. An illustration of the construction of an absolutely optimal code for the case where D = 3 is given in Table 1.2.2. This procedure ensures that source symbols with high probabilities will have short code words and vice versa, since a symbol with probability D^(-ni) will be isolated after ni divisions and thus will have ni elements in its code word, as required by Eq. (17).

TABLE 1.2.2 Construction of an Optimal Code; D = 3

Source        A priori
symbols xi    probabilities p(xi)    Step 1    Step 2    Step 3    Final code
x1            1/3                      1                              1
x2            1/9                      0         1                    0  1
x3            1/9                      0         0                    0  0
x4            1/9                      0        -1                    0 -1
x5            1/27                    -1         1         1         -1  1  1
x6            1/27                    -1         1         0         -1  1  0
x7            1/27                    -1         1        -1         -1  1 -1
x8            1/27                    -1         0         1         -1  0  1
x9            1/27                    -1         0         0         -1  0  0
x10           1/27                    -1         0        -1         -1  0 -1
x11           1/27                    -1        -1         1         -1 -1  1
x12           1/27                    -1        -1         0         -1 -1  0
x13           1/27                    -1        -1        -1         -1 -1 -1

Note: Average code length L = 2 code elements per symbol; source entropy H(X) = 2 log2 3 bits per symbol, so that L = H(X)/log2 3.
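A short check of this table (added here as an illustration): compute L from Eq. (13) and H(X) from Eq. (4), and confirm that the code is absolutely optimal, L = H(X)/log2 D.

import math

probs   = [1/3] + [1/9]*3 + [1/27]*9     # source probabilities of Table 1.2.2
lengths = [1]   + [2]*3   + [3]*9        # code-word lengths from the table

L   = sum(n * p for n, p in zip(lengths, probs))   # Eq. (13): average length
H_X = -sum(p * math.log2(p) for p in probs)        # Eq. (4): entropy in bits

print(L)                     # 2.0 code elements per symbol
print(H_X / math.log2(3))    # also 2.0, the lower bound of Eq. (16) with D = 3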


TABLE 1.2.3 Construction of Huffman Code; D = 2

The code resulting from the process just discussed is sometimes called the Shannon-Fano code. It is apparent that the same encoding procedure can be followed whether or not the source probabilities satisfy Eq. (17). The set of symbols xi is simply divided into D groups with probabilities as nearly equal as possible. The procedure is sometimes ambiguous, however, and more than one Shannon-Fano code may be possible. The ambiguity arises, of course, in the choice of approximately equiprobable subgroups.
For the general case where Eq. (17) is not satisfied, a procedure due to Huffman guarantees an optimal code, i.e., one with minimum average length. This procedure for a code alphabet of arbitrary size D is as follows:

1. As before, arrange the M source symbols in order of decreasing probability.
2. As before, arrange the code elements in an arbitrary but fixed order, that is, a1, a2, . . . , aD.
3. Combine (sum) the probabilities of the D least likely symbols and reorder the resulting M − (D − 1) probabilities; this step will be called reduction 1. Repeat as often as necessary until there are D ordered probabilities remaining. Note: For the binary case (D = 2), it will always be possible to accomplish this reduction in M − 2 steps. When the size of the code alphabet is arbitrary, the last reduction will result in exactly D ordered probabilities if and only if M = D + n(D − 1), where n is an integer. If this relationship is not satisfied, dummy source symbols with zero probability should be added. The entire encoding procedure is followed as before, and at the end the dummy symbols are thrown away.
4. Start the encoding with the last reduction, which consists of exactly D ordered probabilities; assign the element a1 as the first digit in the code words for all the source symbols associated with the first probability; assign a2 to the second probability; and ai to the ith probability.
5. Proceed to the next to the last reduction; this reduction consists of D + (D − 1) ordered probabilities, for a net gain of D − 1 probabilities. For the D new probabilities, the first code digit has already been assigned and is the same for all of these D probabilities; assign a1 as the second digit for all source symbols associated with the first of these D new probabilities; assign a2 as the second digit for the second of these D new probabilities, etc.
6. The encoding procedure terminates after 1 + n(D − 1) steps, which is one more than the number of reductions.

As an illustration of the Huffman coding procedure, a binary code is constructed in Table 1.2.3; a programmed sketch of the binary procedure follows.
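The following Python sketch (an added illustration with an arbitrary five-symbol source; D = 2) implements the binary Huffman reduction with a priority queue and checks the result against the noiseless-coding theorem, Eq. (18).

import heapq, math

def huffman(probs):
    """Binary Huffman code (D = 2): repeatedly merge the two least
    likely probabilities, then read code digits back off the tree."""
    count = len(probs)
    heap = [(p, i, None, None) for i, p in enumerate(probs)]   # leaves
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)          # the two least likely entries
        b = heapq.heappop(heap)
        heapq.heappush(heap, (a[0] + b[0], count, a, b))       # reduction
        count += 1
    codes = {}
    def walk(node, prefix):
        p, idx, left, right = node
        if left is None:
            codes[idx] = prefix or "0"   # lone-symbol edge case
        else:
            walk(left, prefix + "0")     # code element a1
            walk(right, prefix + "1")    # code element a2
    walk(heap[0], "")
    return [codes[i] for i in range(len(probs))]

probs = [0.4, 0.2, 0.2, 0.1, 0.1]        # an arbitrary 5-symbol source
code  = huffman(probs)
L = sum(p * len(w) for p, w in zip(probs, code))   # Eq. (13)
H = -sum(p * math.log2(p) for p in probs)          # Eq. (4)
print(code)      # ['11', '00', '01', '100', '101'] for this source
print(H, L)      # H(X) <= L < H(X) + 1, the bound of Eq. (18)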

CHANNEL CAPACITY

The average mutual information I(X; Y) between an information source and a destination was given by Eqs. (25) and (26) as

$$I(X;Y) = H(Y) - H(Y \mid X) = H(X) - H(X \mid Y) \ge 0 \tag{19}$$


The average mutual information depends not only on the statistical characteristics of the channel but also on the distribution p(xi) of the input alphabet X. If the input distribution is varied until Eq. (19) is a maximum for a given channel, the resulting value of I(X; Y) is called the channel capacity C of that channel; i.e.,

$$C = \max_{p(x_i)} I(X;Y) \tag{20}$$

In general, H(X), H(Y), H(X|Y), and H(Y|X) all depend on the input distribution p(xi). Hence, in the general case, it is not a simple matter to maximize Eq. (19) with respect to p(xi).
All the measures of information that have been considered in this treatment have involved only probability distributions on X and Y. Thus, for the model of Fig. 1.1.1, the joint distribution p(xi, yj) is sufficient. Suppose the source [and hence the input distribution p(xi)] is known; then it follows from the usual conditional-probability relationship

$$p(x_i, y_j) = p(x_i)\,p(y_j \mid x_i) \tag{21}$$

that only the distribution p(yj|xi) is needed for p(xi, yj) to be determined. This conditional probability p(yj|xi) can then be taken as a description of the information channel connecting the source X and the destination Y. Thus, a discrete memoryless channel can be defined as the probability distribution

$$p(y_j \mid x_i) \qquad x_i \in X,\; y_j \in Y \tag{22}$$

or, equivalently, by the channel matrix D, where

$$D = [p(y_j \mid x_i)] = \begin{bmatrix} p(y_1 \mid x_1) & p(y_2 \mid x_1) & \cdots & p(y_N \mid x_1) \\ p(y_1 \mid x_2) & p(y_2 \mid x_2) & \cdots & p(y_N \mid x_2) \\ \vdots & \vdots & & \vdots \\ p(y_1 \mid x_M) & p(y_2 \mid x_M) & \cdots & p(y_N \mid x_M) \end{bmatrix} \tag{23}$$

A number of special types of channels are readily distinguished. Some of the simplest and/or most interesting are listed as follows:

(a) Lossless Channel. Here H(X|Y) = 0 for all input distributions p(xi), and Eq. (20) becomes

$$C = \max_{p(x_i)} H(X) = \log M \tag{24}$$

This maximum is obtained when the xi are equally likely, so that p(xi) = 1/M for all i. The channel capacity is equal to the source entropy, and no source information is lost in transmission.

(b) Deterministic Channel. Here H(Y|X) = 0 for all input distributions p(xi), and Eq. (20) becomes

$$C = \max_{p(x_i)} H(Y) = \log N \tag{25}$$

This maximum is obtained when the yj are equally likely, so that p(yj) = 1/N for all j. Each member of the X set is uniquely associated with one, and only one, member of the destination alphabet Y.

(c) Symmetric Channel. Here the rows of the channel matrix D are identical except for permutations, and the columns are identical except for permutations. If D is square, rows and columns are identical except for permutations. In the symmetric channel, the conditional entropy H(Y|X) is independent of the input distribution p(xi) and depends only on the channel matrix D. As a consequence, the determination of channel capacity is greatly simplified and can be written

$$C = \log N + \sum_{j=1}^{N} p(y_j \mid x_i)\log p(y_j \mid x_i) \tag{26}$$


This capacity is obtained when the yj are equally likely, so that p(yj) = 1/N for all j.

(d) Binary Symmetric Channel (BSC). This is the special case of a symmetric channel where M = N = 2. Here the channel matrix can be written

$$D = \begin{bmatrix} p & 1-p \\ 1-p & p \end{bmatrix} \tag{27}$$

and the channel capacity is

$$C = \log 2 - G(p) \tag{28}$$

where the function G(p) is defined as

$$G(p) = -[p\log p + (1-p)\log(1-p)] \tag{29}$$

This expression is mathematically identical to the entropy of a binary source as given in Eq. (5) and is plotted in Fig. 1.1.3 using base 2 logarithms. For the same base, Eq. (28) is shown as a function of p in Fig. 1.2.1.

FIGURE 1.2.1 Capacity of the binary symmetric channel.

As expected, the channel capacity is large if p, the probability of correct transmission, is either close to unity or to zero. If p = 1/2, there is no statistical evidence which symbol was sent, and the channel capacity is zero.
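A direct numerical check of Eqs. (28) and (29), added here as an illustration:

import math

def G(p):
    """Binary entropy function of Eq. (29), base-2 logs; G(0) = G(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def bsc_capacity(p):
    """Capacity of the binary symmetric channel, Eq. (28), in bits."""
    return 1.0 - G(p)            # log2(2) = 1

for p in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(p, bsc_capacity(p))    # zero at p = 0.5; 1 bit at p = 0 or p = 1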

DECISION SCHEMES

A decision scheme or decoding scheme B is a partitioning of the Y set into M disjoint and exhaustive sets B1, B2, . . . , BM such that when a destination symbol yk falls into set Bi, it is decided that symbol xi was sent. Implicit in this definition is a decision rule d(yj), which is a function specifying uniquely a source symbol for each destination symbol. Let p(e|yj) be the probability of error when it is decided that yj has been received. Then the total error probability p(e) is

$$p(e) = \sum_{j=1}^{N} p(y_j)\,p(e \mid y_j) \tag{30}$$

For a given decision scheme B, the conditional error probability p(e|yj) can be written

$$p(e \mid y_j) = 1 - p[d(y_j) \mid y_j] \tag{31}$$

where p[d(yj)|yj] is the conditional probability p(xi|yj) with xi assigned by the decision rule; i.e., for a given decision scheme, d(yj) = xi. The probability p(yj) is determined only by the source a priori probability p(xi) and by the channel matrix D = [p(yj|xi)]. Hence, only the term p(e|yj) in Eq. (30) is a function of the decision scheme. Since Eq. (30) is a sum of nonnegative terms, the error probability is a minimum when each summand is a minimum. Thus, the term p(e|yj) should be a minimum for each yj. It follows from Eq. (31) that the minimum-error scheme is the scheme that assigns a decision rule

$$d(y_j) = x^* \qquad j = 1, 2, \ldots, N \tag{32}$$

where x* is defined by

$$p(x^* \mid y_j) \ge p(x_i \mid y_j) \qquad i = 1, 2, \ldots, M \tag{33}$$

In other words, each yj is decoded as the a posteriori most likely xi. This scheme, which minimizes the probability of error p(e), is usually called the ideal observer.


The ideal observer is not always a completely satisfactory decision scheme. It suffers from two major disadvantages: (1) For a given channel D, the scheme is defined only for a given input distribution p(xi). It might be preferable to have a scheme that was insensitive to input distributions. (2) The scheme minimizes average error but does not bound certain errors. For example, some symbols may always be received incorrectly. Despite these disadvantages, the ideal observer is a straightforward scheme which does minimize average error. It is also widely used as a standard with which other decision schemes may be compared.
Consider the special case where the input distribution is p(xi) = 1/M for all i, so that all xi are equally likely. Now the conditional likelihood p(xi|yj) is

$$p(x_i \mid y_j) = \frac{p(x_i)\,p(y_j \mid x_i)}{p(y_j)} = \frac{p(y_j \mid x_i)}{M\,p(y_j)} \tag{34}$$

For a given yj, that input xi is chosen which makes p(yj|xi) a maximum, and the decision rule is

$$d(y_j) = x^\dagger \qquad j = 1, 2, \ldots, N \tag{35}$$

where x† is defined by

$$p(y_j \mid x^\dagger) \ge p(y_j \mid x_i) \qquad i = 1, 2, \ldots, M \tag{36}$$

The probability of error becomes

$$p(e) = \sum_{j=1}^{N} p(y_j)\left[1 - \frac{p(y_j \mid x^\dagger)}{M\,p(y_j)}\right] \tag{37}$$

This decoder is sometimes called the maximum-likelihood decoder or decision scheme.
It would appear that a relationship should exist between the error probability p(e) and the channel capacity C. One such relationship is the Fano bound, given by

$$H(X \mid Y) \le G[p(e)] + p(e)\log(M-1) \tag{38}$$

and relating error probability to channel capacity through Eq. (20). Here G(⋅) is the function already defined by Eq. (29). The three terms in Eq. (38) can be interpreted as follows:
H(X|Y) is the equivocation. It is the average additional information needed at the destination after reception to completely determine the symbol that was sent.
G[p(e)] is the entropy of the binary system with probabilities p(e) and 1 − p(e). In other words, it is the average amount of information needed to determine whether the decision rule resulted in an error.
log (M − 1) is the maximum amount of information needed to determine which among the remaining M − 1 symbols was sent if the decision rule was incorrect; this information is needed with probability p(e).
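The following Python sketch (an added illustration with an arbitrary channel matrix) implements the ideal-observer rule of Eq. (33), computes p(e) from Eqs. (30) and (31), and verifies the Fano bound of Eq. (38). With equally likely inputs the rule coincides with the maximum-likelihood decoder of Eq. (36).

import math

# channel matrix D[i][j] = p(y_j | x_i) for M = N = 3 (illustrative values)
D = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
M = N = 3
p_x = [1 / M] * M                      # equally likely inputs

p_y  = [sum(p_x[i] * D[i][j] for i in range(M)) for j in range(N)]
post = [[p_x[i] * D[i][j] / p_y[j] for i in range(M)] for j in range(N)]

# ideal observer, Eq. (33): decode each y_j as the a posteriori most likely x
rule = [max(range(M), key=lambda i: post[j][i]) for j in range(N)]
p_e  = sum(p_y[j] * (1 - post[j][rule[j]]) for j in range(N))  # Eqs. (30), (31)

# Fano bound, Eq. (38): H(X|Y) <= G[p(e)] + p(e) log2(M - 1)
H_XgY = -sum(p_y[j] * post[j][i] * math.log2(post[j][i])
             for j in range(N) for i in range(M) if post[j][i] > 0)
G = -(p_e * math.log2(p_e) + (1 - p_e) * math.log2(1 - p_e))
print(p_e)                                          # 0.2
print(H_XgY <= G + p_e * math.log2(M - 1) + 1e-12)  # True (equality here)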

THE NOISY-CODING THEOREM

The concept of channel capacity was discussed earlier. Capacity is a fundamental property of an information channel in the sense that it is possible to transmit information through the channel at any rate less than the channel capacity with arbitrarily small probability of error. This result is called the noisy-coding theorem or Shannon's fundamental theorem for a noisy channel.
The noisy-coding theorem can be stated more precisely as follows: Consider a discrete memoryless channel with nonzero capacity C; fix two numbers H and ε such that 0 < H < C and ε > 0. Then, for sufficiently large code-word length n, there exist codes of rate at least H, together with associated decision schemes, such that the probability of a decoding error is less than ε.

DESIGN OF DIGITAL SYSTEMS USING CAD TOOLS

-- ******************************************************************
-- Input  : LSreg, MSreg -> binary representation of decimal value
--          input valid only between 0-9 but can handle
--          15-0 but anything above 9 will default to zero
-- Output : LS7seg -> seven segment display decimal representation of
--          Least Significant Digit of counter
--          MS7seg -> seven segment display decimal representation of
--          Most Significant Digit of Counter
-- Chip   : 10K70RC240-4
-- Board  : Altera University Program Development Board UP2
--
-- NOTE   : Must Compile These Files and have them
--          in the same folder
--          1) counter.vhd
--          2) clkdivider.vhd
--          3) sevensegdisplay.vhd
--
--          This program joins all of the
--          subprograms compiled before
--
--          I/O pin numbers can be predefined in the .acf file
--          I/O pin number verification can be seen in the .rpt file
--
-- ******************************************************************
-- Include Libraries
LIBRARY IEEE;
USE IEEE.STD_LOGIC_1164.all;

-- ******************************************************************
-- Declaration of INPUT & OUTPUT variables
-- ENTITY must have a reference which is the same as the file name
ENTITY completesystem IS
PORT( clk          : IN  std_logic;
      LSreg        : IN  std_logic_vector(3 DOWNTO 0);
      MSreg        : IN  std_logic_vector(3 DOWNTO 0);
      Reset_Switch : IN  std_logic;
      LS7seg       : OUT std_logic_vector(6 DOWNTO 0);
      MS7seg       : OUT std_logic_vector(6 DOWNTO 0);
      DP7seg       : OUT std_logic_vector(1 DOWNTO 0);
      Buzer        : OUT std_logic);
END completesystem;
-- ******************************************************************

FIGURE 4.1.10 VHDL code for the complete system component that links all the other components together.


-- ******************************************************************
-- The actual body of the program
ARCHITECTURE arch OF completesystem IS

-- component declaration
-- files which are interconnected to make up the entire file
-- include:
--    file name
--    inputs
--    outputs
COMPONENT clkdivider
PORT( clk_high : IN  STD_LOGIC;
      clk_low  : OUT STD_LOGIC);
END COMPONENT;

COMPONENT counter
PORT( clk         : IN  STD_LOGIC;
      update      : IN  STD_LOGIC;
      LSDigit     : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
      MSDigit     : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
      Reset_Flag  : IN  STD_LOGIC;
      LSDigit_out : OUT STD_LOGIC_VECTOR(3 DOWNTO 0);
      MSDigit_out : OUT STD_LOGIC_VECTOR(3 DOWNTO 0);
      Buzer       : OUT STD_LOGIC);
END COMPONENT;

COMPONENT sevensegdisplay
PORT( LSinput   : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
      MSinput   : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
      LSdisplay : OUT STD_LOGIC_VECTOR(6 DOWNTO 0);
      MSdisplay : OUT STD_LOGIC_VECTOR(6 DOWNTO 0);
      DPdisplay : OUT STD_LOGIC_VECTOR(1 DOWNTO 0));
END COMPONENT;

-- interconnection signals
-- signals that connect outputs to inputs within the system
SIGNAL pulse    : STD_LOGIC;
SIGNAL BCD_low  : STD_LOGIC_VECTOR(3 DOWNTO 0);
SIGNAL BCD_high : STD_LOGIC_VECTOR(3 DOWNTO 0);

BEGIN
-- mapping the inputs and outputs to and from each subsystem
clkdivider_unit: clkdivider PORT MAP(clk_high=>clk, clk_low=>pulse);

counter_unit: counter PORT MAP(clk=>clk, update=>pulse,
    Reset_Flag=>Reset_Switch, LSDigit=>LSreg, MSDigit=>MSreg,
    LSDigit_out=>BCD_low, MSDigit_out=>BCD_high, Buzer=>Buzer);

sevensegdisplay_unit: sevensegdisplay PORT MAP(LSinput=>BCD_low,
    MSinput=>BCD_high, LSdisplay=>LS7seg, MSdisplay=>MS7seg,
    DPdisplay=>DP7seg);

END arch;
-- ******************************************************************

FIGURE 4.1.10 (Continued) VHDL code for the complete system component that links all the other components together.


EXAMPLE DESIGN

A complete example design will be shown in this section. The example will be the remaining components of the kitchen timer problem. The complete system code has already been shown in Fig. 4.1.10. The remainder of the components shall be shown in Figs. 4.1.11 to 4.1.13.

-- ******************************************************************
-- Program Name : clkdivider.vhd
-- File Name    : desktop\new folder\altera\kitchen timer
-- Programmed By: Brian Fast
-- Date         : April 2, 2004
-- Purpose      : Clock Divider
--                take a high frequency clock and output a
--                lower frequency cycle
-- Input        : currently configured for system clock (25.175 MHz)
--                can be changed by changing the constant values
--                freq_in = input frequency/(2*desired output frequency)
--                freq_in_switch = (input frequency/(2*desired output frequency))/2
-- Output       : currently configured for clock signal at (60 Hz)
-- Chip         : 10K70RC240-4
-- Board        : Altera University Program Development Board UP2
-- Software     : Altera Max+Plus v10.22
--
-- NOTE : This file will be used as a black box within
--        another file so the I/O pins will not be
--        set within this file. The I/O signals will
--        be interconnections set within the main program
--        which is where the I/O pins will be set
--
--   Num   Description                           By    Date
--   ----------------------------------------------------------
--   1.0   Began Initial Program                 BRF   4/2/2004
--         The program is working correctly.
--         The program takes an input frequency
--         and outputs a lower frequency,
--         which is dependent on the values set
--         in the constants freq_in & freq_in_switch;
--         these constants are dependent on the
--         input frequency and the desired
--         output frequency. The achievable output
--         frequency depends on the input frequency,
--         the size of the registers, and the
--         internal speed of the code.
--
--                      -----------
--   high frequency --->|  freq   |---> lower frequency
--   system clock       | divider |     new desired frequency
--   [clk_high]         -----------     [clk_low]
--
-- ******************************************************************

FIGURE 4.1.11 VHDL code for the clock divider component.
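As a quick cross-check of the constant formulas in this header (an added note, not from the handbook): with the 25.175-MHz UP2 system clock, the values 12587500 and 6293750 used in Fig. 4.1.11 correspond to a 1-Hz output rather than the 60 Hz stated in the header; for 60 Hz the formulas give approximately 209791 and 104895.

f_clk = 25_175_000          # UP2 system clock, Hz

def divider_constants(f_out):
    """freq_in and freq_in_switch per the header formulas."""
    freq_in = f_clk // (2 * f_out)
    freq_in_switch = freq_in // 2
    return freq_in, freq_in_switch

print(divider_constants(1))    # (12587500, 6293750), the values in Fig. 4.1.11
print(divider_constants(60))   # (209791, 104895), for a 60-Hz output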


-- Include Libraries
LIBRARY IEEE;
USE IEEE.STD_LOGIC_1164.all;
USE ieee.std_logic_unsigned.ALL;

-- ******************************************************************
-- Declaration of INPUT & OUTPUT variables
-- ENTITY must have a reference which is the same as the file name
ENTITY clkdivider IS
PORT( clk_high : IN  STD_LOGIC;
      clk_low  : OUT STD_LOGIC);
END clkdivider;
-- ******************************************************************

-- ******************************************************************
-- The actual body of the program
-- This routine is the meat of the frequency divider
ARCHITECTURE arch OF clkdivider IS
-- Set constant by the equation below
-- freq_in = (the input frequency/(2*desired frequency))
-- the max freq_in is dependent on the output frequency
CONSTANT freq_in : integer := 12587500;
-- Set constant by the equation below
-- freq_in_switch = (the input frequency/desired frequency)/2
CONSTANT freq_in_switch : integer := 6293750;
-- used for testing
--CONSTANT freq_in : integer := 34215;
--CONSTANT freq_in_switch : integer := 17107;

-- temporary registers used to keep track of how many input signals
-- have been input to the system
SIGNAL count_now  : std_logic_vector(27 DOWNTO 0);
SIGNAL count_next : std_logic_vector(27 DOWNTO 0);

BEGIN
PROCESS(clk_high)
BEGIN
    -- increments count_next each time the input signal goes high;
    -- keeps track of the number of cycles input by the system clock
    -- via clk_high
    IF (clk_high = '1' AND clk_high'EVENT) THEN
        -- NOTE: the original listing breaks off at this point; the
        -- lines below are a reconstruction of the count-and-roll-over
        -- scheme described in the header comments
        IF (count_next = freq_in) THEN
            count_next <= (OTHERS => '0');
        ELSE
            count_next <= count_next + 1;
        END IF;
    END IF;
END PROCESS;

count_now <= count_next;
-- clk_low is high for the first half of each divided period
clk_low <= '1' WHEN count_now < freq_in_switch ELSE '0';
END arch;
-- ******************************************************************

FIGURE 4.1.11 (Continued) VHDL code for the clock divider component.

-- ******************************************************************
--                             -----------
--   [clk] (1 bit)        ---->|         |---> [LSDigit_out] (4 bits)
--   [update] (1 bit)     ---->|         |---> [MSDigit_out] (4 bits)
--   [LSDigit] (4 bits)   ---->| counter |
--   [MSDigit] (4 bits)   ---->|         |---> [Buzer] (1 bit)
--   [Reset_Flag] (1 bit) ---->|         |
--                             -----------
--
--   Currently this program is decrementing a 2 digit
--   BCD counter from the value loaded down to
--   zero. The program begins when the Reset_Flag
--   is pulled high and then low. The program
--   continues to decrement the value until it
--   reaches zero zero. Then the value remains
--   zero zero until a new value is loaded.
--
-- ******************************************************************
-- Include Libraries
LIBRARY IEEE;
USE IEEE.STD_LOGIC_1164.all;
USE ieee.std_logic_unsigned.ALL;

-- ******************************************************************
-- Declaration of INPUT & OUTPUT variables
-- ENTITY must have a reference which is the same as the file name
ENTITY counter IS
PORT( clk         : IN  STD_LOGIC;
      update      : IN  STD_LOGIC;
      LSDigit     : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
      MSDigit     : IN  STD_LOGIC_VECTOR(3 DOWNTO 0);
      Reset_Flag  : IN  STD_LOGIC;
      LSDigit_out : OUT STD_LOGIC_VECTOR(3 DOWNTO 0);
      MSDigit_out : OUT STD_LOGIC_VECTOR(3 DOWNTO 0);
      Buzer       : OUT STD_LOGIC);
END counter;
-- ******************************************************************

-- ******************************************************************
-- The actual body of the program
ARCHITECTURE arch OF counter IS
SIGNAL LSDreg_now  : std_logic_vector(3 DOWNTO 0);
SIGNAL LSDreg_next : std_logic_vector(3 DOWNTO 0);
SIGNAL MSDreg_now  : std_logic_vector(3 DOWNTO 0);
SIGNAL MSDreg_next : std_logic_vector(3 DOWNTO 0);
SIGNAL Done_Flag   : std_logic;
BEGIN
----------------------------------------------------------------
-- Counter Routine
-- counts down the Least Significant Digit until it gets to zero
-- then decrements the Most Significant Digit
-- and resets the Least Significant Digit back to nine
-- until it gets to zero zero

FIGURE 4.1.12 (Continued) VHDL code for the counter component.


-- then the buzzer goes off
-- if the initial value set for the input is greater than 9
-- then the initial value will be set to 9
PROCESS(clk)
BEGIN
   IF (clk = '1' AND clk'EVENT) THEN
      -- checks to see if the value is greater than 9
      IF (MSDreg_now > 9 OR LSDreg_now > 9) THEN
         IF (MSDreg_now > 9) THEN
            MSDreg_next <= "1001";   -- clamp to 9; the "<=" targets
         END IF;                     -- were lost in extraction and
         IF (LSDreg_now > 9) THEN    -- are restored here as
            LSDreg_next <= "1001";   -- assumptions
         END IF;
      -- ... (the remainder of the counter listing did not survive
      -- extraction)

. . . G0G2. These circuits have the property of low sensitivity at the expense of two amplifiers.

SALLEN AND KEY NETWORKS

The circuits of Figs. 10.3.12, 10.3.13, and 10.3.14 are low-pass, high-pass, and bandpass circuits, respectively, having a positive gain K. Design of any of these circuits requires choice of suitable linear and quadratic denominator factors, transformation and frequency scaling, and coefficient matching. Since there are more elements

FIGURE 10.3.12 A low-pass active filter network with gain K greater than 0.

FIGURE 10.3.13 A high-pass active filter network with gain K greater than 0.


FIGURE 10.3.14 A bandpass active filter network with gain K greater than 0.

to be specified than there are constraints, two elements may be chosen arbitrarily. Often, K = 1 or K = 2 leads to a good network. For Fig. 10.3.12,

$$\frac{V_2}{V_1}(s) = \frac{K/(R_1 R_2 C_1 C_2)}{s^2 + \left[\frac{1}{R_1 C_1} + \frac{1}{R_2 C_1} + (1 - K)\frac{1}{R_2 C_2}\right] s + \frac{1}{R_1 R_2 C_1 C_2}} \tag{29}$$

For Fig. 10.3.13,

$$\frac{V_2}{V_1}(s) = \frac{K s^2}{s^2 + \left[(1 - K)\frac{1}{R_1 C_1} + \frac{1}{R_2 C_2} + \frac{1}{R_2 C_1}\right] s + \frac{1}{R_1 R_2 C_1 C_2}} \tag{30}$$

For Fig. 10.3.14,

$$\frac{V_2}{V_1}(s) = \frac{K s/(R_1 C_2)}{s^2 + \left[(1 - K)\frac{1}{R_3 C_2} + \frac{1}{R_1 C_1} + \frac{1}{R_2 C_1} + \frac{1}{R_1 C_2} + \frac{1}{R_2 C_2}\right] s + \left(\frac{1}{R_1} + \frac{1}{R_2}\right)\frac{1}{R_3 C_1 C_2}} \tag{31}$$

CHAIN NETWORK

Figure 10.3.15 shows a chain network that realizes low-pass functions and is easily designed. For this circuit,

$$\frac{V_2}{V_1}(s) = \frac{\omega_1 \omega_2 \omega_3 \cdots \omega_n}{s^n + \omega_1 s^{n-1} + \omega_1 \omega_2 s^{n-2} + \cdots + \omega_1 \omega_2 \omega_3 \cdots \omega_n} \tag{32}$$

where

$$\omega_i = 1/(R_i C_i) \tag{33}$$

As an example, consider a third-order Bessel filter, for which

$$\frac{V_2}{V_1}(s) = \frac{15}{s^3 + 6s^2 + 15s + 15} \tag{34}$$


FIGURE 10.3.15 An RC-unity-gain amplifier realization of an active low-pass filter.

Choose 1/R1C1 = 6, 6(1/R2C2) = 15, and 15(1/R3C3) = 15. If all C’s are set to 1.0, then R1 = 1/6, R2 = 2/5, and R3 = 1. Use frequency and impedance scaling as required.
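As a quick numeric check of Eqs. (32) to (34), the following Python sketch (not part of the original text) recovers the element values just quoted by peeling the successive wi off the Bessel denominator coefficients, assuming unity capacitors:

import math

# Denominator of Eq. (34): s^3 + 6 s^2 + 15 s + 15
w1 = 6.0                 # coefficient of s^(n-1) is w1
w2 = 15.0 / w1           # coefficient of s^(n-2) is w1*w2
w3 = 15.0 / (w1 * w2)    # constant term is w1*w2*w3

C = 1.0                                  # all C's set to 1.0
R1, R2, R3 = 1/(w1*C), 1/(w2*C), 1/(w3*C)  # wi = 1/(Ri*Ci), Eq. (33)
print(R1, R2, R3)        # -> 0.1667 (1/6), 0.4 (2/5), 1.0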

LEAPFROG FILTERS

Also called active ladders or multiple-feedback filters, these circuits use the tabulated element values from Table 10.1.10 to develop a set of active networks that have the sensitivity characteristics of passive ladders. The process may be extended from low-pass to bandpass networks using the transformation from prototype to bandpass filter discussed under "Bandpass Filters" and techniques that will be discussed in this paragraph. The term "leapfrog" was suggested by Girling and Good,28 the inventors, because of the topology. Figure 10.3.16a shows a conventional fourth-order low-pass prototype network, and Fig. 10.3.16b shows a block diagram of a simulation with the same equations, which follow, using Laplace notation. In writing these equations and preparing the block diagram, current terms have been multiplied by an arbitrary constant R so that all variables appear to have the dimensions of voltage. This simplifies the block diagram and later examples.

$$RI_1 = (V_1 - V_3)\,\frac{R}{R_1 + sL_1} \qquad V_3 = (RI_1 - RI_3)\,\frac{1}{sC_2 R}$$

FIGURE 10.3.16 (a) Low-pass prototype ladder filter; (b) block-diagram simulation.


$$RI_3 = (V_3 - V_2)\,\frac{R}{sL_3} \qquad V_2 = RI_3\,\frac{1}{R(1/R_4 + sC_4)} \tag{35}$$

Though shown for a specific case, the technique is general and may be extended for any order of ladder network. In the simulation it should be noted that the algebraic signs of the blocks alternate. This variation is important in the realization. In Fig. 10.3.16b, the currents are simulated by voltages, and this suggests the use of operational amplifiers as realization elements. The blocks in the simulation require integrations, which are readily achieved with operational amplifiers, resistors, and capacitors. Figure 10.3.17 shows suitable combinations that will realize integrators, both inverting and noninverting, and also lossy integrators, which have a pole in the left half plane, rather than at the origin. Bruton25 shows that, for integration, the circuit of Fig. 10.3.17b has superior performance compared with Fig. 10.3.17a when the imperfections of nonideal operational amplifiers are considered.

FIGURE 10.3.17 Building blocks for leapfrog filters: (a) inverting integrator, for which V2/V1 = −R4/sC2R1R3; (b) noninverting integrator, for which V2/V1 = R4/sC2R1R3; (c) lossy, summing integrator to realize (V1 − V3)R/(R1 + sL1) = −RI1; (d) lossy, noninverting integrator to realize V2/RI4 = 1/R(1/R4 + sC4).


FIGURE 10.3.18 Two arrangements of a leapfrog low-pass circuit: (a) block-diagram arrangement; (b) ladder arrangement.

In Fig. 10.3.18, the combination of these blocks into a circuit is shown. In the preparation of this drawing, the integrator of Fig. 10.3.17b has been used, and the drawing is given in two forms. Figure 10.3.18a follows from the simulation, while Fig. 10.3.18b is a rearrangement that emphasizes the ladder structure. The design of a low-pass leapfrog ladder may be summarized in these steps (a numeric sketch follows the list):

1. Select a normalized low-pass filter from Table 10.1.10.
2. Identify the integrations represented by inductors, capacitors, and series resistor-inductor or parallel resistor-capacitor combinations. For each, determine an appropriate block diagram.
3. Connect together, using inverters, summers, and gain adjustment as needed.
4. Apply techniques of frequency and magnitude scaling to achieve a practical circuit.
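The Eq. (35) relations can be solved directly at each frequency to verify a leapfrog simulation before building it. The Python sketch below does this for the fourth-order ladder of Fig. 10.3.16a; the doubly terminated Butterworth element values are assumed stand-ins for a Table 10.1.10 entry, and R has been set to 1.0.

import numpy as np

# Assumed elements: doubly terminated 4th-order Butterworth ladder
R1, L1, C2, L3, C4, R4 = 1.0, 0.7654, 1.8478, 1.8478, 0.7654, 1.0

def ladder_gain(w, V1=1.0):
    s = 1j * w
    # unknowns x = [I1, V3, I3, V2], from the Eq. (35) relations (R = 1)
    A = np.array([
        [R1 + s*L1, 1.0,   0.0,   0.0],            # (R1+sL1)I1 + V3 = V1
        [-1.0,      s*C2,  1.0,   0.0],            # sC2*V3 = I1 - I3
        [0.0,      -1.0,   s*L3,  1.0],            # sL3*I3 = V3 - V2
        [0.0,       0.0,  -1.0,   1.0/R4 + s*C4],  # (1/R4+sC4)V2 = I3
    ])
    b = np.array([V1, 0.0, 0.0, 0.0])
    return abs(np.linalg.solve(A, b)[3])

for w in (0.1, 1.0, 10.0):
    print(w, ladder_gain(w))   # ~0.5 in band, 0.5/sqrt(2) at w = 1,
                               # then an 80-dB/decade rolloff

Because the simulated variables obey the same equations as the passive ladder, the computed response is identical to that of Fig. 10.3.16a, which is the point of the leapfrog technique.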

BANDPASS LEAPFROG CIRCUITS

The technique of the previous section may be extended to bandpass circuits. The basic idea follows from the low-pass to bandpass transformation introduced in "Bandpass Filters" and from the recognition that it is possible to build second-order resonators from operational amplifiers, capacitors, and resistors. When the transformation is applied to an inductor, the circuit of Fig. 10.3.19a results. The resistor is added to allow for losses or source/load resistors that may be present. Similarly, the transformation applied to a


capacitor yields Fig. 10.3.19b. It is to be noted that the forms of the equations are identical, and since the leapfrog technique makes use of simulation, the realizations will be similar. The necessary equations are given by Eq. (36). Figure 10.3.20 shows an active simulation of a resonant circuit. This circuit is similar to that of Fig. 10.3.8, though the first two stages are interchanged. The new circuit has the advantage that both inverted and noninverted resonant outputs are available. Further, the input allows for summing operations, which may be needed in the leapfrog realization. Appropriate equations are given by Eq. (37).

$$Y_s(s) = \frac{(1/L_s)s}{s^2 + (R_s/L_s)s + 1/(L_s C_s)} \qquad Z_p(s) = \frac{(1/C_p)s}{s^2 + (1/(R_p C_p))s + 1/(L_p C_p)} \tag{36}$$

$$V_{o2}(s) = -V_{o1}(s) = \frac{(1/(R_3 C_1))s}{s^2 + (1/(R_1 C_1))s + 1/(R_2 R_4 C_1 C_2)}\,(V_{11} + V_{12})(s) \tag{37}$$

FIGURE 10.3.19 Passive resonant circuits: (a) series resonator; (b) parallel resonator.

The implementation of this circuit is substantially the same as that of Fig. 10.3.18, with the resonators being used as the blocks of the simulation. Since both inverted and noninverted signals are available, the one needed is chosen. Table 10.3.2 gives the parameters of the active resonators in terms of the transformed series or parallel resonant circuits. Frequency and magnitude scaling may be done either before or after the elements of the active resonator are determined, but it generally is more convenient to do it afterward. An example is given to show this technique. It begins with a third-order Butterworth normalized low-pass filter, Fig. 10.3.21a, which has been taken from Table 10.1.10. The circuit is transformed to a bandpass network

FIGURE 10.3.20 Active resonator.


TABLE 10.3.2 Resonator Design Relationships*

Circuit parameters from Fig. 10.3.20 | Series circuit prototype values from Fig. 10.3.19a | Parallel circuit prototype values from Fig. 10.3.19b
R2 = R4 = R                          | (1/C)√(LsCs)                                        | (1/C)√(LpCp)
R1                                   | R√(Ls/Cs)                                           | R√(Cp/Lp)
R3                                   | R√(Ls/Cs)                                           | R√(Cp/Lp)
R5                                   | Choose any convenient value                         | Choose any convenient value

*In this table it is presumed that C1 = C2 = C and that this is chosen to be some convenient value. It is further presumed that R2 = R4.

for which the center frequency w0 is 1.0 rad/s and the bandwidth b′ is 0.45 rad/s, corresponding to upper and lower half-power frequencies of 1.25 and 0.80 rad/s. This result is shown as Fig. 10.3.21b. For the first and third stages of the circuit, application of the equations from Table 10.3.2 shows that R1 = R3 = 20/9 Ω, and that all other components have unit value. For the second stage, R1 is infinite, as indicated by the table, and R3 = 9/40 Ω. Other components have unit value. The complete circuit is shown as Fig. 10.3.21c. This circuit has been left in normalized form. Impedance and frequency denormalization techniques must be used to achieve reasonable values.
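The arithmetic of this example can be checked with a few lines of Python. The bandpass element values follow from the transformation applied to the third-order Butterworth prototype (shunt C = 1, series L = 2, shunt C = 1, assumed here from Table 10.1.10), and R1 for the first and third (parallel-resonator) stages follows from the Table 10.3.2 relationships as reconstructed above:

import math

w0, b = 1.0, 0.45      # center frequency and bandwidth, rad/s
C = 1.0                # resonator capacitors C1 = C2 = C

# Low-pass -> bandpass: shunt C -> parallel resonator, series L -> series resonator
Cp, Lp = 1.0 / b, b / w0**2          # Cp = 20/9,  Lp = 0.45
Ls, Cs = 2.0 / b, b / (2.0 * w0**2)  # Ls = 40/9,  Cs = 9/40

R  = math.sqrt(Lp * Cp) / C          # R2 = R4 = R -> 1.0 (unit value)
R1 = R * math.sqrt(Cp / Lp)          # first/third-stage damping -> 20/9
print(Cp, Lp, Ls, Cs, R, R1)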

FIGURE 10.3.21 Leapfrog active resonator realization: (a) low-pass prototype; (b) bandpass transformation, ω0 = 1.0, b′ = 0.45; (c) complete circuit.


CHAPTER 10.4

SWITCHED CAPACITOR FILTERS Edwin C. Jones, Jr., Harry W. Hale

Switched capacitor filters, also known as analog sampled data filters, result from a technology that builds on the passive network theory of Darlington and implements the circuits in active-network integrated-circuit form. Essentially, the switched capacitor replaces the resistor in operational-amplifier circuits, including the resonator. Early work was reported by Allstot, Broderson, Fried, Gray, Hosticka, Jacobs, Kuntz, and others. Huelsman12 and Van Valkenburg16 give additional information.

Consider the circuit of Fig. 10.4.1a and the two-phase clock signal of Fig. 10.4.1b. The circuit has two MOS switches and a capacitor C. The clock cycles the MOS switches between their low- and high-resistance states. In the analysis that follows, it is assumed that the clock speed is sufficiently high that a simplified analysis is valid. It is also assumed that the Nyquist sampling theorem is satisfied. It is possible to use discrete circuit analysis and z transforms if some of the approximations are not met. Let the clock be such that switch A is closed and B is open. This may be modeled by Fig. 10.4.1c. The source will charge the capacitor to V1. When the clock cycles so that B is closed and A is open, the capacitor will discharge toward V2, transferring a charge

$$q_C = C(V_1 - V_2)$$

This transfer requires a time TC = 1/fC, yielding an average current iav = C(V1 − V2)/TC, corresponding to a resistor Req = (V1 − V2)/iav, or

$$R_{eq} = T_C/C = 1/(C f_C) \tag{1}$$
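For a feel for the magnitudes involved, Eq. (1) is evaluated below in a short illustrative Python helper (the component values are assumed):

def switched_cap_req(C, f_clock):
    """Equivalent resistance of a switched capacitor, Eq. (1): Req = 1/(C*fC)."""
    return 1.0 / (C * f_clock)

# A 1-pF capacitor clocked at 100 kHz emulates a 10-megohm resistor,
# a value that would be impractical to integrate directly.
print(switched_cap_req(1e-12, 100e3))   # -> 1e7 ohms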

Figure 10.4.2 shows a conventional integrator, a damped integrator, and several sum and difference integrators, along with realizations and transfer functions of circuits implemented with switched capacitors. It is noted that the transfer functions are functions of the ratios of two capacitors, and this fact makes them useful. It is possible in integrated-circuit design to realize the ratio of two capacitors with high precision, leading to accurate designs of filters. With similar techniques it is possible to realize many of the second-order circuits given in earlier sections. Possibly the most important application is in the realization of leapfrog filters. As discussed in the previous section, leapfrog filters use integrators to realize the resonators that are the basic building blocks. In this technology, the resistors of the resonators are replaced by switched capacitors. In essence, the technique is to realize the circuit with resonators, as was done in Fig. 10.3.21, and then to replace the resistors with switched capacitors. Though slightly less complex, the technique for low-pass filters is similar. Consider the low-pass prototype filter of Fig. 10.4.3a. A simulation is shown in Fig. 10.4.3b. This simulation has equations that are identical with those of the prototype. While the simulation is similar to that of Fig. 10.3.16, two important differences may be noted. The first is that the termination resistors are separate,


FIGURE 10.4.2 Integrators with switched capacitor realizations: (a) conventional integrator; (b) damped or lossy integrator; (c) summing integrator; (d) difference integrator.

rather than being incorporated with the input and output elements. The second is that all the elements have positive signs, and the amplifiers used are different types. This is more convenient. Figure 10.4.3c shows a switched capacitor equivalent for the low-pass filter. The equations for the simulation and for the development of the switched capacitor version follow.

$$RI_3 = (V_1 - V_4)\frac{R}{R_1} \qquad V_4 = I_5\frac{1}{sC_3} = (I_3 - I_6)R\,\frac{1}{sC_3 R}$$

$$RI_6 = (V_4 - V_6)\frac{R}{sL_4} \qquad V_2 = R(I_6 - I_8)\frac{1}{sC_5 R} \qquad RI_8 = V_2\frac{R}{R_2} \tag{2}$$

As was done previously, the current equations have been nominally multiplied by R so that all terms appear to be voltages. In practice, R may be set to 1.0 as scaling will take care of it.


FIGURE 10.4.1 Development of equivalent circuit for a switched capacitor: (a) double MOS switch; (b) two-phase clock; (c) switch A closed; (d) switch B closed; (e) representation; ( f ) double-pole double-throw switch; and (g) representation.

FIGURE 10.4.3 Low-pass switched capacitor filter development; (a) low-pass prototype and definition of equation symbols; C3, L4, and C5 would be obtained from Table 10.1.10; (b) simulation of low-pass prototype; R may be set to 1.0; (c) switched capacitor equivalent network.


The next step is to determine the capacitor ratios in the final simulation. From Fig. 10.4.2d it may be seen that a typical integrator term is given by

$$V_2(s) = \frac{f_C C_1}{sC_2}\,[V_1(s) - V_0(s)]$$

Similar results are obtained for the remaining integrators. The prototype values were obtained from Table 10.1.10, including C3, L4, and C5 for this circuit. The similarity of terms then suggests that

$$C_3 = \frac{1}{f_C}\,\frac{C_{23}}{C_{13}} \qquad C_5 = \frac{1}{f_C}\,\frac{C_{25}}{C_{15}} \tag{3}$$

Extension to the inductors shows that

$$L_4 = \frac{1}{f_C}\,\frac{C_{24}}{C_{14}} \tag{4}$$

As used here, C3, L4, and C5 are prototype values, but they may be magnitude- and frequency-scaled as desired to achieve realistic values. In Eqs. (3) and (4), the ratios C2/C1 are computed after the clock speed is known, and the second subscripts on both numerator and denominator denote which prototype element the ratio has been derived from. In general they will differ for each element. In design, it is likely that one of these would be fixed and have a common value for all integrators, thus allowing the other capacitor to vary in each case. Figure 10.4.3c shows the circuit that results. It uses the difference-type integrator of Fig. 10.4.2d. It also shows a method for handling the terminations. It should be noted that the clock phases are adjusted so that alternate integrators have open and closed switches at a given instant. This is necessary to avoid delay problems. The extension of this technique to bandpass filters is a matter of combining the principles of leapfrog filters and switched capacitor implementation of resistors. From the specifications, a transformation to equivalent low-pass requirements is made, and the low-pass prototype is chosen. This prototype is transformed to a bandpass network, an appropriate simulation is developed, and the network is developed using integrators. Finally, scaling in frequency and magnitude is needed. It is desirable to test these simulations using computer programs designed for analysis of sampled data networks.
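A small Python sketch of Eqs. (3) and (4); the clock rate, cutoff frequency, and prototype values here are assumed purely for illustration:

import math

fC = 1.0e6          # clock rate (assumed)
f_cutoff = 1.0e3    # desired cutoff (assumed)
wc = 2 * math.pi * f_cutoff

# Normalized prototype values (assumed placeholders for Table 10.1.10)
prototype = {"C3": 1.0, "L4": 2.0, "C5": 1.0}

# Frequency-scale each prototype element, then invert Eqs. (3)/(4):
#   element = (1/fC)*(C2/C1)  ->  C2/C1 = fC * element
for name, g in prototype.items():
    print(name, "C2/C1 =", fC * g / wc)   # ratios of order 10^2 here

Because only the ratio C2/C1 matters, one capacitor is typically fixed at a common value and the other is varied, exactly as the text describes.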


CHAPTER 10.5

CRYSTAL, MECHANICAL, AND ACOUSTIC COUPLED-RESONATOR FILTERS Edwin C. Jones, Jr., Harry W. Hale

In applications such as single-sideband communications it is often necessary to have a bandpass filter with a bandwidth that is a fractional percentage of the center frequency and in which one or both transition regions are very short. Meeting such requirements usually requires a filter in which the resonators are not electrical. Two types of resonator are quartz crystals and mechanical elements, such as disks or rods. Transducers from the electric signal to the mechanical device, output transducers, and resonator-coupling elements are needed. Crystal filters include resonators made from piezoelectric quartz crystals. The transducers are plates of a conductor deposited on the appropriate surfaces of the crystal, and coupling from one crystal to the next is electrical. The center frequency depends on the size of the crystal, its manner of cutting, and the choice of frequency determining modes of oscillation. It can vary from about 1.0 kHz to 100 MHz. If extreme care is taken, equivalent quality factors (Q’s) can be greater than 100,000. These filters can also be very stable with regard to temperature and age. Mechanical filters use rods or disks as resonating elements, which are coupled together mechanically, usually with wires welded to the resonators. The transducers are magnetostrictive. The frequency range varies from as low as 100 Hz to above 500 kHz. Quality factors above 20,000 are possible and, with proper choice of alloys, temperature coefficients of as low as 1.0 ppm/°C are possible. Acoustic filters use a combination of crystal and mechanical filter principles. The resonators are monolithic quartz crystals; the transducers are similar to those of crystal filters, but the coupling is mechanical (referred to as acoustic coupling). These filters have many of the properties of crystal filters, but the design techniques have much in common with those of mechanical filters. Coupled-resonator filters are usually described in terms of an electric equivalent circuit. The direct or mobility analogy (mass to capacitance, friction to conductance, and springs to inductance) is more useful, because the “across” variables of velocity and voltage are analogous, as are the “through” variables of force and current. Equivalent capacitances or inductances and center frequencies are among the common parameters specified for filter elements. The following paragraphs discuss, in general terms, the design procedure used for coupled-resonator filters, the equivalent circuits used, and some network transformations that enable the designer to implement the design procedure. References 6, 8, and 27 give much additional information, and in particular, Ref. 27 contains an extensive discussion and bibliography. Manufacturer’s catalogs are a good source of current data.


COUPLED-RESONATOR DESIGN PROCEDURE

The insertion-loss low-pass prototype filters can be used to design coupled-resonator bandpass filters. Five steps can be identified in the process, though in some cases the dividing lines become indistinct.

1. Transform the bandpass specifications to a low-pass prototype, using Eq. (8). This will take the center frequency to ω = 0 and, usually, the band edge to ω = 1.
2. Choose the appropriate low-pass response, e.g., Chebyshev, elliptic, or Butterworth, that meets the transformed specifications. Zeros of transmission are fixed at this time. From this characteristic function determine the transfer function that is needed. The tables presented earlier may be useful.
3. Determine the short-circuit y or open-circuit z parameters from the transfer function.
4. If possible, look up or synthesize the appropriate ladder or lattice network needed. At this point, it is still a low-pass prototype. The technique chosen may depend on the expected form of the final network.
5. Use Fig. 10.2.7 to transform the network into a bandpass network and then use network theorems to adjust the network to a configuration and a set of element values that is practical, i.e., one that matches the resonators.

This process is not one in which success is assured. It may require a variety of attempts before a suitable design is achieved. Equivalent circuits and network theorems are summarized in the following paragraphs.

EQUIVALENT CIRCUITS

The most common equivalent circuit for a piezoelectric crystal shows a series-resonant RLC circuit in parallel with a second capacitor, as shown in Fig. 10.5.1. The parallel capacitor CP is composed of the mounting hardware and electric plates on the crystal. In practice, the ratio CP/C cannot be reduced below about 125, but it may be increased if needed. When a filter contains more than one crystal, the coupling is electrical, usually with capacitors.

Mechanical filters have an equivalent circuit, as indicated in Fig. 10.5.2. The resonant circuits L0, CR represent the transducer magnetostrictive coils and their tuning capacitances. (In cases of small RL, it may be more accurate to place CR in series with L0.) The resonant circuits L1, C1, R1 and Ln, Cn, Rn include the motional parameters of the transducers. Elements L2, C2, R2, . . . , Ln−1, Cn−1, Rn−1 represent the motional parameters of the resonant elements, and L12, . . . , Ln−1,n represent the compliances of the coupling wires.

FIGURE 10.5.1 Equivalent circuit for a piezoelectric crystal. Because coupling is electrical, a one-port representation is sufficient.

FIGURE 10.5.2 Equivalent circuit for a mechanical filter. A two-port representation allows an electric equivalent circuit for the entire filter.


FIGURE 10.5.3 Equivalent circuit for a monolithic crystal or acoustic filter. The one-to-one ideal transformer models the 180° phase shift observed in these filters.

The acoustic filter is represented, after substantial development, by the circuit shown in Fig. 10.5.3. The development has made the circuit easy to use, but the association between the electrical elements and the filter elements is less apparent than in the previous circuits. The ideal transformer at the output accounts for the 180° phase shift observed in these filters. In some analyses, it may be omitted.

NETWORK TRANSFORMATIONS

In the process of changing a bandpass circuit to meet the configuration of the equivalent circuit of a coupled resonator, a variety of equivalent networks may be useful. At one step negative elements may appear. These can be absorbed later in series or parallel with positive elements so that the overall result is positive. The impedance inverters of Fig. 10.5.4 can be used to invert an impedance, as indicated. Over a very narrow frequency range they can often be approximated with three capacitors, two of which are negative. An inverter can be used to convert an inductance into a capacitance provided the negative elements can then be absorbed. Other similar reactive configurations can also be used. Lattice networks (Fig. 10.5.5) are often used in crystal filters. If the condition prevailing in Fig. 10.5.6 exists, the equivalent can be used in either direction to effect a change. In particular, the ladder can be transformed into a lattice, which then has the crystal equivalent circuit. Two Norton transformations and networks derived from them are shown in Figs. 10.5.7 and 10.5.8. They lead to negative elements, and it is expected that they will later be absorbed into positive elements. Humpherys8 gives another derived Norton transformation that can be used to reduce inductance values. It changes the impedance level on one side of the network. When this is applied to a symmetrical network, the new impedance levels will eventually become directly connected, so that no transformer is needed.

FIGURE 10.5.4 Reactive impedance inverters: (a) T inverter; (b) T inverter with load Z; Zin = X2/Z; (c) p inverter.

FIGURE 10.5.5 Symmetrical lattice. The dotted diagonal line indicates a second Zb; the dotted horizontal line, a second Za.


FIGURE 10.5.6 Lattice and ladder: (a) general lattice and equivalent circuit; (b) application to crystal filters.

FIGURE 10.5.7 Norton's first transformation and a derived network (ideal transformer 1:n, with n = 1 + Za/Zb).

FIGURE 10.5.8 Norton's second transformation and a derived network (ideal transformer 1:n, with n = Zb/(Za + Zb)).


CHAPTER 10.6

DIGITAL FILTERS* Arthur B. Williams, Fred J. Taylor

RELATION TO ANALOG FILTERS

Digital filters provide many of the same frequency-selective services (low-pass, high-pass, bandpass) expected of analog filters. In some cases, digital filters are defined in terms of equivalent analog filters. Other digital filters are designed using rules unique to this technology. The hardware required to fabricate a digital filter consists of basic digital devices such as memory and arithmetic logic units (ALUs). Many of these hardware building blocks provide both high performance and low cost. Coefficients and data are stored as digital computer words and, as a result, provide a precise and noise-free (in an analog sense) signal-processing medium. Compared to analog filters, digital filters generally enjoy the following advantages:

1. They can be fabricated in high-performance general-purpose digital hardware or with application-specific integrated circuits (ASICs).
2. The stability of certain classes of digital filters can be guaranteed.
3. There are no input or output impedance-matching problems.
4. Coefficients can be easily programmed and altered.
5. Digital filters can operate over a wide range of frequencies.
6. Digital filters can operate over a wide dynamic range and with high precision.
7. Some digital filters provide excellent phase linearity.
8. Digital filters do not require periodic alignment and do not drift or degrade because of aging.

DATA REPRESENTATION

In an analog filter, all signals and coefficients are considered to be real or complex numbers. As such, they are defined over an infinite range with infinite precision. In the analog case, filter coefficients are implemented with lumped R, L, C, and amplifier components of assumed absolute precision. Along similar lines, the designs of digital filters typically begin with the manipulation of idealized equations. However, the implementation of a digital filter is accomplished using digital computational elements of finite precision (measured in bits). Therefore, the analysis of a digital filter is not complete until the effects of finite-precision arithmetic have been

*This section is based on the authors' Electronic Filter Design Handbook, 3rd ed., McGraw-Hill, 1995.


determined. As a result, even though there has been a significant sharing of techniques in the area of filter synthesis between analog and digital filters, the analyses of these two classes of filters have developed separate tools and techniques.

Data, in a digital system, are represented as a set of binary-valued digits. The process by which a real signal or number is converted into a digital word is called analog-to-digital conversion (ADC). The most common formats used to represent data are called fixed and floating point (FXP and FLP). Within the fixed-point family of codes, the most popular are binary-coded decimal, sign magnitude (SM), and diminished radix (DR) codes. Any integer X such that |X| < 2^(n−1) has a unique sign-magnitude representation, given by

$$X = X_{n-1} : (2^{n-2}X_{n-2} + \cdots + 2X_1 + X_0) \tag{1}$$

where Xi is the ith bit and X0 is referred to as the least significant bit (LSB). Similarly, Xn−2 is called the most significant bit (MSB) and Xn−1 is the sign bit. The LSB often corresponds to a physically measurable electrical unit. For example, if a signed 12-bit ADC is used to digitize a signal whose range is ±15 V, the LSB represents a quantization step size of Q = 30 V (range)/2^12 bits = 7.32 mV/bit. Fractional numbers are also possible simply by scaling X by a power of 2. The value of X′ = X/2^m has the same binary representation as X except that the m LSBs are considered to be fractional bits.
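Both the sign-magnitude code of Eq. (1) and the quantization example can be reproduced with a few lines of Python (the helper names are illustrative only):

def lsb_volts(full_range_volts, n_bits):
    """Quantization step: Q = range / 2^n, as in the text's ADC example."""
    return full_range_volts / 2**n_bits

def sign_magnitude(x, n):
    """Sign-magnitude word of Eq. (1): sign bit plus (n-1) magnitude bits."""
    assert abs(x) < 2**(n - 1)
    sign = '1' if x < 0 else '0'
    return sign + format(abs(x), '0{}b'.format(n - 1))

print(lsb_volts(30.0, 12))      # +/-15-V range, 12 bits -> ~0.00732 V (7.32 mV/bit)
print(sign_magnitude(-5, 8))    # -> '10000101'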

SIGNAL REPRESENTATION

An analog filter manipulates real signals of assumed infinite precision. In a discrete system, analog signals of assumed infinite precision are periodically sampled at a rate of fs samples per second. The sample period is therefore given by ts = 1/fs second(s). A string of contiguous samples is called a time series. If the samples are further processed by an ADC, a digital time series results. A digital filter can be used to manipulate this time series using digital technology. The hardware required to implement such a filter is the product of the microelectronics revolution.

SPECTRAL REPRESENTATION

Besides representing signals in the continuous or discrete time domain, signals can also be modeled in the frequency domain. This condition is called spectral representation. The principal tools used to describe a signal in the frequency domain are: (1) Fourier transforms, (2) Fourier series, and (3) discrete Fourier transforms (DFT). A Fourier transform will map an arbitrary transformable signal into a continuous frequency spectrum consisting of all frequency components from –∞ to +∞. The Fourier transform is defined by an indefinite integral equation whose limits range from –∞ to +∞. The Fourier series will map a continuous but periodic signal of period T [i.e., x(t) = x(t + kT) for all integer values of k] into a discrete but infinite spectrum consisting of frequency harmonics located at multiples of the fundamental frequency 1/T. The Fourier series is defined by a definite integral equation whose limits are [0, T]. The discrete Fourier transform differs from the first two transforms in that it does not accept data continuously but rather from a time series of finite length. Also, unlike the first two transforms, which produce spectra ranging out to ±∞ Hz, the DFT spectrum consists of a finite number of harmonics. The DFT is an important and useful tool in the study of digital filters. The DFT can be used to both analyze and design digital filters. One of its principal applications is the analysis of a filter’s impulse response. An impulse response database can be directly generated by presenting a one-sample unit pulse to a digital filter that is initially at a zero state (i.e., zero initial conditions). The output is the filter’s impulse response, which is observed for N contiguous samples. The N-sample database is then presented to an N-point DFT, transformed, and analyzed. The spectrum produced by the DFT should be a reasonable facsimile of the frequency response of the digital filter under test.


FILTER REPRESENTATION

A transfer function is defined by the ratio of output and input transforms. For digital filters, it is given by H(z) = Y(z)/U(z), where U(z) is the z transform of the input signal u(n) and Y(z) is that of the output signal y(n). The frequency response of a filter H(z) can be computed using a DFT of the filter's impulse response. Another transform tool that is also extensively used to study digital filters is the bilinear z transform. While the standard z transform can be related to the simple sample-and-hold circuit, the bilinear z transform is analogous to a first-order hold. The bilinear z transform is related to the familiar Laplace transform through

$$s = \frac{2(z-1)}{t_s(z+1)} \qquad z = \frac{(2/t_s) + s}{(2/t_s) - s} \tag{2}$$

Once an analog filter H(s) is defined, it can be converted into a discrete filter H(z) by using the variable substitution rule.
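SciPy's bilinear() routine applies exactly this substitution to a rational H(s); a minimal sketch for a first-order analog low-pass section (the cutoff and sample rate are assumed):

from scipy.signal import bilinear

# Analog prototype H(s) = 1/(s + 1), converted to H(z) at fs = 10 Hz.
# bilinear() performs the variable substitution of Eq. (2) with ts = 1/fs.
bz, az = bilinear([1.0], [1.0, 1.0], fs=10.0)
print(bz, az)   # numerator and denominator of the discrete filter H(z)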

FINITE IMPULSE-RESPONSE (FIR) FILTERS

Linear constant-coefficient filters can be categorized into two broad classes known as finite impulse-response (FIR) or infinite impulse-response (IIR) filters. An FIR filter can be expressed in terms of a simple discrete equation:

$$y(n) = c_0 x(n) + c_1 x(n-1) + \cdots + c_{N-1} x(n - N + 1) \tag{3}$$

where the coefficients {Ci} are called filter tap weights. In terms of a transfer function, Eq. (3) can be restated as

$$H(z) = \sum_{i=0}^{N-1} C_i z^{-i} \tag{4}$$

As an example, a typical N = 111th-order FIR is shown in Fig. 10.6.1. The FIR exhibits several interesting features (a short numeric sketch follows this list):

1. The filter's impulse response exists for only N = 111 (finite) contiguous samples.
2. The filter's transfer function consists of zeros only (i.e., no poles). As a result, an FIR is sometimes referred to as an all-zero, or transversal, filter.
3. The filter has a very simple design consisting of a set of word-wide shift registers, tap-weight multipliers, and adders (accumulators).
4. If the input is bounded by unity (i.e., |x(i)| ≤ 1 for all i), the maximum value of the output y(i) is Σ|Ci|. If all the tap weights Ci are bounded, the filter's output is likewise bounded and, as a result, stability is guaranteed.
5. The phase, when plotted with respect to frequency (plot shown over the principal angles ±π/2), is linear with constant slope.
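A minimal Python illustration of these ideas, using an 11-tap moving average as the FIR (the length is chosen arbitrarily); the impulse response is simply the tap weights, and its DFT gives the frequency response:

import numpy as np

taps = np.ones(11) / 11.0                 # tap weights Ci of Eq. (3)

x = np.zeros(64); x[0] = 1.0              # one-sample unit pulse
y = np.convolve(x, taps)[:64]             # impulse response = the taps
H = np.fft.rfft(y)                        # frequency response via the DFT
print(np.round(y[:12], 3))                # finite: only N = 11 nonzero samples
print(np.round(np.abs(H[:5]), 3))         # |H| near dc is ~1 for this filter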

LINEAR PHASE BEHAVIOR

The FIR is basically a shift-register network. Since digital shift registers are precise and easily controlled, the FIR can offer the designer several phase-domain attributes that are difficult to achieve with analog filters. The most important of these are: (1) potential for linear phase-versus-frequency behavior and (2) potential for constant


FIGURE 10.6.1 Typical FIR architecture, impulse response, and frequency response.


group-delay behavior. These properties are fundamentally important in the fields of digital communications systems, phase synchronization systems (e.g., phase-locked loops), speech processing, image processing, spectral analysis (e.g., Fourier analysis), and other areas where nonlinear phase distortion cannot be tolerated.

FIR DESIGN METHODS

The design of an FIR entails specifying the filter's impulse response, the tap weights {Ci}. As a result, the design of an FIR can be as simple as prespecifying the desired impulse response. Other acceptable analytical techniques used to synthesize a desired impulse response are the inverse Fourier transform of a given frequency-domain filter specification or the use of polynomial approximation techniques. These methods are summarized below. A simple procedure for designing an FIR is to specify an acceptable frequency-domain model, invert the filter's spectral representation using the inverse Fourier transform, and use the resulting time series to represent the filter's impulse response. In general, the inverse Fourier transform of a desired spectral waveshape would produce an infinitely long time-domain record. However, from a hardware cost or throughput standpoint, it is unreasonable to consider implementing an infinitely or extremely long FIR. Therefore, a realizable FIR would be defined in terms of a truncated Fourier series. For example, the Fourier transform of the "nearly ideal" N = 101 order low-pass filter has a sin(x)/x type impulse-response envelope. For a large value of N, the difference between the response of an infinitely long impulse response and its N-sample approximation is small; however, when N is small, large approximation errors can occur.

Optimal Modeling Techniques

Weighted Chebyshev polynomials have been successfully used to design FIRs. In this application, Chebyshev polynomials are combined so that their combined sum minimizes the maximum difference between an ideal and the realized frequency response (i.e., the mini-max principle). Because of the nature of these polynomials, they produce a "rippled" magnitude frequency-response envelope of equal minima and maxima in the pass- and stopbands. As a result, this class of filters is often called an equiripple filter. Much is known about the synthesis process, which can be traced back to McClellan et al.48 Based on these techniques, a number of software-based CAD tools have been developed to support FIR design.
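One such tool is SciPy's remez() routine (the handbook predates it; it is used here only as a convenient modern stand-in), which implements the mini-max procedure. The band edges below are assumed for illustration:

from scipy.signal import remez

# Equiripple low-pass: passband 0-0.2*fs, stopband 0.3*fs-0.5*fs.
fs = 1.0
taps = remez(numtaps=45, bands=[0.0, 0.2, 0.3, 0.5], desired=[1.0, 0.0], fs=fs)
print(len(taps), taps[:5])   # 45 tap weights with equal-ripple error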

WINDOWS

Digital filters usually are expected to operate over long, constantly changing data records. An FIR, while being capable of offering this service, can only work with a limited number of samples at a time. A similar situation presents itself in the context of a discrete Fourier transform. The quality of the produced spectrum is a function of the number of transformed samples. Ideally, an infinitely long impulse response would be defined by an ideal filter. A uniform window of length T will pass N contiguous samples of data. The windowing effect may be modeled as a multiplicative switch that multiplies the presented signal by zero (open) for all time exclusive of the interval [0, T]. Over [0, T], the signal is multiplied by unity (closed). In a sampled system, the interval [0, T] is replaced by N samples taken at a sample rate fs, where T = N/fs. When the observation interval (i.e., N) becomes small, the quality of the spectral estimate begins to deteriorate. This consequence is called the finite aperture effect. Windowing is a technique that tends to improve the quality of a spectrum obtained from a limited number of samples. Some of the more popular windows found in contemporary use are the rectangular or uniform window, the Hamming window, the Hann window, the Blackman window, and the Kaiser window. Windows can be directly applied to FIRs. To window an N-point FIR, simply multiply the tap-weight coefficients Ci by the corresponding window weights wi. Note that all of the standard window functions have even symmetry about the midsample. As a result, the application of such a window will not disturb the linear phase behavior of the original FIR.
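A short Python sketch of a windowed FIR: a truncated sin(x)/x low-pass of the kind described above, tapered with a Hamming window (the N = 101 length and the cutoff are assumed):

import numpy as np

N, fc = 101, 0.1                      # taps and cutoff (fraction of fs)
n = np.arange(N) - (N - 1) / 2.0
h = 2 * fc * np.sinc(2 * fc * n)      # truncated sin(x)/x impulse response
h_win = h * np.hamming(N)             # window the taps: Ci' = wi * Ci

# The window is even-symmetric about the midsample, so the windowed
# taps remain symmetric and the phase stays linear.
print(np.allclose(h_win, h_win[::-1]))   # -> True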


MULTIRATE SIGNAL PROCESSING

Digital signal processing systems accept an input time series and produce an output time series. In between, a signal can be modified in terms of its time and/or frequency domain attributes. One of the important functions that a digital signal processing system can serve is that of sample rate conversion. As the name implies, a sample rate converter changes a system's sample rate from a value of fin samples per second to a rate of fout samples per second. Such devices are also called multirate systems since they are defined in terms of two or more sample rates. If fin > fout, then the system is said to perform decimation, and is said to be decimated by an integer M if

$$M = \frac{f_{in}}{f_{out}} \tag{5}$$

In this case, the decimated time series is xd[n] = x[Mn]; that is, every Mth sample of the original time series is retained. Furthermore, the effective sample rate is reduced from fin to fdec = fin/M samples per second. Applications of decimation include audio and image signal processing involving two or more subsystems having dissimilar sample rates. Other applications occur when a high-data-rate ADC is placed at the front end of a system and the output is to be processed, as parameters sampled at a very low rate, by a general-purpose digital computer. At other times, multirate systems are used simply to reduce the Nyquist rate to allow computationally intensive algorithms, such as a digital Fourier analyzer, to be performed at a slower arithmetic rate. Another class of applications involves processing signals, sampled at a high data rate, through a limited-bandwidth channel.
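In code, decimation by an integer M is simply a strided read of the time series (a sketch only; a practical decimator would precede this with an antialiasing low-pass filter):

import numpy as np

def decimate_by_M(x, M):
    """Keep every Mth sample: xd[n] = x[M*n], reducing the rate to fin/M."""
    return x[::M]

x = np.arange(12)
print(decimate_by_M(x, 3))   # -> [0 3 6 9]; effective rate fs/3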

QUADRATURE MIRROR FILTERS (QMF)

We have stated that multirate systems are often used to reduce the sample rate to a value that can be passed through a band-limited communication channel. The signal can then, in principle, be reconstructed on the receiver side. The amount of allowable decimation is established by the Nyquist sampling theorem. When the bandwidth of the signal establishes a Nyquist frequency that exceeds the bandwidth of a communication channel, the signal must be decomposed into subbands that can be individually transmitted across band-limited channels. This technique uses a bank of band-limited filters to break the signal down into a collection of subbands that fit within the available channel bandwidths. Quadrature mirror filters (QMF) are often used in the subband application described in Fig. 10.6.2. The basic architecture shown in that figure defines a QMF system and establishes two input-output paths, each with a bandwidth requirement that is half the input or output requirement. Using this technique, the channels can be subdivided over and over, reducing the bandwidth by a factor of 2 each time. The top path consists of low-pass filters and the bottom path is formed by high-pass filters.

FIGURE 10.6.2 Quadrature mirror filter (QMF).


Designing QMF is, unfortunately, not a trivial process. No meaningful flat response linear phase QMF filter exists. Most QMF designs represent some compromise.

INFINITE IMPULSE-RESPONSE FILTER

The FIR filter exhibits superb linear phase behavior; however, in order to achieve a high-quality (steep-skirt) magnitude frequency response, a high-order FIR is required. Compared to the FIR, the IIR filter

1. Generally satisfies a given magnitude frequency-response design objective with a lower-order filter.
2. Does not generally exhibit linear phase or constant group-delay behavior.

If the principal objective of the digital filter design is to satisfy the prespecified magnitude frequency response, an IIR is usually the design of choice. Since the order of an IIR is usually significantly less than that of an FIR, the IIR would require fewer coefficients. This translates into a reduced multiplication budget and an attendant saving in hardware and cost. Since multiplication is time consuming, a reduced multiplication budget also translates into potentially higher sample rates. From a practical viewpoint, a realizable filter must produce bounded outputs if stimulated by bounded inputs. This requires a bound on the IIR's impulse response, namely,

$$\sum_{n=0}^{\infty} |h(n)| < M \tag{6}$$

If M is finite (bounded), the filter is stable, and if it is infinite (unbounded), the filter is unstable. This condition can also be more conveniently related to the pole locations of the filter under study. It is well known that a causal discrete system with a rational transfer function H(z) is stable (i.e., bounded inputs produce bounded outputs) if and only if its poles are interior to the unit circle in the z domain. This is often referred to as the circle criterion, and it can be tested using general-purpose computer root-finding methods. Other algebraic tests—Schur-Cohn, Routh-Hurwitz, and Nyquist—may also be used. The stability condition is implicit to the FIR as long as all N coefficients are finite. Here the finite sum of real bounded coefficients will always be bounded.
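The circle criterion is easy to test numerically; a sketch using a general-purpose root finder:

import numpy as np

def is_stable(a):
    """Circle criterion: all poles (roots of the denominator a(z)) must
    lie strictly inside the unit circle |z| = 1."""
    return bool(np.all(np.abs(np.roots(a)) < 1.0))

print(is_stable([1.0, -1.6, 0.8]))   # poles 0.8 +/- 0.4j, |p| ~ 0.894 -> True
print(is_stable([1.0, -2.1, 1.1]))   # poles at 1.0 and 1.1 -> False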

DESIGN OBJECTIVES

The design of an IIR begins with a magnitude frequency-response specification of the target filter. The filter's magnitude frequency response is specified as it would be for an analog filter design. In particular, assume that a filter with a magnitude-squared frequency response given by |H(ω)|², having passband, transition band, and stopband behavior as suggested by Fig. 10.6.3, is to be synthesized. The frequency response of the desired filter is specified in terms of a cutoff critical frequency ωp, a stopband critical frequency ωs, and stop- and passband delimiters ε and A. The critical frequencies ωp and ωs represent the end of the passband and the start of the stopband, respectively, for the low-pass example. In decibels, the gains at these critical frequencies are given by −10 log (1 + ε²) (passband ripple constraint) and −Aa = −10 log (A²) (stopband attenuation). For the case where ε = 1, the common 3-dB passband filter is realized.

FIR AND IIR FILTERS COMPARED

The principal attributes of the FIR are its simplicity, phase linearity, and ability to decimate a signal. The strength of the IIR is its ability to achieve high-quality (steep-skirt) filtering with a design of limited order. The positive characteristics of the FIR are absent in the IIR and vice versa.


FIGURE 10.6.3 Typical design objective for low-pass, high-pass, and bandstop IIR filters.

The estimated order of an FIR required to achieve an IIR design specification was empirically determined by Rabiner. The design parameters are

$$(1 - \delta_1)^2 = \frac{1}{1 + \epsilon^2} \;\; (\delta_1 = \text{passband ripple}) \qquad \delta_2^2 = \frac{1}{A^2} \;\; (\delta_2 = \text{stopband bound}) \qquad \Delta f = \frac{\text{transition frequency range}}{f_s} \tag{7}$$

It was found that the approximate order n of an FIR required to meet such a design objective is given by

$$n \sim \frac{-10 \log_{10}[(1 - \delta_1)\delta_2] - 15}{14\,\Delta f} + 1 \tag{8}$$
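Equation (8), as reconstructed above, is easily evaluated; the specification numbers below are assumed for illustration:

import math

def fir_order_estimate(d1, d2, delta_f):
    """Eq. (8): n ~ (-10*log10((1 - d1)*d2) - 15)/(14*delta_f) + 1."""
    return (-10 * math.log10((1 - d1) * d2) - 15) / (14 * delta_f) + 1

# d1 = 0.01 passband ripple, d2 = 0.01 stopband bound,
# transition band equal to 5 percent of the sample rate
print(fir_order_estimate(0.01, 0.01, 0.05))   # -> about 8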

STANDARD FILTER FORMS

The standard filter forms found in common use are: (1) Direct II, (2) Standard, (3) Cascade, and (4) Parallel. These basic filter cases are graphically interpreted in Ref. 52. The direct II and standard architectures are somewhat similar in their structure. Both strategies possess information feedback paths ranging from one


delay to n delays. The transfer function denominator is an nth-order polynomial. The cascade and parallel models are constructed using a system of low-order subsections or subfilters. In the cascade design, the loworder subsections are serially interconnected. In the parallel filter, these sections are simply connected in parallel. The low-order subfilters, in both cases, are the result of factoring the nth-order transfer function polynomial into lower-order polynomials. The design and analysis of all four classes of filters can be performed using manually manipulated equations or a digital computer. The most efficient method of formulating the filter design problem, whether using tables, calculators, or a computer, is called the state-variable technique. A state variable is a parameter that represents the information stored in a system. The set of state variables is called a state vector. For an analog system, information is stored on capacitors or in inductors. In earlier chapters, state variables were used to specify and facilitate the manipulation of the R, L, and C components of an analog filter. In these cases, capacitive voltage and inductive current were valid state variables. Since resistors have no memory, they would not be the source of a state variable. In digital filters, the memory element, which stores the state information, is simply a delay (shift) register. The realization of digital filters is described in Ref. 52.

FIXED-POINT DESIGN

An IIR, once designed and architected, often needs to be implemented in hardware. The choices are fixed- or floating-point. Of the two, fixed-point solutions generally provide the highest real-time bandwidth at the lowest cost. Unfortunately, fixed-point designs also introduce errors that are not found in more expensive floating-point IIR designs. The fixed-point error sources are either low-order inaccuracies, caused by finite-precision arithmetic and data (coefficient) roundoff effects, or potentially large errors caused by run-time dynamic range overflow (saturation). Additional precision can be gained by increasing the number of fractional bits assigned to the data and coefficient fields, with an attendant decrease in dynamic range and an increased potential for run-time overflow. On the other hand, the overflow saturation problem can be reduced by enlarging the dynamic range of the system by increasing the integer bit field, with an accompanying loss of precision. The problem facing the fixed-point filter designer, therefore, is achieving a balance between the competing desires to maximize precision and to simultaneously eliminate (or reduce) run-time overflow errors. This is called the binary-point assignment problem.


CHAPTER 10.7

ATTENUATORS
Arthur B. Williams, Fred J. Taylor

ATTENUATOR NETWORK DESIGN

Attenuators are passive circuits that introduce a fixed power loss between a source and load while matching impedances. The power loss is independent of the direction of power flow. Figure 10.7.1 shows T, Π, and bridged-T networks. The first two are unbalanced and unsymmetrical, unless ZL = ZS; in this case, Z1 = Z2, and the network is symmetrical. To build a balanced network, divide Z1 and Z2 by 2, and put half of each element in each series arm. The bridged-T shown is only for symmetrical networks. These design equations are valid for resistive and complex impedances.

ZS = source impedance
ZL = load impedance
A = ratio of available power to desired load power = 10^(B/10)
B = attenuation in decibels = 10 log A     (1)
q = 1/2 ln A = 1/2 ln 10^(B/10)

As an example, design an attenuator to match a 75-Ω source to a 300-Ω load and to introduce a 14.0-dB loss. Use a T section. In terms of Eq. (1),

Zs = 75 Ω      ZL = 300 Ω      B = 14.0 dB      A = 25.12
q = 1.612      Z1 = 18.88 Ω    Z2 = 262.54 Ω    Z3 = 62.34 Ω

Figure 10.7.2 shows the network.
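The design is easy to script. The Python sketch below uses the unequal-impedance T-section relations of Fig. 10.7.1 (the figure is not reproduced in the text, so the closed forms below are restated as assumptions; they do reproduce the worked numbers above).

# T-attenuator design between unequal resistive terminations.
import math

def t_attenuator(Zs, Zl, B_dB):
    A = 10 ** (B_dB / 10)                       # available/load power ratio
    Z3 = 2 * math.sqrt(Zs * Zl * A) / (A - 1)   # shunt arm
    Z1 = Zs * (A + 1) / (A - 1) - Z3            # series arm, source side
    Z2 = Zl * (A + 1) / (A - 1) - Z3            # series arm, load side
    return Z1, Z2, Z3

Z1, Z2, Z3 = t_attenuator(75.0, 300.0, 14.0)
print("Z1 = %.2f ohm, Z2 = %.2f ohm, Z3 = %.2f ohm" % (Z1, Z2, Z3))
# -> Z1 = 18.88 ohm, Z2 = 262.54 ohm, Z3 = 62.34 ohm, as in the example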


FIGURE 10.7.1 Attenuator networks and equations.

FIGURE 10.7.2 A 14.0-dB attenuator between a 75-Ω source and a 300-Ω load.


SECTION 11

AMPLIFIERS AND OSCILLATORS

Amplifiers serve a number of purposes, from allowing us to hear beautiful music to accurately positioning elements of complicated systems using control technologies. Oscillators are found in applications ranging from the watch on your wrist to the transmitter and receiver in your cell phone. We look at audio-frequency amplifiers and oscillators and radio-frequency amplifiers and oscillators. The most versatile amplifier has to be the operational amplifier (op amp). The key to its success is that it is perhaps the most ideal device in analog electronics; because of this it is found in a number of amplifier designs. High-power amplifiers are necessary where significant amounts of power are needed to accomplish activities such as radio and television broadcasts. Just imagine what a rock concert might sound like without power amplifiers. Microwave amplifiers and oscillators represent a special part of the high-power amplifier and oscillator field. C.A.

In This Section:

CHAPTER 11.1 AMPLIFIER AND OSCILLATOR PRINCIPLES OF OPERATION 11.5
AMPLIFIERS: PRINCIPLES OF OPERATION 11.5
OSCILLATORS: PRINCIPLES OF OPERATION 11.14

CHAPTER 11.2 AUDIO-FREQUENCY AMPLIFIERS AND OSCILLATORS 11.18
AUDIO-FREQUENCY AMPLIFIERS 11.18
AUDIO OSCILLATORS 11.25

CHAPTER 11.3 RADIO-FREQUENCY AMPLIFIERS AND OSCILLATORS 11.35
RADIO-FREQUENCY AMPLIFIERS 11.35
RADIO-FREQUENCY OSCILLATORS 11.43
BROADBAND AMPLIFIERS 11.47
TUNNEL-DIODE AMPLIFIERS 11.60
PARAMETRIC AMPLIFIERS 11.64
MASER AMPLIFIERS 11.66
ACOUSTIC AMPLIFIERS 11.70
MAGNETIC AMPLIFIERS 11.79

CHAPTER 11.4 OPERATIONAL AMPLIFIERS 11.87
DIRECT-COUPLED AMPLIFIERS 11.87
OPERATIONAL AMPLIFIERS FOR ANALOG ARITHMETIC 11.93
LOW-NOISE OPERATIONAL AMPLIFIERS 11.97
POWER OPERATIONAL AMPLIFIERS 11.99

CHAPTER 11.5 HIGH-POWER AMPLIFIERS 11.102
THERMAL CONSIDERATIONS 11.102
HIGH-POWER BROADCAST-SERVICE AMPLIFIERS 11.102
CLASS B LINEAR RF AMPLIFIERS 11.102
HIGH-EFFICIENCY POWER AMPLIFIERS 11.103
INDUCTION HEATING CIRCUITS 11.103
DIELECTRIC HEATING 11.103
TRANSISTORS IN HIGH-POWER AMPLIFIERS 11.104
MOSFET AUDIO AMPLIFIERS AND SWITCHING APPLICATIONS 11.105

CHAPTER 11.6 MICROWAVE AMPLIFIERS AND OSCILLATORS 11.107
MICROWAVE SOLID-STATE DEVICES 11.107
IMPATT DIODE CIRCUITS 11.107
TRAPATT DIODE CIRCUITS 11.110
BARITT AND DOVETT DIODES 11.112
TRANSFERRED ELECTRON EFFECT DEVICE (TED) CIRCUITS 11.113
TRANSISTOR AMPLIFIER AND OSCILLATOR MICROWAVE CIRCUITS 11.115
NOISE PERFORMANCE OF MICROWAVE BIPOLAR TRANSISTOR CIRCUITS 11.118
HIGH-POWER MICROWAVE TRANSISTOR AMPLIFIERS (USING BIPOLAR TRANSISTORS) 11.119
GaAs FIELD-EFFECT TRANSISTOR CIRCUITS 11.122
NOISE PERFORMANCE OF MICROWAVE FET CIRCUITS 11.122
HIGH ELECTRON MOBILITY TRANSISTORS 11.122
HIGH-POWER MICROWAVE FET AMPLIFIERS 11.122
MONOLITHIC MICROWAVE INTEGRATED CIRCUITS 11.123
TRANSISTOR OSCILLATORS 11.123
TRAVELING-WAVE-TUBE CIRCUITS 11.124
KLYSTRON OSCILLATORS AND AMPLIFIERS 11.126
CROSSED-FIELD-TUBE CIRCUITS 11.127
GYROTRON CIRCUITS 11.128

Section Bibliography:

Classic General References

Bode, H. W., "Network Analysis and Feedback Amplifier Design," Van Nostrand, 1959. (Reprinted 1975 by R. E. Krieger.)
Ghausi, M. S., and D. O. Pederson, "A new approach to feedback amplifiers," IRE Trans. Circuit Theory, Vol. CT-4, September 1957.
Ginzton, E. L., W. R. Hewlett, J. H. Jasberg, and J. D. Noe, "Distributed amplification," Proc. IRE, Vol. 36, August 1948.
Glasford, G. M., "Fundamentals of Television Engineering," McGraw-Hill, 1955.
Hines, M. E., "High-frequency negative-resistance circuit principles for Esaki diodes," Bell Syst. Tech. J., Vol. 39, May 1960.
Hutson, A. R., J. H. McFee, and D. L. White, "Ultrasonic amplification in CdS," Phys. Rev. Lett., September 15, 1961.
Kim, C. S., and A. Brandli, "High frequency high power operation of tunnel diodes," IRE Trans. Circuit Theory, December 1962.
Millman, J., "Vacuum Tube and Semiconductor Electronics," McGraw-Hill, 1958.
Read, W. T., "A proposed high-frequency negative resistance diode," Bell Syst. Tech. J., Vol. 37, 1958.
Reich, H. J., "Functional Circuits and Oscillators," Van Nostrand, 1961.
Seely, S., "Electron Tube Circuits," McGraw-Hill, 1950.
Shea, R. F. (ed.), "Amplifier Handbook," McGraw-Hill, 1968.
Singer, J. R., "Masers," Wiley, 1959.
Storm, H. F., "Magnetic Amplifiers," Wiley, 1955.
Truxal, J. C., "Automatic Feedback Control System Synthesis," McGraw-Hill, 1955.


Specific-Topic and Contemporary References

Bahl, I. (ed.), "Microwave Solid State Circuit Design," Wiley, 1988.
Blackwell, L. A., and K. L. Kotzebue, "Semiconductor-Diode Parametric Amplifiers," Prentice Hall, 1961.
Blotekjaer, K., and C. F. Quate, "The coupled modes of acoustic waves and drifting carriers in piezoelectric crystals," Proc. IEEE, Vol. 52, No. 4, pp. 360–377, April 1965.
Cate, T., "Modern techniques of analog multiplication," Electron. Eng., pp. 75–79, April 1970.
Chang, K. K. N., "Parametric and Tunnel Diodes," Prentice Hall, 1964.
Coldren, L. A., and G. S. Kino, "Monolithic acoustic surface-wave amplifier," Appl. Phys. Lett., Vol. 18, No. 8, p. 317, 1971.
Cunningham, D. R., and J. A. Stiller, "Basic Circuit Analysis," Houghton Mifflin, 1991.
Curtis, F. W., "High-Frequency Induction Heating," McGraw-Hill, 1964.
Datta, S., "Surface Acoustic Wave Devices," Prentice Hall, 1986.
Duenas, J. A., and A. Serrano, "Directional coupler design graphs for parallel coupled lines and interdigitated 3 dB couplers," RF Design, pp. 62–64, February 1986.
"Evaluating, Selecting, and Using Multiplier Circuit Modules for Signal Manipulation and Function Generation," Analog Devices, 1970.
Garmand, P. A., "Complete small size 2 to 30 GHz hybrid distributed amplifier using a novel design technique," IEEE MTT-S Digest, pp. 343–346, 1986.
Hayt, W. H., Jr., and J. E. Kemmerly, "Engineering Circuit Analysis," 5th ed., McGraw-Hill, 1993.
Helms, H. L., "Contemporary Electronics Circuit Deskbook," McGraw-Hill, 1986.
Helszajn, J., "Microwave Planar Passive Circuits and Filters," Wiley, 1994.
Ingebritsen, K. A., "Linear and nonlinear attenuation of acoustic surface waves in a piezoelectric coated with a semiconducting film," J. Appl. Phys., Vol. 41, p. 454, 1970.
Inglis, A. F., "Video Engineering," McGraw-Hill, 1993.
Kino, G. S., and T. M. Reeder, "A normal mode theory for the Rayleigh wave amplifier," IEEE Trans. Electron Devices, Vol. ED-18, p. 909, 1971.
Kotelyanski, I. M., A. I. Kribunov, A. V. Edved, R. A. Mishkinis, and V. V. Panteleev, "Fabrication of LiNbO3-InSb layer structures and their use in amplification of surface acoustic waves," Sov. Phys. Semicond., Vol. 12, No. 7, pp. 751–754, July 1978.
Kouril, F., "Non-linear and Parametric Circuits: Principles, Theory, and Applications," Chichester/Wiley, 1988.
Ladbrooke, P. H., "MMIC Design: GaAs FETs and HEMTs," Artech House, 1989.
Lakin, K. M., and H. J. Shaw, "Surface wave delay line amplifiers," IEEE Trans. Microwave Theory Techniques, MTT-17, p. 912, 1969.
Lange, J., "Interdigitated stripline quadrature hybrid," IEEE Trans. MTT, December 1969.
Liff, A. A., "Color and Black and White Television," Regents/Prentice Hall, 1993.
Lin, Y., "Ion Beam Sputtered InSb Thin Films and Their Application to Surface Acoustic Wave Amplifiers," Ph.D. Dissertation, Polytechnic University, 1995.
May, J. E., Jr., "Electronic signal amplification in the UHF range with the ultrasonic traveling wave amplifier," Proc. IEEE, Vol. 53, No. 10, October 1965.
McFee, J. H., "Transmission and amplification of acoustic waves," in: Physical Acoustics, Vol. 4A, Academic Press, 1964.
Middlebrook, R. D., "Differential Amplifiers," Wiley, 1963.
Mizuta, H., "The Physics and Applications of Resonant Tunnelling Diodes," Cambridge University Press, 1995.
Nelson, J. C. C., "Operational Amplifier Circuits: Analysis and Design," Butterworth-Heinemann, 1995.
"Optical Pumping and Masers," Appl. Opt., Vol. 1, No. 1, January 1962.
Pauley, R. G., P. G. Asher, J. M. Schellenberg, and H. Yamasaki, "A 2 to 40 GHz monolithic distributed amplifier," GaAs IC Symp., pp. 15–17, November 1985.
Penfield, P., Jr., and R. P. Rafuse, "Varactor Applications," The MIT Press, 1962.
Petruzzela, F. D., "Industrial Electronics," Glencoe/McGraw-Hill, 1996.
"Power Op-amp Handbook," Apex Microtechnology, 85741, 1987.
Pucel, R. A., "Monolithic Microwave Integrated Circuits," IEEE Press, 1985.
Wilson, F. A., "An Introduction to Microwaves," Babani, 1992.


Robertson, I. D., "MMIC Design," IEE, London, 1995.
Rutkowski, G. B., "Operational Amplifiers: Integrated and Hybrid Circuits," Wiley, 1993.
Simpson, C. D., "Industrial Electronics," Prentice Hall, 1996.
Southgate, P. D., and H. N. Spector, "Effect of carrier trapping on the Weinreich relation in acoustic amplification," J. Appl. Phys., Vol. 36, pp. 3728–3730, December 1965.
Tehon, S. W., "Acoustic wave amplifiers," Chap. 30, Amplifier Handbook, McGraw-Hill, 1968.
Tobey, G. E., L. P. Huelsman, and J. G. Graeme, "Operational Amplifiers," McGraw-Hill, 1971.
Traister, R. J., "Operational Amplifier Circuit Manual," Academic Press, 1989.
Vizmuller, P., "RF Design Guide: Systems, Circuits and Equations," Artech House, 1995.
Wang, W. C., "Strong electroacoustic effect in CdS," Phys. Rev. Lett., Vol. 9, No. 11, pp. 443–445, December 1, 1962.
Wang, W. C., and Y. Lin, "Acousto-electric Attenuation Determined by Transmission Line Technique," International Workshop on Ultrasonic Application, September 1–3, 1996.
Wanuga, S., "CW acoustic amplifier," Proc. IEEE (Corres.), Vol. 53, No. 5, p. 555, May 1965.
White, D. L., "Amplification of ultrasonic waves in piezoelectric semiconductors," J. Appl. Phys., Vol. 33, No. 8, pp. 2547–2554, August 1962.
White, R. M., "Surface elastic-wave propagation and amplification," IEEE Trans. Electron Devices, ED-14, p. 181, 1967.
Wilkinson, E. J., "An N-way hybrid power divider," IRE Trans. MTT, January 1960.
Wilson, T. G., "Series connected magnetic amplifier with inductive loading," Trans. AIEE, Vol. 71, 1952.


CHAPTER 11.1

AMPLIFIER AND OSCILLATOR PRINCIPLES OF OPERATION
G. Burton Harrold

AMPLIFIERS: PRINCIPLES OF OPERATION

Gain

In most amplifier applications the prime concern is gain. A generalized amplifier is shown in Fig. 11.1.1. The most widely applied definitions of gain, using the quantities defined there, are:

Voltage gain Av = e22/e11
Current gain Ai = i2/i1
Available power from source Pavs = |es|²/(4 Re Zs)
Input power PI = |e11|²/(Re Zin)
Output load power PL = |e22|²/(Re ZL)
Available power at output Pavo = |e22|²/(4 Re Zout)
Power gain G = PL/PI
Available power gain GA = Pavo/Pavs
Transducer gain GT = PL/Pavs
Insertion power gain GI = (power into load with network inserted)/(power into load with source connected to load)

where Re = real part of the complex impedance.

Bandwidth and Gain-Bandwidth Product

Bandwidth is a measure of the range of frequencies within which an amplifier will respond. The frequency range (passband) is usually measured between the half-power (3-dB) points on the output-response-versus-frequency curve, for constant input. In some cases it is defined at the quarter-power points (6 dB). See Fig. 11.1.2.

The gain-bandwidth product of a device is a commonly used figure of merit. It is defined for a bandpass amplifier as

Fa = ArB


where Fa = figure of merit (rad/s), Ar = reference gain (either the maximum gain or the gain at the frequency where the gain is purely real or purely imaginary), and B = 3-dB bandwidth (rad/s).

FIGURE 11.1.1 Input and output quantities of generalized amplifier.

For low-pass amplifiers

Fa = ArWH

where Fa = figure of merit (rad/s), Ar = reference gain, and WH = upper cutoff frequency (rad/s).

In the case of vacuum tubes and certain other active devices this definition is reduced to

Fa = gm/CT

where Fa = figure of merit (rad/s), gm = transconductance of the active device, and CT = total output capacitance plus input capacitance of the subsequent stage.

Noise

The major types of noise are illustrated in Fig. 11.1.3. Important relations and definitions in noise computations are:

Noise factor

F = (Si/Ni)/(So/No)

where Si = signal power available at input, So = signal power available at output, Ni = noise power available at input at T = 290 K, and No = noise power available at output.

Available noise power

Pn,av = en²/4R = KTB    for thermal noise

where the quantities are as defined in Fig. 11.1.3.

FIGURE 11.1.2 Amplifier response and bandwidth.

Excess noise factor

F − 1 = Ne/Ni


en² = mean-square open-circuit noise voltage from a resistor R
K = 1.38 × 10⁻²³ J/K
T = temperature, K
B = bandwidth, Hz
R = resistance, Ω

in² = mean-square short-circuit noise current
e = 1.6 × 10⁻¹⁹ C
I = dc current through R, A
R = resistance, Ω
B = bandwidth, Hz

inf² = mean-square short-circuit flicker noise current
R = resistance, Ω
I = dc current
f = frequency, Hz
∆f = frequency interval
k, α, n = empirical constants depending on device and mode of operation

FIGURE 11.1.3 Noise-equivalent circuits.

where F − 1 = excess noise factor, Ne = total equivalent device noise referred to input, and Ni = thermal noise of the source at standard temperature.

Noise temperature

T = Pn,av/KB

where Pn,av is the average noise power available. At a single input-output frequency in a two-port, the effective input noise temperature is

Te = 290(F − 1)
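For a feel for magnitudes, the following Python sketch evaluates these relations; the 50-Ω resistance, 1-mA current, and 2-dB noise figure are hypothetical example values. The thermal- and shot-noise expressions en² = 4KTRB and in² = 2eIB are the standard forms consistent with Pn,av = en²/4R = KTB and the quantities of Fig. 11.1.3.

# Numeric sketch of the noise relations; all component values hypothetical.
import math

K = 1.38e-23   # Boltzmann constant, J/K (the handbook's K)
E = 1.6e-19    # electronic charge, C
T0 = 290.0     # standard temperature, K

R, B, T = 50.0, 1e6, T0          # resistance, bandwidth, temperature
en2 = 4 * K * T * R * B          # mean-square thermal noise voltage
print("thermal noise voltage: %.3g V rms" % math.sqrt(en2))
print("available noise power en2/4R = KTB: %.3g W" % (en2 / (4 * R)))

I = 1e-3
in2 = 2 * E * I * B              # mean-square shot noise current
print("shot noise current: %.3g A rms" % math.sqrt(in2))

F = 10 ** (2.0 / 10)             # a 2-dB noise figure as a ratio
print("effective input noise temperature Te = %.1f K" % (T0 * (F - 1)))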

Noise Factor of Transmission Lines and Attenuators. The noise factor of a two-port composed entirely of resistive elements at room temperature (290 K), with an impedance-matched loss L = 1/GA, is F = L.

Cascaded noise factor

FT = F1 + (F2 − 1)/GA


where FT = overall noise factor, F1 = noise factor of the first stage, F2 = noise factor of the second stage, and GA = available gain of the first stage.

System Noise Temperature. Space probes and satellite communication systems using low-noise amplifiers and antennas directed toward outer space make use of system noise temperatures. When we define TA = antenna temperature, L = waveguide numeric loss (greater than 1), TE1 = amplifier noise temperature, GA = amplifier available gain, F = postamplifier noise factor, and B = postamplifier bandwidth, this temperature can be calculated as

Tsys = TA + (L − 1)(290) + LTE1 + (F − 1)(290L)/GA

The quantity of interest is the output signal-to-noise ratio, where SA is the available signal power at the antenna (assuming the antenna is matched to free space):

S/N = SA/(KTsysB)    K = 1.38 × 10⁻²³
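A short numeric sketch of this cascade bookkeeping, combining the Tsys expression with the cascaded-noise-factor formula above; every link value here (antenna temperature, losses, gains, bandwidth, signal level) is a hypothetical example.

# System noise temperature and output S/N for a hypothetical receive chain.
import math

K = 1.38e-23

def db_to_ratio(db):
    return 10 ** (db / 10)

TA  = 100.0               # antenna temperature, K
L   = db_to_ratio(0.5)    # waveguide numeric loss (> 1)
TE1 = 35.0                # low-noise amplifier noise temperature, K
GA  = db_to_ratio(30.0)   # amplifier available gain
F   = db_to_ratio(6.0)    # postamplifier noise factor
B   = 10e6                # postamplifier bandwidth, Hz

Tsys = TA + (L - 1) * 290 + L * TE1 + (F - 1) * 290 * L / GA
print("system noise temperature: %.1f K" % Tsys)

SA = 1e-13                # available signal power at the antenna, W
snr = SA / (K * Tsys * B)
print("S/N = %.1f dB" % (10 * math.log10(snr)))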

Generalized Noise Factor. A general representation of noise performance can be expressed in terms of Fig. 11.1.4. This is the representation of a noisy two-port in terms of external voltage and current noise sources with a correlation admittance. In this case the noise factor becomes

F = 1 + Gu/Gs + (RN/Gs)[(Gs + Gγ)² + (Bs + Bγ)²]

where F = noise factor, Gs = real part of Ys, Bs = imaginary part of Ys, Gu = conductance owing to the uncorrelated part of the noise current, Yγ = correlation admittance between the cross product of the current and voltage noise sources, Gγ = real part of Yγ, Bγ = imaginary part of Yγ, and RN = equivalent noise resistance of the noise voltage.

The optimum source admittance is

Yopt = Gopt + jBopt

where

Gopt = [(Gu + RN·Gγ²)/RN]^(1/2)

FIGURE 11.1.4 Noise representation using correlation admittance.


where Bopt = −Bγ, and the value of the optimum noise factor Fopt is

Fopt = 1 + 2RN(Gγ + G0)

The noise factor for an arbitrary source impedance is

F = Fopt + (RN/Gs)[(Gs − G0)² + (Bs − B0)²]
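The following Python sketch evaluates these noise-parameter relations for a hypothetical device; RN, Gu, Gγ, and Bγ are made-up example values, and G0, B0 denote the optimum source conductance and susceptance as above.

# Evaluate Fopt and F(Ys) from assumed two-port noise parameters.
RN, Gu = 50.0, 2e-4     # noise resistance (ohm) and uncorrelated conductance (S)
Gg, Bg = 1e-3, -2e-3    # correlation admittance Y_gamma = Gg + jBg (S)

G0 = ((Gu + RN * Gg**2) / RN) ** 0.5    # optimum source conductance
B0 = -Bg                                # optimum source susceptance
Fopt = 1 + 2 * RN * (Gg + G0)

def F(Gs, Bs):
    """Noise factor for an arbitrary source admittance Ys = Gs + jBs."""
    return Fopt + (RN / Gs) * ((Gs - G0) ** 2 + (Bs - B0) ** 2)

print("Yopt = %.4f %+.4fj S, Fopt = %.3f" % (G0, B0, Fopt))
print("F at Ys = 0.02 + j0 S: %.2f" % F(0.02, 0.0))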

The values of the parameters of Fig. 11.1.4 can be determined by measurement of (1) noise figure versus Bs with Gs constant and (2) noise figure versus Gs with Bs at its optimum value.

Dynamic Characteristic, Load Lines, and Class of Operation

Most active devices have two considerations involved in their operation. The first is the dc bias condition that establishes the operating point (the quiescent point). The choice of operating point is determined by such considerations as signal level, uniformity of the device, and temperature of operation. The second consideration is the ac operating performance, related to the slope of the dc characteristic and to the parasitic reactances of the device. These ac variations give rise to the small-signal parameters. The ac parameters may also influence the choice of dc bias point when basic constraints, such as gain and noise performance, are considered. For frequencies of operation where these parasitics are not significant, the use of a load line is valuable. The class of amplifier operation is dependent on its quiescent point, its load line, and the input signal level. The types of operation are shown in Fig. 11.1.5.

Distortion

Distortion takes many forms, most of them undesirable. The basic causes of distortion are nonlinearity in amplitude response and nonuniformity of phase response. The most commonly encountered types of distortion are as follows:

Harmonic distortion is a result of nonlinearity in the amplitude transfer characteristics. The typical output contains not only the fundamental frequency but integer multiples of it.

Crossover distortion is a result of the nonlinear characteristics of a device when changing operating modes (e.g., in a push-pull amplifier). It occurs when one device is cut off and the second turned on if the crossover between the two modes is not smooth.

Intermodulation distortion is a spurious output resulting from the mixing of two or more signals of different frequencies. The spurious output occurs at the sum or difference of integer multiples of the original frequencies.

Cross-modulation distortion occurs when two signals pass through an amplifier and the modulation of one is transferred to the other.

Phase distortion results from deviation from a constant slope of the output-phase-versus-frequency response of an amplifier. This deviation gives rise to echo responses in the output that precede and follow the main response, and to distortion of the output signal when an input signal having a large number of frequency components is applied.

Feedback Amplifiers

Feedback amplifiers fall into two categories: those having positive feedback (usually oscillators) and those having negative feedback. The positive-feedback case is discussed under oscillators. The following discussion is concerned with negative-feedback amplifiers.


FIGURE 11.1.5 Classes of amplifier operation. Class S operation is a switching mode in which a square-wave output is produced by a sine-wave input.

Negative Feedback

A simple representation of a feedback network is shown in Fig. 11.1.6. The closed-loop gain is given by

e2/e1 = A/(1 − BA)

where A is the forward gain with feedback removed and B is the fraction of the output returned to the input. For negative feedback, A provides a 180° phase shift in midband, so that 1 − AB > 1 in this frequency range.

FIGURE 11.1.6 Amplifier with feedback loop.

The quantity 1 − AB is called the feedback factor, and if the circuit is cut at any point X in Fig. 11.1.6, the open-loop gain is AB. It can be shown that for large loop gain AB the closed-loop transfer function reduces to

e2/e1 ≈ 1/B

The gain then becomes essentially independent of variations in A. In particular, if B is passive, the closed-loop gain is controlled only by passive components. Feedback has no beneficial effect in reducing unwanted signals


at the input of the amplifier, e.g., input noise, but does reduce unwanted signals generated in the amplifier chain (e.g., output distortion).

The return ratio can be found if the circuit is opened at any point X (Fig. 11.1.6) and a unit signal P is injected at that point. The return signal P′ is equal to the return ratio, since the input P is unity. In this case the return ratio T is the same at any point X and is

T = −AB

The minus sign is chosen because the typical amplifier has an odd number of phase reversals, and T is then a positive quantity. The return difference is by definition

F = 1 + T

It has been shown by Bode that F = ∆/∆0, where ∆ is the network determinant with the XX point connected and ∆0 is the network determinant of the amplifier when the gain of the active device is set to zero.

Stability

The stability of the network can be analyzed by several techniques. Of prime interest are the Nyquist, Bode, Routh, and root-locus techniques of analyzing stability.

Nyquist Method. The basic technique of Nyquist involves plotting T on a polar plot as shown in Fig. 11.1.7 for all values s = jω for ω between minus and plus infinity. Stability is then determined by the following method:

1. Draw a vector from the −1 + j0 point to the plotted curve and observe the rotation of this vector as ω varies from −∞ to +∞. Let R be the net number of counterclockwise revolutions of this vector.
2. Determine the number of roots of the denominator of T = −AB which have positive real parts. Call this number P.
3. The system is stable if and only if P = R.

Note that in many systems A and B are stable by themselves, so that P is zero and the net number of counterclockwise revolutions R must be zero for stability.

FIGURE 11.1.7 Nyquist diagram for determining stability.

Bode's Technique. A technique that has historically found wide use in determining stability and performance, especially in control systems, is the Bode diagram. The assumptions used here for this method are that T = −AB, where A and B are stable when the system is open-circuited, and consists of minimum-phase networks. It is also necessary to define a phase margin γ such that γ = 180 + φ, where φ is the phase angle of T and is positive when measured counterclockwise from zero, and γ, the phase margin, is positive when measured counterclockwise from the 180° line (Fig. 11.1.7). The stability criterion under these conditions reads: systems having a positive phase margin when their return ratio, equal to 20 log |T|, goes through 0 dB (i.e., where |T| crosses the unit circle in the Nyquist plot) are stable; if a negative γ exists at 0 dB, the system is unstable. Bode's theorems show that the phase angle of a system is related to the attenuation or gain characteristic as a function of frequency. Bode's technique relies heavily on straight-line approximation.
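The desensitization implied by e2/e1 ≈ 1/B is easy to see numerically. In this Python sketch the forward gain and feedback fraction are hypothetical values chosen so the loop gain is large.

# Closed-loop desensitization: for large loop gain AB, Ac = A/(1 - AB)
# approaches 1/B in magnitude, nearly independent of A.
def closed_loop(A, B):
    return A / (1 - A * B)

B = 0.01                       # passive feedback fraction, |1/B| = 100
for A in (-5e3, -1e4, -2e4):   # inverting forward gain over a 4:1 range
    print("A = %8.0f  ->  Ac = %.2f" % (A, closed_loop(A, B)))
# A 4:1 spread in A moves Ac only from -98.04 to -99.50, about 1.5 percent.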


FIGURE 11.1.8 Equivalent circuits of active devices: (a) vacuum tube; (b) bipolar transistor; (c) field-effect transistor (FET).

Routh's Criterion for Stability. Routh's method has also been used to test the characteristic equation, or return difference, F = 1 + T = 0, to determine whether it has any roots that are real and positive, or complex with positive real parts, that will give rise to growing exponential responses and hence instability.

Root-Locus Method. The root-locus method of analysis is a means of finding the variations of the poles of a closed-loop response as some network parameter is varied. The most convenient and commonly used parameter is the gain K. The basic equation then used is

F = 1 + KT(s) = 1 − K(S − S2)(S − S4)/[(S − S1)(S − S3)] = 0

This is a useful technique in feedback and control systems, but it has not found wide application in amplifier design. A detailed exposition of the technique is found in Truxal.


FIGURE 11.1.9 Definitions of active-network parameters: (a) general network; (b) ratios ai and bi of incident and reflected waves (square root of power); (c) s parameters.

Active Devices Used in Amplifiers

There are numerous ways of representing active devices and their properties. Several common equivalent circuits are shown in Fig. 11.1.8. Active devices are best analyzed in terms of the immittance or hybrid matrices. Figures 11.1.9 and 11.1.10 show the definitions of the commonly used matrices, and their interconnections are shown in Fig. 11.1.11. The requirements at the bottom of Fig. 11.1.11 must be met before the interconnection of two matrices is allowed. The matrix that is becoming increasingly important at higher frequencies is the S matrix. Here the network is embedded in a transmission-line structure; the incident and reflected powers are measured, and reflection and transmission coefficients are defined.

Cascaded and Distributed Amplifiers

Most amplifiers are cascaded (i.e., connected to a second amplifier). The two techniques commonly used are shown in Fig. 11.1.12. In the cascade structure the overall response is the product of the individual responses; in the distributed structure the response is one-half the sum of the individual responses, since each stage's output is propagated in both directions. In cascaded amplifiers the frequency response and gain are determined by the active device as well as the interstage networks. In simple audio amplifiers these interstage networks may be simple RC combinations, while in rf amplifiers they may be critically coupled double-tuned circuits. Interstage coupling networks are discussed in subsequent sections.


FIGURE 11.1.10 Network matrix terms.

In distributed structures (Fig. 11.1.12b), actual transmission lines are used for the input to the amplifier, while the output is taken at one end of the upper transmission line. The propagation time along the input line must be the same as that along the output line, or distortion will result. This type of amplifier, noted for its wide frequency response, is discussed later.

OSCILLATORS: PRINCIPLES OF OPERATION

Introduction

An oscillator can be considered as a circuit that converts a dc input to a time-varying output. This discussion deals with oscillators whose output is sinusoidal, as opposed to the relaxation oscillator, whose output exhibits abrupt transitions (see Section 14). Oscillators often have a circuit element that can be varied to produce different frequencies. An oscillator's frequency is sensitive to the stability of the frequency-determining elements as well as the variation in the active-device parameters (e.g., effects of temperature, bias point, and aging). In many instances


FIGURE 11.1.11 Matrix equivalents of network interconnections.

the oscillator is followed by a second stage serving as a buffer, so that there is isolation between the oscillator and its load. The amplitude of the oscillation can be controlled by automatic gain control (AGC) circuits, but the nonlinearity of the active element usually determines the amplitude. Variations in bias, temperature, and component aging have a direct effect on amplitude stability.

Requirements for Oscillation

Oscillators can be considered from two viewpoints: as using positive feedback around an amplifier or as a one-port network in which the real component of the input immittance is negative. An oscillator must have frequency-determining elements (generally passive components), an amplitude-limiting mechanism, and sufficient closed-loop gain to make up for the losses in the circuit. It is possible to predict the operating frequency and the conditions needed to produce oscillation from a Nyquist or Bode analysis. The prediction of output amplitude requires the use of nonlinear analysis.

Oscillator Circuits

Typical oscillator circuits applicable up to ultra high frequencies (UHF) are shown in Fig. 11.1.13. These are discussed in detail in the following subsections. Also of interest are crystal oscillators. In this case the crystal is used as the passive frequency-determining element. The frequency range of crystal oscillators extends from a few hundred hertz to over 200 MHz by use of overtone crystals. The analysis of crystal oscillators is best done using the equivalent circuit of the crystal.


FIGURE 11.1.12 Multiamplifier structures: (a) cascade; (b) distributed.

FIGURE 11.1.13 Types of oscillators: (a) tuned-output; (b) Hartley; (c) phase-shift; (d) tuned-input; (e) Colpitts; ( f ) Wien bridge.

FIGURE 11.1.14 Phase-locked-loop oscillator.



FIGURE 11.1.15 Injection-locked oscillator.

Synchronization

Synchronization of oscillators is accomplished by using phase-locked loops or by direct low-level injection of a reference frequency into the main oscillator. The diagram of a phase-locked loop is shown in Fig. 11.1.14 and that of an injection-locked oscillator in Fig. 11.1.15.

Harmonic Content

The harmonic content of the oscillator output is related to the amount of oscillator output power at frequencies other than the fundamental. From the viewpoint of a negative-conductance (resistance) oscillator, better results are obtained if the curve of the negative conductance (or resistance) versus amplitude of oscillation is smooth and without an inflection point over the operating range. Harmonic content is also reduced if the oscillator's operating point Q is chosen so that the range of negative conductance is symmetrical about Q on the negative-conductance-versus-amplitude curve. This can be done by adjusting the oscillator's bias point within the requirement of |GC| = |GD| for sustained oscillation (see Fig. 11.1.16).

Stability

The stability of the oscillator's output amplitude and frequency, from a negative-conductance viewpoint, depends on the variation of its negative conductance with operating point and the amount of fixed positive conductance in the oscillator's associated circuit. In particular, if a change of bias results in vertical translation of the conductance- (resistance-) versus-amplitude curve, the oscillator's stability is related to the change of slope at the point where the circuit's fixed conductance intersects this curve (point Q in Fig. 11.1.16). If the |GD| curve has the shape of |GD|2 in the figure, oscillation can stop when a large enough change in bias point occurs for |GD| to be less than |GC| for all amplitudes of oscillation. Stabilization of the amplitude of oscillation may take the form of modifying GC, GD, or both to compensate for bias changes. Particular types of oscillators and their parameters are discussed later in this section.

FIGURE 11.1.16 Device conductance vs. amplitude of oscillation.


CHAPTER 11.2

AUDIO-FREQUENCY AMPLIFIERS AND OSCILLATORS
Samuel M. Korzekwa, Robert J. McFadyen

AUDIO-FREQUENCY AMPLIFIERS
Samuel M. Korzekwa

Preamplifiers

General Considerations. The function of a preamplifier is to amplify a low-level signal to a higher level before further processing or transmission to another location. The required amplification is achieved by increased signal voltage and/or impedance reduction. The amount of power amplification required varies with the particular application. A general guideline is to provide sufficient preamplification to ensure that further signal handling adds minimal (or acceptable) signal-to-noise degradation.

Signal-to-Noise Considerations. The design of a preamplifier must consider all potential signal degradation from sources of noise, whether generated externally or within the preamplifier itself. Examples of externally generated noise are hum and pickup, which may be introduced by the input-signal lines or the power-supply lines. Shielding of the input-signal lines often proves to be an acceptable solution. The preamplifier should be located close to the transmitting source, and the preamplifier power gain must be sufficient to override interference that remains after these steps are taken.

A second major source of noise is that internally generated in the amplifier itself. The noise figure specified in decibels for a preamplifier, which serves as a figure of merit, is defined as the ratio of the available input-to-output signal-to-noise power ratios:

F = (Si/Ni)/(So/No)

where F = noise figure of the preamplifier, Si = available signal input power, Ni = available noise input power, So = available signal output power, and No = available noise output power.

Design precautions to realize the lowest possible noise figure include the proper selection of the active device, optimum input and output impedances, correct voltage and current biasing conditions, and the pertinent design parameters of the devices.


Low-Level Amplifiers

The low-level designation applies to amplifiers operated below maximum permissible power-dissipation, current, and voltage limits. Thus many low-level amplifiers are purposely designed to realize specific attributes other than delivering the maximum attainable power to the load, such as gain stability, bandwidth, optimum noise figure, and low cost. In an amplifier designed to be operated with a 24-V power supply and a specified load termination, for example, the operating conditions may be such that the active devices are just within their allowable limits. If operated at these maximum limits, this is not a low-level amplifier; however, if this amplifier also fulfills its performance requirements at a reduced power-supply voltage of 6 V, with resulting much lower internal dissipation levels, it becomes a low-level amplifier.

Medium-Level and Power Amplifiers

The medium-power designation for an amplifier implies that some active devices are operated near their maximum dissipation limits, and precautions must be taken to protect these devices. If power-handling capability is taken as the criterion, the 5- to 100-W power range is a current demarcation line. As higher-power-handling devices come into use, this range will tend to shift to higher power levels. The amount of power that can safely be handled by an amplifier is usually dictated by the dissipation limits of the active devices in the output stages, the efficiency of the circuit, and the means used to extract heat to maintain the devices within their maximum permissible temperature limits. The classes of operation (A, B, AB, C) are discussed relative to Fig. 11.1.5. When single active devices do not suffice, multiple series or parallel configurations can be used to achieve higher-voltage or higher-power operation.

Multistage Amplifiers

An amplifier may take the form of a simple or complex single stage, or it may employ an interconnection of several stages. Various biasing, coupling, feedback, and other design alternatives influence the topology of the amplifier. For a multistage amplifier, the individual stages may be essentially identical or radically different. Feedback techniques may be used at the individual-stage level, at the amplifier functional level, or both, to realize bias stabilization, gain stabilization, output-impedance reduction, and so forth.

Typical Electron-Tube Amplifier

Figure 11.2.1 shows a typical electron-tube amplifier stage. For clarity the signal-source and load sections are shown partitioned. For a multistage amplifier the source represents the equivalent signal generator of the preceding stage. Similarly, the load indicated includes the loading effect of the subsequent stage, if any. The voltage gain from the grid of the tube to the output can be calculated to be

Av1 = −µRl/(rp + Rl)

Similarly, the voltage gain from the source to the tube grid is

Av2 = R1/[(R1 + Rg) + 1/jωC]

Combining the above equations gives the composite amplifier voltage gain

Av = Av1Av2 = −µR1Rl/{(rp + Rl)[(R1 + Rg) + 1/jωC]}


FIGURE 11.2.1 Typical triode electron-tube amplifier stage (biasing not shown).

This example illustrates the fundamentals of an electron-tube amplifier stage. Many excellent references treat this subject in detail.
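A quick numeric check of the triode-stage gain relations above; the element values (µ = 20, rp = 10 kΩ, and so on) are hypothetical.

# Triode stage: grid-to-output gain Av1 times the source-to-grid divider Av2.
mu, rp = 20.0, 10e3               # amplification factor and plate resistance
Rl, R1, Rg = 50e3, 500e3, 10e3    # plate load, grid resistor, source resistance
w, C = 2 * 3.14159 * 1e3, 0.1e-6  # 1-kHz signal, coupling capacitor

Av1 = -mu * Rl / (rp + Rl)
Av2 = R1 / complex(R1 + Rg, -1 / (w * C))   # 1/(jwC) = -j/(wC)
Av = Av1 * Av2
print("grid-to-output gain Av1 = %.2f" % Av1)
print("overall gain |Av| = %.2f" % abs(Av))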

Typical Transistor Amplifier

The analysis techniques used for electron-tube amplifier stages generally apply to transistorized amplifier stages. The principal difference is that different active-device models are used. The typical transistor stage shown in Fig. 11.2.2 illustrates a possible form of biasing and coupling. The source section is partitioned and includes the preceding-stage equivalent generator, and the load includes subsequent stage-loading effects. Figure 11.2.3 shows the generalized h-equivalent circuit representation for transistors. Table 11.2.1 lists the h-parameter transformations for the common-base, common-emitter, and common-collector configurations.

FIGURE 11.2.2 Typical bipolar transistor-amplifier stage.

FIGURE 11.2.3 Equivalent circuit of transistor, based on h parameters.


TABLE 11.2.1 h Parameters of the Three Transistor Circuit Configurations

        Common-base    Common-emitter                 Common-collector
h11     hib            hib(hfe + 1)                   hib(hfe + 1)
h12     hrb            hib hob(hfe + 1) − hrb         1
h21     hfb            hfe                            −(hfe + 1)
h22     hob            hob(hfe + 1)                   hob(hfe + 1)

While these parameters are complex and frequency-dependent, it is often feasible to use simplifications. Most transistors have their parameters specified by their manufacturers, but it may be necessary to determine additional parameters by test. Figure 11.2.4 illustrates a simplified model of the transistor amplifier stage of Fig. 11.2.2. The common-emitter h parameters are used to represent the equivalent transistor. The voltage gain for this stage is

Av = Vo/Vi = −hfeRl/(Rg + hie)

FIGURE 11.2.4 Simplified equivalent circuit of transistor amplifier stage.

The complexity of analysis depends on the accuracy needed. Currently, most of the more complex analysis is performed with the aid of computers. Several transistor-amplifier-analysis references treat this subject in detail.
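A minimal sketch of the simplified common-emitter gain relation above, with hypothetical h parameters and circuit values; hre and hoe are neglected, as in Fig. 11.2.4.

# Simplified common-emitter stage gain from the h-parameter model.
hfe, hie = 100.0, 2e3     # common-emitter h parameters (assumed)
Rg, Rl = 1e3, 3e3         # source resistance and collector load (assumed)

Av = -hfe * Rl / (Rg + hie)
print("midband voltage gain Av = %.1f" % Av)   # -> -100.0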

Typical Multistage Transistor Amplifier

Figure 11.2.5 is an example of a capacitively coupled three-stage transistor amplifier. It has a broad frequency response, illustrating the fact that an audio amplifier can be useful in other applications. The component values are:

R1 = 16,000 Ω    R2 = 6200 Ω    R3 = 1600 Ω    R4 = 1000 Ω    RL = 560 Ω
Q1, Q2, Q3 = 2N1565    C1 = 10 µF    C2 = 100 µF

FIGURE 11.2.5 Typical three-stage transistor amplifier.


This amplifier is designed to operate over a range of −55 to +125°C, with an output voltage swing of 2 V peak to peak and frequency response down 3 dB at approximately 200 Hz and 2 MHz. The overall gain at 1000 Hz is nominally 88 dB at 25°C.

Biasing Methods

The biasing scheme used in an amplifier determines the ultimate performance that can be realized. Conversely, an amplifier with poorly implemented biasing may suffer in performance and be susceptible to catastrophic circuit failure owing to high stresses within the active devices. In view of the variation of parameters within the active devices, it is important that the amplifier function properly even when the initial and/or end-of-life parameters of the devices vary.

Electron-Tube Biasing

Biasing is intended to maintain the quiescent currents and voltages of the electron tube at the prescribed levels. The tube plate characteristics represent the biasing relations between the tube parameters. The principal bias parameters (steady-state plate and grid voltages) can be readily identified by the construction of a load line on the plate characteristic. The operating point Q is located at the intersection of the selected plate characteristic with the load line.

Transistor Biasing

Although the methods of biasing a transistor-amplifier stage are in many respects similar to those of an electron-tube amplifier, there are many different types of transistors, each characterized by different curves. Bipolar transistors are generally characterized by their collector and emitter families, while field-effect transistors have different characterizations. The npn transistor requires a positive base bias voltage and current (with respect to its emitter) for proper operation; the converse is true for a pnp transistor. Figure 11.2.6 illustrates a common biasing technique. A single power supply is used, and the transistor is self-biased with the unbypassed emitter resistor Re. Although a graphical solution for the value of Re could be found by referring to the collector-emitter curves, an iterative solution, described below, is also commonly used. Because the performance of the transistor depends on the collector current and collector-to-emitter voltage, these are often selected as starting conditions for biasing design. The unbypassed emitter resistor Re and collector resistor Rc, the primary voltage-gain-determining components, are determined next, taking into account other considerations such as the anticipated maximum signal level and the available power supply Vcc. The last step is to determine the R1 and R2 values, as sketched numerically below.

FIGURE 11.2.6 Capacitively coupled npn transistor-amplifier stage.
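One hypothetical pass through this procedure for the stage of Fig. 11.2.6; the target operating point, emitter voltage, and the divider-stiffness rule of thumb are assumptions for illustration, not values from the text.

# Worked bias pass for a self-biased npn stage (all targets assumed).
Vcc = 12.0                   # supply voltage
Ic, Vce = 2e-3, 6.0          # chosen quiescent collector current and voltage
Vre = 1.5                    # voltage across Re, chosen for bias stability

Re = Vre / Ic                          # emitter resistor (Ie ~ Ic)
Rc = (Vcc - Vce - Vre) / Ic            # collector resistor
Vbe, beta = 0.7, 100.0                 # assumed device parameters
Ib = Ic / beta
Vb = Vre + Vbe                         # required base voltage
Idiv = 10 * Ib                         # divider current >> Ib (rule of thumb)
R2 = Vb / Idiv
R1 = (Vcc - Vb) / (Idiv + Ib)
print("Re = %.0f, Rc = %.0f, R1 = %.0f, R2 = %.0f ohms" % (Re, Rc, R1, R2))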

Coupling Methods

Transformer coupling and capacitance coupling are commonly used in transistor and electron-tube audio amplifiers. Direct coupling is also used in transistor stages and particularly in integrated transistor amplifiers.

Capacitance coupling, referred to as RC coupling, is the most common method of coupling stages of an audio amplifier. The discrete-component transistorized amplifier stage shown in Fig. 11.2.6 serves as an example of RC coupling, where Ci and Co are the input and output coupling capacitors, respectively.


FIGURE 11.2.7 Transformer-coupled pnp transistor-amplifier stage.


FIGURE 11.2.8 Classes of amplifier operation, based on transistor characteristics.

Transformer coupling is commonly used to match the input and output impedances of electron-tube amplifier stages. Since the input impedance of an electron tube is very high at audio frequencies, the design of an electron-tube stage depends primarily on the transformer parameters. The much lower input impedances of transistors demand that many other factors be taken into account, and the design becomes more complex. The output-stage transformer coupling to a specific load is often the optimum method of realizing the best power match. Figure 11.2.7 illustrates a typical transformer-coupled transistor audio-amplifier stage. The direct-coupling approach is now also used for discrete-component transistorized amplifiers, and particularly in integrated amplifier versions. The level-shifting requirement is realized by selection from the host of available components, such as npn and pnp transistors and zener diodes. Since it is difficult to realize large-size capacitors via integrated-circuit techniques, special methods have been developed to direct-couple integrated amplifiers.

Classes A, B, AB, and C Operation

The output or power stage of an amplifier is usually classified as operating class A, B, AB, or C, depending on the conduction characteristics of the active devices (see Fig. 11.1.5). These definitions can also apply to any intermediate amplifier stage. Figure 11.2.8 illustrates relations between the class of operation and conduction, using transistor parameters. This figure would be essentially the same for an electron-tube amplifier with the tube plate current and grid voltage as the equivalent device parameters. Subscripts may be used to denote additional conduction characteristics of the device. For example, the electron-tube grid conduction can be further classified as A1, to show that no grid current flows, or A2, to show that grid-current conduction exists during some portion of the cycle.

Push-Pull Amplifiers

In a single-ended amplifier the active devices conduct continuously. The single-ended configuration is generally used in low-power applications, operated in class A. For example, preamplifiers and low-level amplifiers are generally operated single-ended, unless the output power levels necessitate the more efficient power handling of the push-pull circuit.

In a push-pull configuration there are at least two active devices that alternately amplify the negative and positive cycles of the input waveform. The output connection to the load is most often transformer-coupled. An example of a transformer input and output in a push-pull amplifier is illustrated in Fig. 11.2.9. Direct-coupled push-pull amplifiers and capacitively coupled push-pull amplifiers are also feasible, as illustrated in Fig. 11.2.10.


FIGURE 11.2.9 Transformer-coupled push-pull transistor stage.

The active devices in push-pull are usually operated either in class B or AB because of the high power-conversion efficiency. Feedback techniques can be used to stabilize gain, stabilize biasing or operating points, minimize distortion, and the like.

Output Amplifiers

The function of an audio output amplifier is to interface with the preceding amplifier stages and to provide the necessary drive to the load. Thus the output-amplifier designation does not uniquely identify a particular amplifier class. When several different types of amplifiers are cascaded between the signal source and its load, e.g., a high-power speaker, the last-stage amplifier is designated as the output amplifier. Because of the high power requirements, this amplifier is usually a push-pull type operating either in class B or AB.

Stereo Amplifiers

A stereo amplifier provides two separate audio channels properly phased with respect to each other. The objective of this two-channel technique is to enhance the audio reproduction process, making it more realistic

FIGURE 11.2.10 (a) Direct- and (b) capacitively coupled push-pull stages.


and lifelike. It is also feasible to extend the system to contain more than two channels of information. A stereo amplifier is a complete system that contains its power supply and other commonly required control functions. Each channel has its own preamplifier, medium-level stages, and output power stage, with different gain and frequency responses for each mode of operation, e.g., for tape, phonograph, CD, and so forth. The input signal is selected from the phonograph input connection, tape input, or a tuner output. Special-purpose trims and controls are also used to optimize performance in each mode. The bandwidth of the amplifier extends to 20 kHz or higher.

AUDIO OSCILLATORS
Robert J. McFadyen

General Considerations

In the strict sense, an audio oscillator is limited to frequencies from about 15 to 20,000 Hz, but a much wider frequency range is included in most oscillators used in audio measurements, since knowledge of amplifier characteristics in the region above audibility is often required.

For the production of sinusoidal waves, audio oscillators consist of an amplifier having a nonlinear power-gain characteristic, with a path for regenerative feedback. Single- and multistage transistor amplifiers with LC or RC feedback networks are most often used. The term harmonic oscillator is used for these types. Relaxation oscillators, which may be designed to oscillate in the audio range, exhibit sharp transitions in the output voltages and currents. Relaxation oscillators are treated in Section 14.

The instantaneous excursions of the operating point in a harmonic oscillator are restricted to the range where the circuit exhibits an impedance with a negative real part. The amplifier supplies the power, which is dissipated in the feedback path and the load. The regenerative feedback would cause the amplitude of oscillation to grow without bound were it not for the fact that the dynamic range of the amplifier is limited by circuit nonlinearities. Thus, in most sine-wave audio oscillators, the operating frequency is determined by passive-feedback elements, whereas the amplitude is controlled by the active-circuit design.

Analytical expressions predicting the frequency and required starting conditions for oscillation can be derived using Bode's amplifier feedback theory and the stability theorem of Nyquist. Since this analytical approach is based on a linear-circuit model, the results are approximate but usually suitable for the design of sinusoidal oscillators. No prediction of waveform amplitude results, since this is determined by nonlinear-circuit characteristics. Estimates of the waveform amplitude can be made from the bias and limiting levels of the active circuits. Separate limiters and AGC techniques are also useful for controlling the amplitude to a prescribed level. Graphical and nonlinear analysis methods can also be used for obtaining a prediction of the amplitude of oscillation.

A general formulation suitable for a linear analysis of almost all audio oscillators can be derived from the feedback diagram in Fig. 11.2.11. Note that the amplifier internal feedback generator has been neglected; that is, y12A is assumed to be zero. This assumption of unilateral amplification is almost always valid in the audio range, even for single-stage transistor amplifiers. The stability requirements for the circuit are derived from the closed-loop-gain expression

Ac = A/(1 − Aβ)    (1)

where the gain A is treated as a negative quantity for an inverting amplifier. Infinite closed-loop gain occurs when Aβ is equal to unity, and this defines the oscillatory condition. In terms of the equivalent-circuit parameters used in Fig. 11.2.11,

1 − Aβ = 1 − y21A · y12β / [(y11A + y11β)(y22A + y22β) − y12β y21β]    (2)


FIGURE 11.2.11 Oscillator representations: (a) generalized feedback circuit; (b) equivalent y-parameter circuit.

In the audio range, y21A remains real, but the fractional portion of the function is complex because β is frequency-sensitive. Therefore, the open-loop gain Aβ can be expressed in the general form

Aβ = y21A (Ar + jAi)/(Br + jBi)    (3)

It follows from Nyquist's stability theorem that this feedback system will be unstable if, first, the phase shift of Aβ is zero and, second, the magnitude is equal to or greater than unity. Applying this criterion to Eq. (3) yields the following two conditions for oscillation:

AiBr − ArBi = 0    (4)

y21A² ≥ (Br² + Bi²)/(Ar² + Ai²)    (5)

Equation (4) results from the phase condition and determines the frequency of oscillation. The inequality in Eq. (5) is the consequence of the magnitude constraint and defines the necessary condition for sustained oscillation. Equation (5) is evaluated at the oscillation frequency determined from Eq. (4). A large number of single-stage oscillators have been developed in both vacuum-tube and transistor versions; the transistor circuits followed by direct analogy from the earlier vacuum-tube circuits. In the following examples, transistor versions are illustrated, but the y-parameter equations apply to other devices as well.
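Equations (4) and (5) lend themselves to a quick numerical check. The short sketch below (an illustration added here, not part of the original analysis) scans the loop gain Aβ(jω) of an assumed inverting amplifier driving a three-section low-pass RC ladder, locates the zero-phase frequency of Eq. (4), and then tests the magnitude condition of Eq. (5); all component values are hypothetical.

```python
import numpy as np

# Minimal numerical sketch of Eqs. (4) and (5): find the frequency where the
# loop-gain phase is zero, then test whether |A*beta| >= 1 there.  The network
# (three series-R/shunt-C low-pass sections) and all values are assumptions.
R, C = 10e3, 16e-9      # illustrative ladder components
A = -35.0               # inverting-amplifier midband gain (negative quantity)

def beta(w):
    """Unloaded transfer of three cascaded series-R/shunt-C sections (ABCD)."""
    s = 1j * w
    sec = np.array([[1.0, R], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [s * C, 1.0]])
    abcd = sec @ sec @ sec
    return 1.0 / abcd[0, 0]          # Vout/Vin with no load on the output

w = np.linspace(1e3, 1e5, 20_000)
loop = np.array([A * beta(wi) for wi in w])
zero_phase = np.nonzero(np.diff(np.sign(np.angle(loop))))[0]   # Eq. (4)
for i in zero_phase:
    if abs(loop[i]) >= 1.0:                                    # Eq. (5)
        print(f"oscillation near {w[i]/(2*np.pi):.0f} Hz, |A*beta| = {abs(loop[i]):.2f}")
```

For an ideal unloaded ladder of this form, the scan lands near ω = √6/RC, where the network attenuation is 1/29, so a gain magnitude of 35 satisfies Eq. (5); this is consistent with the RC phase-shift relations given later in this chapter.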

LC Oscillators

The Hartley oscillator circuit is one of the oldest forms; the transistor version is shown in Fig. 11.2.12. With the collector and base at opposite ends of the tuned circuit, the 180° phase relation is secured, and feedback occurs through the mutual inductance between the two parts of the coil. The frequency and condition for oscillation are expressed in terms of the transistor y parameters and the feedback inductance L, inductor coupling coefficient k, inductance ratio n, and tuning capacitance C. The frequency of oscillation is

ω² = 1/[LC(1 + 2k√n + n) + nL²(1 − k²)(y11A y22A)]


FIGURE 11.2.12 Hartley oscillator circuit.

FIGURE 11.2.13 Colpitts oscillator circuit.

The condition for oscillation is

y21A ≥ [y11A + ny22A + nω²LC(1 − k²)(y11A y22A)] / [k√n + nω²LC(1 − k²)]

The admittance parameters of the bias network R1, R2, and R3, as well as the reactances of the bypass capacitor C1 and coupling capacitor C2, have been neglected. These admittances could be included in the amplifier y parameters in cases where their effect is not negligible. If

C/L >> n(1 − k²)(y11A y22A)/(1 + 2k√n + n)    (6)

the frequency of oscillation will be essentially independent of transistor parameters.

The transistor version of the Colpitts oscillator is shown in Fig. 11.2.13. Capacitors C and nC in combination with inductance L determine the resonant frequency of the circuit. A fraction of the current flowing in the tank circuit is regeneratively fed back to the base through the coupling capacitor C2. Bias resistors R1, R2, R3, and RL, as well as capacitors C1 and C2, are chosen so as not to affect the frequency or conditions for oscillation. The frequency of oscillation is

ω² = (1/LC)(1 + 1/n) + (y11A y22A)/(nC²)

The condition for oscillation is

y21A ≥ ω²LC(ny11A + y22A) − (y11A + y22A)

Alternatively, the bias-element admittances may be included in the amplifier y parameters. In the Colpitts circuit, if the ratio C/L is chosen so that

C/L >> (y11A y22A)/(1 + n)    (7)

the frequency of oscillation is essentially determined by the tuned-circuit parameters.
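To make the Colpitts relations concrete, the sketch below evaluates the frequency and the minimum y21A for an assumed set of element values; the numbers are illustrative only and are not taken from this handbook.

```python
import math

# Colpitts relations above, evaluated for assumed (illustrative) values.
L, C, n = 100e-6, 100e-9, 10.0   # tank inductance, capacitor C, ratio n (base capacitor = nC)
y11A, y22A = 1e-4, 2e-5          # assumed amplifier input/output admittances, S

w2 = (1.0 / (L * C)) * (1.0 + 1.0 / n) + (y11A * y22A) / (n * C**2)
print(f"f0 ~ {math.sqrt(w2) / (2 * math.pi) / 1e3:.1f} kHz")

y21A_min = w2 * L * C * (n * y11A + y22A) - (y11A + y22A)
print(f"requires y21A >= {y21A_min * 1e3:.2f} mS")

# Eq. (7): check that the tuned circuit dominates the frequency
print(f"C/L = {C / L:.1e}  vs  y11A*y22A/(1+n) = {y11A * y22A / (1 + n):.1e}")
```

Here C/L exceeds the admittance product by more than six orders of magnitude, so Eq. (7) is comfortably satisfied and the oscillation frequency is set by L and the capacitive divider.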


FIGURE 11.2.14 Tuned-collector oscillator.

FIGURE 11.2.15 RC oscillator with high-pass feedback network.

Another oscillator configuration useful in the audio-frequency range is the tuned-collector circuit shown in Fig. 11.2.14. Here regenerative feedback is furnished via the transformer turns ratio N from the collector to base. The frequency of oscillation is

ω² = 1/[LC + N²L²(y11A y22A)(1 − k²)]

The condition for oscillation is

y21A ≥ ω²NLCy11A/[(1 − k²)Nk] − (N²y11A + y22A)/k

If the ratio of C/L is such that

C/L >> N²y11A y22A(1 − k²)    (8)

the frequency of oscillation is specified by ω² = 1/LC. This circuit can be tuned over a wide range by varying the capacitor C and is compatible with simple biasing techniques.

RC Oscillators

Audio sinusoidal oscillators can be designed using an RC ladder network (of three or more sections) as the feedback path in an amplifier. This scheme originally appeared in vacuum-tube circuits, but the principles have been directly extended to transistor design. RC phase-shift oscillators can be distinguished from tuned oscillators in that the feedback network has a relatively broad frequency-response characteristic. Typically, the phase-shift network has three RC sections of either a high-pass or a low-pass nature. Oscillation occurs at the frequency where the total phase shift is 180° when used with an inverting amplifier. Figures 11.2.15 and 11.2.16 show examples of high-pass and low-pass feedback-connection schemes. The amplifier is a differential pair with a transistor current source, a configuration which is common in integrated-circuit amplifiers. The output is obtained at the collector opposite the feedback connection, since this minimizes external loading on the phase-shift network. The conditions for, and the frequency of, oscillation are derived assuming that the input resistance of the amplifier, which loads the phase-shift network, has been adjusted to equal the


resistance R. The load resistor RL is considered to be part of the amplifier output resistance, and it is included in y22A. The frequency of oscillation for the high-pass case is

ω² = y22A/[2C²R(2 + 3Ry22A)]

The condition for oscillation for the high-pass case is

y21A ≥ (1/R)[(1 + 5R/RL)/(ω²R²C²) − R/RL − 3]

The frequency of oscillation for the low-pass case is

ω = (1/RC)√(6 + 4R/RL)

The condition for oscillation for the low-pass case is

y21A ≥ (1/R)(23 + 29R/RL + 4R²/RL²)
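The low-pass design relations are easily evaluated numerically; in the sketch below the component values are assumed for illustration.

```python
import math

# Low-pass RC phase-shift relations above, for assumed component values.
R, C, RL = 10e3, 10e-9, 10e3     # illustrative

w = (1.0 / (R * C)) * math.sqrt(6 + 4 * R / RL)
print(f"f0 ~ {w / (2 * math.pi):.0f} Hz")         # ~5 kHz for these values

y21A_min = (1 / R) * (23 + 29 * R / RL + 4 * (R / RL) ** 2)
print(f"requires y21A >= {y21A_min * 1e3:.1f} mS")  # 5.6 mS here
```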

Null-Network Oscillators

In almost all respects, null-network oscillators are superior to the RC phase-shift circuits described in the previous paragraphs. While many null-network configurations are useful (including the bridged-T and twin-T), the Wien bridge design predominates. The general form of the Wien bridge oscillator is shown in Fig. 11.2.17. In the figure, an ideal differential voltage amplifier is assumed, i.e., one with infinite input impedance and zero output impedance.

Frequency of oscillation (M = N = 1):

ω0 = 1/RC

Condition for oscillation:

A ≥ 3(R1 + R2)/(R1 − 2R2)

FIGURE 11.2.16 RC oscillator with low-pass feedback network.

FIGURE 11.2.17 Wien bridge oscillator circuit.


An integrated-circuit operational amplifier that has a differential input stage is a practical approximation to this type of amplifier and is often used in bridge-oscillator designs. The Wien bridge is used as the feedback network, with positive feedback provided through the RC branches for regeneration and negative feedback through the resistor divider. Usually the resistor-divider network includes an amplitude-sensitive device in one or both arms which provides automatic correction for variation of the amplifier gain. Circuit elements such as a tungsten lamp, a thermistor, or a field-effect transistor used as the voltage-sensitive resistance element maintain a constant output level with a high degree of stability. Amplitude variations of less than ±1 percent over the band from 10 to 100,000 Hz are realizable. In addition, since the amplifier is never driven into the nonlinear region, harmonic distortion in the output waveform is minimized. For the connection shown in Fig. 11.2.17, an increase in V will cause a decrease in R2, restoring V to the original level.

The lamp or thermistor has a thermal time constant that sets a lower frequency limit on this method of amplitude control. When the period is comparable with the thermal time constant, the change in resistance over an individual cycle distorts the output waveform. There is an additional degree of freedom with the field-effect transistor, since the control voltage must be derived by a separate detector from the amplifier output. The time constant of the detector, and hence of the resistor, is set by a capacitor, which can be chosen commensurate with the lowest oscillation frequency desired.

At ω0 the positive feedback predominates, but at harmonics of ω0 the net negative feedback reduces the distortion components. Typically, the output waveform exhibits less than 1 percent total harmonic distortion. Distortion components well below 0.1 percent in the mid-audio-frequency range are also achieved.

Unlike LC oscillators, in which the frequency is inversely proportional to the square root of L and C, in the Wien bridge ω0 varies as 1/RC. Thus, a tuning range in excess of 10:1 is easily achieved. Continuous tuning within one decade is usually accomplished by varying both capacitors in the reactive feedback branch. Decade changes are normally accomplished by switching both resistors in the resistive arm. Component tracking problems are eased when the resistors and capacitors are chosen to be equal.

Almost any three-terminal null network can be used for the reactive branch in the bridge; the resistor-divider network adjusts the degree of imbalance in the manner described. Many of these networks lack the simplicity of the Wien bridge, since they may require the tracking of three components for frequency tuning. For this reason networks such as the bridged-T and twin-T are usually restricted to fixed-tuned applications.
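A brief numerical sketch of the Wien bridge relations (M = N = 1); the component values below, including the divider arms, are assumed for illustration.

```python
import math

# Wien bridge relations above (M = N = 1), with assumed example values.
R, C = 16e3, 10e-9            # frequency-setting arms
R1, R2 = 10e3, 4.7e3          # resistor divider (R2 would be the lamp/thermistor arm)

f0 = 1 / (2 * math.pi * R * C)
A_min = 3 * (R1 + R2) / (R1 - 2 * R2)
print(f"f0 ~ {f0:.0f} Hz, requires A >= {A_min:.1f}")
```

Because R2 here is only slightly below R1/2, the required gain is large; as the lamp or thermistor heats and R2 drops, the needed gain falls and the amplitude stabilizes.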

Low-Frequency Crystal Oscillators

Quartz-crystal resonators are used where frequency stability is a primary concern. The frequency variations with both time and temperature are several orders of magnitude lower than those obtainable in LC or RC oscillator circuits. The very high stiffness and elasticity of piezoelectric quartz make it possible to produce resonators extending from approximately 1 kHz to 200 MHz. The performance characteristics of a crystal depend on both the particular cut and the mode of vibration (see Section 5). For convenience, each “cut-mode” combination is considered as a separate piezoelectric element, and the more commonly used elements have been designated with letter symbols. The audio-frequency range (above 1 kHz) is covered by elements J, H, N, and XY, as shown in Table 11.2.2. The temperature coefficients vary with frequency, i.e., with the crystal dimensions, and except for the H element, a parabolic frequency variation with temperature is observed.

TABLE 11.2.2 Low-Frequency Crystal Elements

Symbol   Cut          Mode of vibration          Frequency range, kHz
J        Duplex 5°X   Length-thickness flexure   0.9–10
H        5°X          Length-width flexure       10–50
N        NT           Length-width flexure       4–200
XY       XY           XY flexure                 8–40


The H element is characterized by a negative temperature coefficient on the order of −10 ppm/°C. The other elements have lower temperature coefficients, which at some temperatures are zero because of the parabolic nature of the frequency-deviation curve. The point where the zero temperature coefficient occurs is adjustable and varies with frequency. At temperatures below this point the coefficient is positive, and at higher temperatures it is negative. On the slope of the curves the temperature coefficients for the N and XY elements are on the order of 2 ppm/°C, whereas the J element is about double that, at 4 ppm/°C.

Although the various elements differ in both cut and mode of vibration, the electric equivalent circuit remains invariant. The schematic representation and the lumped-constant equivalent circuit are shown in Fig. 11.2.18. As is characteristic of most mechanical resonators, the motional inductance L resulting from the mechanical mass in motion is large relative to that obtainable from coils. The extreme stiffness of quartz makes for very small values of the motional capacitance C, and the very high order of elasticity allows the motional resistance R to be relatively low. The shunt capacitance C0 is the electrostatic capacitance existing between the crystal electrodes, with the quartz plate as the dielectric, and is present whether or not the crystal is in mechanical motion. Some typical values for these equivalent-circuit parameters are shown in Table 11.2.3.

FIGURE 11.2.18 Symbol and equivalent circuit of a quartz crystal.

The H element can have a high Q value when mounted in a vacuum enclosure; however, it then has the poorest temperature coefficient. The N element exhibits an excellent temperature characteristic, but the piezoelectric activity is rather low, so that special care is required when it is used in oscillator circuits. The J and XY elements operate well in low-frequency oscillator designs, the latter having lower temperature drift. For the same frequency the XY crystal is about 40 percent longer than the J element. Where extreme frequency stability is required, the crystals are usually controlled to a constant temperature.

The reactance curve of a quartz resonator is shown in Fig. 11.2.19. The zero occurs at the frequency fs, which corresponds to series resonance of the mechanical L and C equivalences. The antiresonant frequency fp is dependent on the interelectrode capacitance C0. Between fs and fp the crystal is inductive, and this frequency range is normally referred to as the crystal bandwidth

BW = fs/(2C0/C)    (9)

In oscillator circuits the crystal can be used as either a series or a parallel resonator. At series resonance the crystal impedance is purely resistive, but in the parallel mode the crystal is operated between fs and fp and is therefore inductive. For oscillator applications the circuit capacitance shunting the crystal must also be included when specifying the crystal, since it is part of the resonant circuit. If a capacitor CL (that is, a negative reactance) is placed in series with the crystal, the combination will series-resonate at the frequency fR of zero reactance for the combination:

fR = fs[1 + 1/((2C0/C)(1 + CL/C0))]    (10)

TABLE 11.2.3 Typical Crystal Parameter Values

Element   Frequency, kHz   L, H     C, pF   R, kΩ   C0, pF   Q, approx
J         10               8,000    0.03    50      6        20,000
H         10               2,500    0.1     10      75       20,000
N         10               8,000    0.03    75      30       10,000
XY        10               12,000   0.02    30      20       30,000


FIGURE 11.2.19 Quartz-crystal reactance curve.

FIGURE 11.2.20 Crystal oscillator using an integrated-circuit operational amplifier.

The operating frequency varies with changes in the load capacitance, and this variation is prescribed by

ΔfR = fs (ΔCL/C0) / [(2C0/C)(1 + CL/C0)²]    (11)
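Using the XY-element values from Table 11.2.3, the sketch below evaluates Eqs. (9) to (11); the 30-pF load capacitor and the 1-pF detuning step are assumed for illustration.

```python
# Eqs. (9)-(11) evaluated with the XY-element values of Table 11.2.3.
fs = 10e3            # series-resonant frequency, Hz
C, C0 = 0.02, 20.0   # motional and shunt capacitance, pF (only the ratio matters)
CL, dCL = 30.0, 1.0  # assumed load capacitor and a 1-pF change, pF

bw = fs / (2 * C0 / C)                                       # Eq. (9)
fR = fs * (1 + 1 / ((2 * C0 / C) * (1 + CL / C0)))           # Eq. (10)
dfR = fs * (dCL / C0) / ((2 * C0 / C) * (1 + CL / C0) ** 2)  # Eq. (11)
print(f"BW = {bw:.1f} Hz, fR = {fR:.1f} Hz, dfR = {dfR:.3f} Hz per pF")
```

The fractional pulling is of order 10⁻⁴, consistent with the roughly ±0.01 percent VCO range noted below.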

This effect can be used to “pull” the crystal for initial alignment, or, if the external capacitor is a voltage-controllable device, a VCO with a range of about ±0.01 percent can be constructed. Phase changes in the amplifier will also give rise to frequency shifts, since the total phase around the loop must remain at 0° to maintain oscillation.

Although single-stage transistor designs are possible, more flexibility is available in the circuit of Fig. 11.2.20, which uses an integrated-circuit operational amplifier for the gain element. The crystal is operated in the series mode, and the amplifier gain is precisely controlled by the negative-feedback divider R2 and R3. The output will be sinusoidal if

VD [R1/(R1 + R)](1 + R3/R2) < Vlim    (12)

where VD is the limiting-diode forward voltage drop and Vlim is the limiting level of the amplifier output.

Low-cost electronic wristwatches use quartz crystals for establishing a high degree of timekeeping accuracy. A high-quality mechanical watch may have a yearly accuracy on the order of 20 min, whereas many quartz watches are guaranteed to vary less than 1 min/year. Generally the XY crystal is used, but other types are continually being developed to improve accuracy, reduce size, and lower manufacturing cost. The active gain elements for the oscillator are part of the integrated circuit that contains the electronics for the watch functions. The flexure or tuning-fork frequency is generally set to 32,768 Hz, which is 2¹⁵ Hz. This frequency reference is divided down on the integrated circuit to provide seconds, minutes, hours, day of the week, date, month, and so forth.
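The divide-down chain is simply a string of 15 binary dividers; a trivial sketch of the arithmetic:

```python
# The 2**15 Hz crystal reference divided by 15 cascaded flip-flops gives 1 Hz.
f = 32_768                 # Hz
for _ in range(15):        # each divider stage halves the frequency
    f //= 2
print(f, "Hz")             # -> 1 Hz timebase for the seconds count
```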

A logic gate or inverter is often used as the gain element in the oscillator circuit. A typical configuration is shown in Fig. 11.2.21. The resistor R1 is used to bias the logic inverter for class A amplifier operation. The resistor R2 helps reduce both the voltage sensitivity of the network and the crystal power dissipation. The combination of R2 and C2 provides added phase shift for good oscillator startup. The series combination of capacitors

C1 and C2 provides the parallel load for the crystal. C1 can be made tunable for precise setting of the crystal oscillation frequency. The inverter provides the necessary gain and 180° phase shift. The π network consisting of the capacitors and the crystal provides the additional 180° of phase shift needed to satisfy the conditions for oscillation.

Frequency Stability

Many factors contribute to the ability of an oscillator to hold a constant output frequency over a period of time; they range from short-term effects, caused by random noise, to longer-term variations, caused by circuit-parameter dependence on temperature, bias voltage, and the like. In addition to the temperature and aging effects of the frequency-determining elements, nonlinearities, impedance loading, and amplifier phase variations also contribute to instability.

Harmonics generated by circuit nonlinearities are passed through the feedback network, with various phase shifts, to the input of the amplifier. Intermodulation of the harmonic frequencies produces a fundamental-frequency component that differs in phase from the amplifier output. Since the condition Aβ = 1 must be satisfied, the frequency of oscillation will shift so that the network phase shift cancels the phase perturbation caused by the nonlinearity. Therefore, the frequency of oscillation is influenced by an unpredictable amplifier characteristic, namely, the saturation nonlinearity. This effect is negligible in the Wien bridge oscillator, where automatic level control keeps harmonic distortion to a minimum.

FIGURE 11.2.21 Crystal oscillator using a logic gate for the gain element.

The relationships shown in Fig. 11.2.17 were derived assuming that the amplifier does not load the bridge circuit on either the input or output side. In the practical sense this is never true, and changes in the input and output impedances will load the bridge and cause frequency variations to occur.

Another source of frequency instability is small phase changes in the amplifier. The effect is minimized by using a network with a large stability factor, defined by

S = dφ/d(ω/ω0) |ω=ω0    (13)

For the Wien bridge oscillator, which has amplitude-sensitive resistive feedback, the RC impedances can be optimized to provide a maximum stability factor value. As shown in Fig. 11.2.17, this amounts to choosing proper values for M and N. The maximum stability-factor value is A/4, and it occurs for N = 1/2 and M = 2. Most often the bridge is used with equal resistor and capacitor values; that is, M = N = 1, in which case the stability factor is 2A/9. This represents only a slight degradation from the optimum.

Synchronization

It is often desirable to lock the oscillator frequency to an input reference. Usually this is done by injecting sufficient energy at the reference frequency into the oscillator circuit. When the oscillator is tuned sufficiently close to the reference, natural oscillations cease and the synchronization signal is amplified to the output. Thus the circuit appears to oscillate at the injected signal frequency. The injected reference is amplitude-stabilized by the AGC or limiting circuit in the same manner as the natural oscillation. The frequency range over which locking can occur is a linear function of the amplitude of the injected signal. Thus, as the synchronization frequency is moved away from the natural oscillator frequency, the amplitude threshold to maintain lock increases. The phase


error between the input reference and the oscillator output will also deviate as the input frequency varies from the natural frequency. Methods for injecting the lock signal vary and depend on the type of oscillator under consideration. For example, LC oscillators may have signals coupled directly to the tank circuit, whereas the lock signal for the Wien network is usually coupled into the center of the resistive side of the bridge, i.e., the junction of R1 and R2 in Fig. 11.2.17. If the natural frequency of oscillation can be voltage-controlled, synchronization can be accomplished with a phase-locked loop. Replacing both R's with field-effect transistors, or alternatively shunting both C's with varicaps, provides an effective means for voltage-controlling the frequency of the Wien bridge oscillator. Although more complicated in structure, the phase-locked loop is more versatile and has many diverse applications.

Piezoelectric Annunciators

Another important class of audio oscillators uses piezoelectric elements for both frequency control and audible-sound generation. Because of their low cost and high efficiency, these devices are finding increasing use in smoke detectors, burglar alarms, and other warning devices. Annunciators using these elements typically produce a sound level in excess of 85 dB measured at a distance of 10 ft.

Usually the element consists of a thin brass disk to which a piezoelectric material has been attached. When an electric signal is applied across its surfaces, the piezoceramic disk attempts to change diameter. The brass disk to which it is bonded acts as a spring restraining force on one surface of the ceramic. The brass plate also serves as one electrode for applying the electric signal to the ceramic; on the other surface a fired-on silver paste is used as the electrode. The restraining action of the brass disk causes the assembly to change from a flat to a convex shape. When the polarity of the electric signal reverses, the assembly flexes in the other direction, to a concave shape. When the device is properly mounted in a suitable horn structure, this motion is used to produce high-level sound waves. One useful method is to clamp the disk at its nodal points, i.e., at the distance from the center of the disk where the mechanical motion is at a vibrational null.

The piezoelectric assembly produces sound levels most efficiently when excited near the series-resonant frequency. The simple equivalent circuit used for the quartz crystal (Fig. 11.2.18) also applies to the piezoceramic assembly for frequencies near resonance. Generally the piezoelectric element is used as the frequency-determining element in an audio oscillator. The advantage of this method is that the excitation frequency is inherently near the optimum value, since it is self-excited. A typical 1-in-diameter piezoceramic mounted on a 1 3/4-in brass disk would have the following equivalent values: C0 = 0.02 µF, C = 0.0015 µF, L = 2 H, R = 500 Ω, Q = 75, fs = 2.9 kHz, and fp = 3.0 kHz.

A basic oscillator capable of producing high-level sound is shown in Fig. 11.2.22. The inductor L1 provides a dc path to the transistor and broadly tunes the parallel input capacitance of the piezoelectric element. C1 is an optional capacitor which adds to the input shunt capacitance for optimizing the drive impedance to the element. Resistor R1 provides base-current bias to the transistor so that oscillation can start. The element has a third small electrode etched in the silver pattern. It is used to derive a feedback signal which, when resistively loaded by R1, provides an in-phase signal to the base for sustaining circuit oscillation. The circuit operates like a blocking oscillator in that the transistor is switched on and off, and the collector voltage can fly above B-plus because of the inductor L1. The collector load consisting of L1 and C1 can be replaced with a resistor, in which case the audio output will be less.

FIGURE 11.2.22 Basic audio annunciator oscillator circuit using a thin-disk piezoelectric transducer.
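The equivalent values quoted above are mutually consistent, as this short check using the standard series-resonator relations shows (the relations themselves are standard; only their use here is an added illustration):

```python
import math

# Consistency check of the piezoceramic equivalent-circuit values quoted above.
L, C, C0, R = 2.0, 0.0015e-6, 0.02e-6, 500.0

fs = 1 / (2 * math.pi * math.sqrt(L * C))   # series resonance  -> ~2.9 kHz
fp = fs * math.sqrt(1 + C / C0)             # antiresonance     -> ~3.0 kHz
Q = 2 * math.pi * fs * L / R                # motional Q        -> ~73 (quoted as 75)
print(f"fs = {fs/1e3:.1f} kHz, fp = {fp/1e3:.1f} kHz, Q = {Q:.0f}")
```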


CHAPTER 11.3

RADIO-FREQUENCY AMPLIFIERS AND OSCILLATORS

G. Burton Harrold, John W. Lunden, Jennifer E. Doyle, Chang S. Kim, Conrad E. Nelson, Gunter K. Wessel, Stephen W. Tehon, Y. J. Lin, Wen-Chung Wang, Harold W. Lord

RADIO-FREQUENCY AMPLIFIERS
G. Burton Harrold

Small-Signal RF Amplifiers

The prime considerations in the design of first-stage rf amplifiers are gain and noise figure. As a rule, the gain of the first rf stage should be greater than 10 dB, so that subsequent stages contribute little to the overall amplifier noise figure. The trade-off between amplifier cost and noise figure is an important design consideration. For example, if the environment in which the rf amplifier operates is noisy, it is uneconomic to demand the ultimate in noise performance. Conversely, where a direct trade-off exists between transmitter power and amplifier noise performance, as it does in many space applications, money spent to obtain the best possible noise figure is fully justified.

Another consideration in many systems is the input-output impedance match of the rf amplifier. For example, TV cable distribution systems require an amplifier whose input and output match produce little or no line reflections. The performance of many rf amplifiers is also specified for large-signal handling, to minimize cross- and intermodulation products in the output. The wide acceptance of transistors has placed an additional constraint on first-stage rf amplifiers, since many rf transistors having low noise, high gain, and high frequency response are susceptible to burnout and must be protected to prevent destruction in the presence of high-level input signals. Another common requirement is that first rf stages be gain-controlled by an automatic gain control (AGC) voltage. The amount of gain control and the linearity of control are system parameters. Many rf amplifiers have the additional requirement that they be tuned over a range of frequencies. In most receivers, regardless of configuration, local-oscillator leakage back to the input is strictly controlled by government regulation. Finally, the rf amplifier must be stable under all conditions of operation.

Device Evaluation for RF Amplifiers

An important consideration in an rf amplifier is the choice of the active device. Information on device parameters can often be found in published data sheets. If parameter data are not available, or are not given at a suitable operating point, the following characterization techniques can be used.

Network Analyzers. The development of the modern network analyzer has eliminated much of the work in device and circuit evaluation. These systems automate sweep-frequency measurements of the complex device or


circuit parameters and avoid the tedious calculations that were previously required. The range of measurement frequencies extends from a few hertz to 60 GHz. Network analyzers perform the modeling function by measuring the transfer and impedance functions of the device by means of sine-wave excitation. These transmitted and reflected voltages/currents are then separated, and the proper ratios are formed to define the device parameters. The results are then displayed graphically and/or in digital form for designer use. Newer systems allow these data to be transferred directly to computerized design programs, thus automating the total design process. The principle of operation is similar to that described below under Vector Voltmeter.

Rx Meter.* This measurement technique is usually employed at frequencies below 200 MHz for active devices that have high input and output impedances. The technique, with the assumptions tacit in these measurements and the biasing arrangements, is summarized in Fig. 11.3.1. In particular, the measurement of h22b requires a very large resistor Re to be inserted in the emitter, and this may cause difficulty in achieving the proper biasing. Care should be taken to prevent burnout of the bridge when a large dc bias is applied. The bridge's drive to the active device may be reduced for more accurate measurement by varying the B-plus voltage applied to the internal oscillator.

FIGURE 11.3.1 Use of the Rx meter in device characterization.

Vector Voltmeter.* This characterization technique measures the S parameters; see Fig. 11.1.9. The measurement consists of inserting the device in a transmission line, usually of 50-Ω characteristic impedance, and measuring the incident and reflected voltages at the two ports of the device.

*Trademark of the Hewlett Packard Co.


FIGURE 11.3.2 Noise-measurement techniques: (a) at low frequencies; (b) at high frequencies.

Several other techniques include the use of the H-P 8743 reflectometer, the General Radio bridge GR 1607, the Rohde & Schwarz Diagraph, and the H-P type 8510 microwave network analyzer to measure device parameters automatically from 45 MHz to 100 GHz, with display and printout features.

Noise in RF Amplifiers

A common technique employing a noise source to measure the noise performance of an rf amplifier is shown in Fig. 11.3.2. Initially the external noise source (a temperature-limited diode) is turned off, the 3-dB pad is short-circuited, and the reading on the output power meter is recorded. The 3-dB pad is then inserted, the noise source is turned on, and its output is increased until a reading equal to the previous one is obtained. The noise figure can then be read directly from the noise source, or calculated from 1 plus the added noise per unit bandwidth divided by the standard available noise power kT0, where T0 = 290 K and k = Boltzmann's constant = 1.38 × 10⁻²³ J/K.

At higher frequencies, the use of a temperature-limited diode is not practical, and a gas-discharge tube or a hot-cold noise source is employed. The Y-factor technique of measurement is used. The output from the device to be measured is fed into a mixer, and the noise output is converted to a 30- or 60-MHz center-frequency (i.f.) output. A precision attenuator is then inserted between this i.f. output and the power-measuring device. The attenuator is adjusted to give the same power reading for two different conditions of noise power output, represented by effective temperatures T1 and T2. The Y factor is the difference in decibels between the two precision-attenuator values needed to maintain the same power-meter reading. The noise factor is

F = (T2/290 − Y T1/290)/(Y − 1) + 1

where T1 = effective temperature at reference condition 1
      T2 = effective temperature at reference condition 2
      Y = decibel reading defined in the text, converted to a numerical ratio
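A small sketch of the Y-factor arithmetic; the hot and cold source temperatures and the measured Y value below are assumed examples.

```python
import math

# Y-factor noise measurement, per the relation above (assumed example values).
T1, T2 = 290.0, 9460.0      # cold and hot effective source temperatures, K
Y_dB = 5.0                  # measured attenuator difference, dB
Y = 10 ** (Y_dB / 10)       # convert decibels to a numerical ratio

F = (T2 / 290 - Y * T1 / 290) / (Y - 1) + 1
print(f"noise factor F = {F:.2f}  ->  noise figure = {10 * math.log10(F):.1f} dB")
```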


In applying this technique it is often necessary to correct for the second-stage noise. This is done by use of the cascade formula

F1 = FT − (F2 − 1)/G1

where F1 = noise factor of first stage
      FT = overall noise factor measured
      F2 = noise factor of second-stage mixer and i.f. amplifier
      G1 = available gain of first stage

Large-Signal Performance of RF Amplifiers

The large-signal performance of an rf amplifier can be specified in many ways. A common technique is to specify the input at which the departure from a straight-line input-output characteristic is 1 dB. This point is commonly called the 1-dB compression point. The greater the input before this compression point is reached, the better the large-signal performance.

Another method of rating an rf amplifier is in terms of its third-order intermodulation performance. Here two different frequencies, f1 and f2, of equal powers, p1 and p2, are inserted into the rf amplifier, and the power p12 of the internally generated third frequency, 2f1 − f2 or 2f2 − f1, is measured. All three frequencies must be in the amplifier passband. With the intermodulation power p12 referred to the output, the following equation can be written:

P12 = 2P1 + P2 + K12

where P12 = intermodulation output power at 2f1 − f2 or 2f2 − f1
      P1 = output power at input frequency f1
      P2 = output power at input frequency f2, all in decibels referred to 1 mW (0 dBm)
      K12 = constant associated with the particular device

The value of K12 in the above formula can be used to rate the performance of various device choices. Higher orders of intermodulation products can also be used.

A third measure of large-signal performance commonly used is that of cross-modulation. In this instance, a carrier at fD with no modulation is inserted into the amplifier. A receiver is then placed at the output and tuned to this unmodulated carrier. A second carrier at fI with amplitude-modulation index MI is then added. The power PI of fI is increased, and its modulation is partially transferred to fD. The equation becomes

10 log (MK/MI) = PI + K

where MK = cross-modulation index of originally unmodulated signal at fD
      MI = modulation index of signal at fI
      PI = output power of signal at fI, all in decibels referred to 1 mW (0 dBm)
      K = cross-modulation constant

Maximum Input Power

In addition to the large-signal performance, the maximum power or voltage input to an rf amplifier is specified, with a requirement that device burnout must not occur at this input. There are two ways of specifying this input: by a stated pulse of energy or by a requirement to withstand a continuously applied large signal. It is also common to specify the time required to unblock the amplifier after removal of the large input. With the increased use of field-effect transistors (FETs) having good noise performance, these overload characteristics have become a severe problem. In many cases, conventional or zener diodes, in a back-to-back configuration shunting the input, are used to reduce the amount of power the input of the active devices must dissipate.
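The cascade correction and the third-order intermodulation relation are both one-line computations; in the sketch below all powers, gains, and the device constant K12 are assumed example values.

```python
import math

# Second-stage noise correction (cascade formula above), example values assumed.
FT = 10 ** (6.0 / 10)     # overall measured noise factor (6 dB)
F2 = 10 ** (8.0 / 10)     # mixer/i.f. noise factor (8 dB)
G1 = 10 ** (12.0 / 10)    # first-stage available gain (12 dB)
F1 = FT - (F2 - 1) / G1
print(f"first-stage noise figure = {10 * math.log10(F1):.2f} dB")

# Third-order intermodulation: P12 = 2*P1 + P2 + K12 (all in dBm).
P1, P2, K12 = -10.0, -10.0, -40.0   # assumed output powers and device constant
P12 = 2 * P1 + P2 + K12
print(f"third-order product at {P12:.0f} dBm")
```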

RF Amplifiers in Receivers

RF amplifiers intended for the first stages of receivers have additional restrictions placed on them. In most cases, such amplifiers are tunable across a band of frequencies with one or more tuned circuits. The tuned circuits must track across the frequency band, and in the case of the superheterodyne, tracking of the local oscillator is necessary so that a constant frequency difference (the i.f.) is maintained. The receiver's rf section can be tracked with the local oscillator by the two- or the three-point method, i.e., with zero error in the tracking at either two or three points.

A second consideration peculiar to rf amplifiers used in receivers is the AGC. This requirement is often stated by specifying a low-level rf input to the receiver and noting the power output. The rf signal input is then increased, with the AGC applied, until the output power has increased by a predetermined amount. This becomes a measure of the AGC effectiveness. The AGC performance can also be measured by plotting a curve of rf input versus the AGC voltage needed to maintain constant output, compared with the desired performance.

A third consideration in superheterodynes is the leakage of the receiver's local oscillator to the outside. This spurious radiation is specified by the Federal Communications Commission (FCC) in the United States.

Design Using Immittance and Hybrid Parameters

The general gain and the input-output immittances of an amplifier can be formulated, in terms of the z or y parameters, to be

Yin = y11 − y12y21/(y22 + yL)

Yout = y22 − y12y21/(y11 + ys)

where yL = load admittance
      ys = source admittance
      Yin = input admittance
      Yout = output admittance
      GT = transducer gain

and the transducer gain is

GT = 4 Re(ys) Re(yL) |y21|² / |(y11 + ys)(y22 + yL) − y12y21|²

for the y parameters; interchange of z for y is allowed. The stability of the circuit can be determined by either Linvill's C or Stern's k factor, as defined below. Using the y parameters, yik = gik + jbik, these are

Linvill:    C = |y12y21| / (2g11g22 − Re y12y21)

where C < 1 for stability; C does not include the effects of load and source admittance.

Stern:    k = 2(g11 + gs)(g22 + gL) / (|y12y21| + Re y12y21)

where k > 1 for stability
      gL = load conductance
      gs = source conductance

The preceding C factor defines only unconditional stability; i.e., no combination of load and source impedance will give instability. There is an invariant quantity K defined as

K = (2 Re γ11 Re γ22 − Re γ12γ21) / |γ21γ12|    (Re γ11 > 0, Re γ22 > 0)


where γ represents either the y, z, g, or h parameters, and K > 1 denotes stability. This quantity K has been used to define the maximum available power gain Gmax (only if K > 1):

Gmax = |γ21/γ12| (K − √(K² − 1))

γs =

γ 12γ 21 + |γ 12γ 21 | ( K + K 2 − 1) − γ 11 2 Re γ 22

γ s = sourrce immittance

γL =

γ 12γ 21 + |γ 12γ 21 | ( K + K 2 − 1) − γ 22 2 Re γ 11

γ L = load immittance

The procedure is to calculate the K factor and, if K > 1, calculate Gmax, γs, and γL. If K < 1, the circuit can be modified either by use of feedback or by adding immittances to the input and output.
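This procedure is readily mechanized. The sketch below evaluates Linvill's C, Stern's k, the invariant K, and (when K > 1) Gmax and the matching immittances; the y parameters and terminating conductances are assumed example values, not device data.

```python
import numpy as np

# Stability and gain calculations above, for assumed y parameters (siemens).
y11, y12 = 4e-3 + 2e-3j, 1e-5 - 2e-5j
y21, y22 = 50e-3 - 20e-3j, 1e-3 + 0.5e-3j
gs, gL = 2e-3, 1e-3          # assumed source/load conductances for Stern's k

p = y12 * y21
C = abs(p) / (2 * y11.real * y22.real - p.real)                  # Linvill
k = 2 * (y11.real + gs) * (y22.real + gL) / (abs(p) + p.real)    # Stern
K = (2 * y11.real * y22.real - p.real) / abs(p)                  # invariant K
print(f"Linvill C = {C:.3f} (<1 stable), Stern k = {k:.1f} (>1 stable), K = {K:.2f}")

if K > 1:
    root = np.sqrt(K**2 - 1)
    Gmax = abs(y21 / y12) * (K - root)
    ys = (p + abs(p) * (K + root)) / (2 * y22.real) - y11
    yL = (p + abs(p) * (K + root)) / (2 * y11.real) - y22
    print(f"Gmax = {10*np.log10(Gmax):.1f} dB, ys = {ys:.4f} S, yL = {yL:.4f} S")
```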

Design Using S Parameters

The advent of automatic test equipment and the extension of vacuum tubes and transistors to the gigahertz frequency range have led to design procedures using the S parameters. Following the previous discussion, the input and output reflection coefficients can be defined as

ρin = S11 + S12S21ρL/(1 − S22ρL)

ρout = S22 + S12S21ρs/(1 − S11ρs)

ρL = (ZL − Z0)/(ZL + Z0)

ρs = (Zs − Z0)/(Zs + Z0)

where Z0 = characteristic impedance
      ρin = input reflection coefficient
      ρout = output reflection coefficient

The transducer gain can be written

Gtransducer = |S21|²(1 − |ρs|²)(1 − |ρL|²) / |(1 − S11ρs)(1 − S22ρL) − S21S12ρsρL|²

The unconditional stability of the amplifier can be defined by requiring the input (output) impedance to have a positive real part for any load (source) impedance having a positive real part. This requirement gives the following criteria:

|S11|² + |S12S21| < 1    |S22|² + |S12S21| < 1

and

η = (1 − |S11|² − |S22|² + |Δs|²) / (2|S12S21|) > 1

Δs = S11S22 − S12S21


Similarly, the maximum transducer gain, for η > 1, becomes

Gmax transducer = |S21/S12| (η ± √(η² − 1))

(positive sign when |S22|² − |S11|² − 1 + |Δs|² > 0) for the conditions listed above. The source and load that provide a conjugate match to the amplifier when η > 1 are the solutions of the following equations, which give |ρms| and |ρmL| less than 1:

ρms = C1* [B1 ± √(B1² − 4|C1|²)] / (2|C1|²)

ρmL = C2* [B2 ± √(B2² − 4|C2|²)] / (2|C2|²)

where B1 = 1 + |S11|² − |S22|² − |Δs|²
      B2 = 1 + |S22|² − |S11|² − |Δs|²
      C1 = S11 − ΔsS22*
      C2 = S22 − ΔsS11*

the star (*) denoting the conjugate. If |η| > 1 but η is negative, or if |η| < 1, it is not possible to match the two-port simultaneously with real source and load admittances. Both graphical techniques and computer programs are available to aid in the design of rf amplifiers.
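As one example of such a program, the sketch below evaluates η, Gmax, and the conjugate-match source reflection coefficient from a set of assumed single-frequency S parameters (magnitude and angle in degrees); it is an illustration, not a design tool.

```python
import cmath, math

# S-parameter stability/gain relations above, for assumed example values.
def P(mag, deg):
    return mag * cmath.exp(1j * math.radians(deg))

S11, S12, S21, S22 = P(0.6, -160), P(0.05, 50), P(3.0, 70), P(0.5, -30)

D = S11 * S22 - S12 * S21                      # Delta_s
eta = (1 - abs(S11)**2 - abs(S22)**2 + abs(D)**2) / (2 * abs(S12 * S21))
print(f"eta = {eta:.2f}")

if eta > 1:
    # sign rule quoted in the text for Gmax
    sign = 1 if (abs(S22)**2 - abs(S11)**2 - 1 + abs(D)**2) > 0 else -1
    Gmax = abs(S21 / S12) * (eta + sign * math.sqrt(eta**2 - 1))
    B1 = 1 + abs(S11)**2 - abs(S22)**2 - abs(D)**2
    C1 = S11 - D * S22.conjugate()
    # conjugate-match source reflection: keep the root inside the unit circle
    roots = [C1.conjugate() * (B1 + s * cmath.sqrt(B1**2 - 4 * abs(C1)**2))
             / (2 * abs(C1)**2) for s in (+1, -1)]
    pms = min(roots, key=abs)
    print(f"Gmax = {10 * math.log10(Gmax):.1f} dB, |pms| = {abs(pms):.2f}")
```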

Intermediate-Frequency Amplifiers

Intermediate-frequency amplifiers consist of a cascade of a number of stages whose frequency response is determined either by a filter or by tuned interstages. The design of the individual active stages follows the techniques discussed earlier, but the interstages become important for frequency shaping. There are various forms of interstage networks; several important cases are discussed below.

Synchronous-Tuned Interstages. The simplest forms of tuned interstages are synchronously tuned circuits. The two common types are the single- and the double-tuned interstage. The governing equations are:

1. Single-tuned interstage (Fig. 11.3.3a):

A(jω) = −Ar / [1 + jQL(ω/ω0 − ω0/ω)]

where QL = loaded Q of the tuned circuit (greater than 10)
      ω0 = resonance frequency of the tuned circuit = 1/√(LC)
      ω = frequency variable
      Ar = midband gain, equal to gm times the midband impedance level

For an n-stage amplifier with n interstages,

AT = Aⁿ(jω) = Arⁿ [1 + ((ω² − ω0²)/(Bω))²]^(−n/2)


where B = ω0/QL = single-stage bandwidth
      n = number of stages
      ω0 = center frequency
      QL = loaded Q
      Bn = B√(2^(1/n) − 1) = overall bandwidth, reduced owing to the n cascaded stages

2. Double-tuned interstage (Fig. 11.3.3b):

A(jω) = [gm k / (C1C2(1 − k²)√(L1L2))] · jω / (ω⁴ − ja1ω³ − a2ω² + ja3ω + a4)

(for a single double-tuned stage), where

a1 = ωr(1/Q1 + 1/Q2)

a2 = [ωr²/(Q1Q2) + ω1² + ω2²] / (1 − k²)

a3 = [ωr/(1 − k²)](ω2²/Q1 + ω1²/Q2)

a4 = ω1²ω2² / (1 − k²)

FIGURE 11.3.3 Interstage coupling circuits: (a) single-tuned; (b) double-tuned.

The circuit parameters are
R1 = total resistance, primary side
C1 = total capacitance, primary side
L1 = total inductance, primary side
R2 = total resistance, secondary side
C2 = total capacitance, secondary side
L2 = total inductance, secondary side
M = mutual inductance = k√(L1L2)
k = coefficient of coupling
ωr = resonant frequency of amplifier
ω1 = 1/√(L1C1)
ω2 = 1/√(L2C2)
Q1 = primary Q at ωr = ωrC1R1
Q2 = secondary Q at ωr = ωrC2R2
gm = transconductance of active device at midband frequency

Simplification. If ω1 = ω2 = ω0, that is, primary and secondary tuned to the same frequency, then

ωr = ω0/√(1 − k²)

is the resonant frequency of the amplifier, and

A(jωr) = +jk gm √(R1R2) / [√(Q1Q2)(k² + 1/(Q1Q2))]

is the gain at this resonant frequency. For maximum gain,

kc = 1/√(Q1Q2) = critical coupling

and for maximum flatness,


kT = √[(1/2)(1/Q1² + 1/Q2²)] = transitional coupling

FIGURE 11.3.4 Selective curves for two identical circuits in a double-tuned interstage circuit, at various values of k/kc.

If k is increased beyond kT, a double-humped response is obtained. The overall bandwidth of an n-stage amplifier having equal-Q circuits with transitionally coupled interstages, each of bandwidth B, is

Bn = B(2^(1/n) − 1)^(1/4)

The governing equations for the double-tuned-interstage case are shown above. The response for various degrees of coupling relative to kT = kc in the equal-coil-Q case is shown in Fig. 11.3.4.

Maximally Flat Staggered Interstage Coupling

This type of coupling consists of n single-tuned interstages that are cascaded and adjusted so that the overall gain function is maximally flat. The overall cascade bandwidth is Bn, the center frequency of the cascade is ωc, and each stage is a single-tuned circuit whose bandwidth B and center frequency are determined from Table 11.3.1. The gain of each stage at the cascade center frequency is

A(jωc) = −gm / {CT[B + j(ωc² − ω0²)/ωc]}

where CT = sum of the output capacitance, the input capacitance of the next stage, and the wiring capacitance of the cascade; B = stage bandwidth; ω0 = center frequency of the stage; and ωc = center frequency of the cascade.


TABLE 11.3.1 Design Data for Maximally Flat Staggered n-tuples

n   Name of circuit       No. of stages   Center frequency of stage   Stage bandwidth
2   Staggered pair        2               ωc ± 0.35Bn                 0.71Bn
3   Staggered triple      2               ωc ± 0.43Bn                 0.50Bn
                          1               ωc                          1.00Bn
4   Staggered quadruple   2               ωc ± 0.46Bn                 0.38Bn
                          2               ωc ± 0.19Bn                 0.92Bn
5   Staggered quintuple   2               ωc ± 0.29Bn                 0.81Bn
                          2               ωc ± 0.48Bn                 0.26Bn
                          1               ωc                          1.00Bn
6   Staggered sextuple    2               ωc ± 0.48Bn                 0.26Bn
                          2               ωc ± 0.35Bn                 0.71Bn
                          2               ωc ± 0.13Bn                 0.97Bn
7   Staggered septuple    2               ωc ± 0.49Bn                 0.22Bn
                          2               ωc ± 0.39Bn                 0.62Bn
                          2               ωc ± 0.22Bn                 0.90Bn
                          1               ωc                          1.00Bn

For QL > 20
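Table 11.3.1 can be applied mechanically. The sketch below expands the staggered-triple (n = 3) entries into stage tuning data for an assumed 60-MHz center frequency and 2-MHz overall bandwidth; note that the resulting stage QL values satisfy the QL > 20 restriction.

```python
# Staggered triple (n = 3) from Table 11.3.1: two stages at fc +/- 0.43*Bn with
# bandwidth 0.50*Bn, and one stage at fc with bandwidth 1.00*Bn.  fc, Bn assumed.
fc, Bn = 60e6, 2e6
stages = [(fc - 0.43 * Bn, 0.50 * Bn),
          (fc + 0.43 * Bn, 0.50 * Bn),
          (fc,             1.00 * Bn)]
for f0, b in stages:
    print(f"stage at {f0/1e6:.2f} MHz, bandwidth {b/1e6:.2f} MHz, QL = {f0/b:.0f}")
```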

RADIO-FREQUENCY OSCILLATORS
G. Burton Harrold

General Considerations

Oscillators at rf frequencies are usually of the class A sine-wave-output type. RF oscillators (in common with audio oscillators) may be considered either as one-port networks that exhibit a negative real component at the input or as two-port networks consisting of an amplifier and a frequency-sensitive passive network that couples back to the input port of the amplifier. It can be shown that the latter type of feedback oscillator also has a negative resistance at one port. This negative resistance is of a dynamic nature and is best defined as the ratio between the fundamental components of voltage and current.

The sensitivity of the oscillator's frequency depends directly on the effective Q of the frequency-determining element and on the sensitivity of the amplifier to variations in temperature, supply voltage, and aging. The effective Q of the frequency-determining element is important because the percentage change in frequency required to produce the compensating phase shift in a feedback oscillator is inversely proportional to the circuit Q; thus, the larger the effective Q, the greater the frequency stability. The load on an oscillator is also critical to the frequency stability, since it affects the effective Q, and in many cases the oscillator is followed by a buffer stage for isolation. It is also desirable to provide some means of stabilizing the oscillator's operating point, either by a regulated supply, dc feedback for bias stabilization, or oscillator self-biasing schemes such as grid-leak bias. This stabilizes not only the frequency but also the output amplitude, by tending to compensate any drift in the active device's parameters. It is also necessary to minimize the harmonics in the output, since they give rise to cross-modulation products producing currents at the fundamental frequency that are not necessarily in phase with the dominant oscillating mode. The use of high-Q circuits and the control of the nonlinearity help in controlling the harmonic output.

Negative-Resistance Oscillators

The analysis of the negative-impedance oscillator is shown in Fig. 11.3.5. The frequency of oscillation during buildup is not completely determined by the LC circuit but has a component that depends on the circuit resistance. At steady state, the frequency of oscillation is a function of 1 + R/Riv or 1 + Ric/R, depending on the particular circuit, where the ratios R/Riv and Ric/R are usually chosen to be small. While R is a fixed function of the


loading, Ric or Riv must change with amplitude during oscillator buildup, so that the condition a = 0 can be reached. Thus Riv and Ric cannot be constant but are dynamic impedances, defined as the ratio of the fundamental voltage across the element to the fundamental current into the element.

FIGURE 11.3.5 General analysis of negative-resistance oscillators.

The type of dc load for biasing and the resonant circuit required for the proper operation of a negative-resistance oscillator depend on the type of active element. R must be less than |Riv|, or R must be greater than |Ric|, in order for oscillation to build up and be sustained. The detailed analysis of the steady-state oscillator amplitude and frequency can be undertaken by graphical techniques. The magnitude of Gi or Ri is expressed in terms of its voltage dependence. Care must be taken with this representation, since the shape of the Gi or Ri curve depends on the initial bias point.

The analysis of negative-resistance oscillators can now be performed by means of admittance diagrams. The assumption for oscillation to be sustaining is that the negative-resistance element, having admittance yi, must equal −yc, the external circuit admittance. This can be summarized by Gi = −Gc and Bi = −Bc. A typical set of admittance curves is shown in Fig. 11.3.6. In this construction, it is assumed that Bi = −Bc, even during the oscillator buildup. Also shown is the fact that Gi at zero amplitude must be larger than Gc so that the oscillator can be started, that is, a > 0, and that it may be possible to have two or more stable modes of oscillation.

FIGURE 11.3.6 Admittance diagram of voltage-stable negative-resistance oscillators: (a) self-starting case, a > 0; (b) circuit starts oscillating only if externally excited beyond point 1.

Feedback Oscillators

Several techniques exist for the analysis of feedback oscillators. In the generalized treatment, the active element is represented by its y parameters, whose element values are those at the frequency of interest, having magnitudes


defined by the ratio of the fundamental current to the fundamental voltage. The general block diagram and equations are shown in Fig. 11.3.7. Solution of the equations given yields information on the oscillator's performance. In particular, equating the real and imaginary parts of the characteristic equation gives information on the amplitude and frequency of oscillation. In many instances, simplifications to these equations can be made. For example, if y11 and y12 are made small (as in vacuum-tube amplifiers), then

y21 = −(1/z21)(y22z11 + 1) = −1/Z

This equation can be solved by equating the real and imaginary terms to zero to find the frequency and the criterion for oscillation of constant amplitude. This equation can also be used to draw an admittance diagram for oscillator analysis.

FIGURE 11.3.7 General analysis of feedback oscillators.

FIGURE 11.3.8 Admittance diagram of feedback oscillator.


FIGURE 11.3.9 S-parameter analysis of oscillators.

These admittance diagrams are similar to those discussed under negative-resistance oscillators. The technique is illustrated in Fig. 11.3.8. At higher frequencies, the S parameters can also be used to design oscillators (Fig. 11.3.9). The basis for the oscillator is that the magnitude of the input reflection coefficient must be greater than unity, so that the circuit is potentially unstable (in other words, it has a negative real part of input impedance). The input reflection coefficient with a ΓL output termination is

$S'_{11} = S_{11} + \frac{S_{12}S_{21}\Gamma_L}{1 - S_{22}\Gamma_L}$

Either by additional feedback or by adjustment of ΓL it is possible to make |S′11| > 1. Next, establishing a source termination Γs such that it reflects all the energy incident on it will cause the circuit to oscillate. This criterion is stated as ΓsS′11 = 1 at the frequency of oscillation. The technique can be applied graphically, using a Smith chart as before. Here the reciprocal of S′11 is plotted as a function of frequency, since |S′11| > 1. Now choose either a parallel- or a series-tuned circuit and plot its Γs. If a frequency f1 is common to 1/S′11 and Γs and satisfies the above criterion, the circuit will oscillate at that point.
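As a quick numerical illustration of this procedure, the sketch below computes S′11 from the expression above and the source reflection coefficient required for oscillation. All S-parameter and termination values are assumed for the example, not taken from the text.

# Minimal sketch: loaded input reflection coefficient and oscillation criterion.
import numpy as np

# Hypothetical transistor S parameters at the frequency of interest:
S11 = 1.1*np.exp(1j*2.618)
S12 = 0.2*np.exp(1j*0.5)
S21 = 2.5*np.exp(1j*1.2)
S22 = 0.8*np.exp(-1j*0.7)
Gamma_L = 0.9*np.exp(1j*0.7)   # output termination (assumed)

S11p = S11 + S12*S21*Gamma_L/(1 - S22*Gamma_L)   # loaded input reflection
print(f"|S'11| = {abs(S11p):.2f}  (must exceed 1 for potential instability)")

# Oscillation requires Gamma_s * S'11 = 1, i.e., Gamma_s = 1/S'11.
Gamma_s = 1/S11p
print(f"required Gamma_s = {abs(Gamma_s):.2f} at {np.degrees(np.angle(Gamma_s)):.1f} deg")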

BROADBAND AMPLIFIERS
John W. Lunden, Jennifer E. Doyle

Introduction

Broadband amplifiers amplify signals so as to preserve, over a wide band of frequencies, such characteristics as signal amplitude, gain response, phase shift, delay, distortion, and efficiency. How wide a band can be achieved depends on the active device used, the frequency range, and the power level in the current state of the art. As a general rule, above 100 MHz a 20 percent or greater bandwidth is considered broadband, whereas below 100 MHz an octave or more is typical.

As the state of the art advances, it is becoming more common to achieve octave-bandwidth or wider amplifiers well into the microwave region using bipolar and FET active devices. Hybrid-integrated-circuit techniques and new monolithic techniques eliminate many undesired package and bonding parasitics that can limit broadband amplifier performance. Additionally, distributed amplifiers and other approaches that use multiple devices have become more economical with increasing levels of integration. It has become uncommon to use tube devices for new amplifier designs: solid-state devices have replaced tubes in most amplifier applications because of superior long-term reliability and lower noise figures. In the following discussion both field-effect and bipolar transistor notations appear, for generality.


FIGURE 11.3.10 RC-coupled stages: (a) field-effect transistor form; (b) bipolar junction transistor form.

Low-, Mid-, and High-Frequency Performance

Consider the basic common-source and common-emitter broadband RC-coupled configurations shown in Fig. 11.3.10. Simplified low-frequency small-signal equivalent circuits are shown in Fig. 11.3.11. The voltage gain of the FET amplifier stage under the condition that all reactances are negligibly small is the midband value (at frequency f)

$(A_{\mathrm{mid}})_{\mathrm{FET}} = \frac{-g_m}{1/r_{ds} + 1/R_L + 1/R_g} \approx -g_m R_L$

FIGURE 11.3.11 Equivalent circuits of the stages shown in Fig. 11.3.10: (a) FET form; (b) bipolar form.


If the low-frequency effects are included, this becomes

$(A_{\mathrm{low}})_{\mathrm{FET}} = \frac{-g_m R_L (1 + 1/j\omega R_S C_S)}{[1 + 1/j\omega R_g C_g][1 + (1 + g_m R_S)/j\omega C_S R_S]} = \frac{-g_m R_L}{1 + 1/j\omega R_g C_g} \quad \text{for } R_S = 0$

The low-frequency cutoff is due principally to two basic time constants, R_gC_g and R_SC_S. For C_S values large enough that its time constant is much longer than that associated with C_g, the low-frequency cutoff or half-power point is

$(f_1)_{\mathrm{FET}} = \frac{1}{2\pi C_g[R_g + r_{ds}R_L/(r_{ds} + R_L)]}$
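For a feel for the numbers, the following sketch evaluates (A_mid)_FET and (f_1)_FET from the expressions above. All element values are assumed for illustration.

# Assumed RC-coupled FET stage: midband gain and low-frequency cutoff.
import math

gm   = 5e-3     # transconductance, S
r_ds = 50e3     # drain-source resistance, ohms
R_L  = 5e3      # load resistance, ohms
R_g  = 1e6      # gate resistance of the following stage, ohms
C_g  = 0.01e-6  # coupling capacitance, F

A_mid = -gm / (1/r_ds + 1/R_L + 1/R_g)
f1 = 1 / (2*math.pi*C_g*(R_g + r_ds*R_L/(r_ds + R_L)))
print(f"A_mid ~ {A_mid:.1f}, f1 ~ {f1:.1f} Hz")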

If the coupling capacitor is very large, the low-frequency cutoff is instead a result of C_S. The slope of the actual rolloff is a function of the relative effect of these two time constants. The design of coupling and bypass circuits for very-low-frequency response therefore requires very large values of capacitance.

Similarly, for a bipolar transistor stage, the midband current gain can be determined as

$(A_{\mathrm{mid}})_{\mathrm{BJT}} = \frac{-\alpha r_c R_L}{[R_L + r_c(1 - \alpha)]\left[R_{ie} + \dfrac{R_L r_c(1 - \alpha)}{R_L + r_c(1 - \alpha)}\right]} \approx \frac{-\alpha}{1 - \alpha}\,\frac{R_L}{R_L + R_{ie}}$

where

$R_{ie} = r_b + \frac{r_e}{1 - \alpha}$

When low-frequency effects are included, this becomes

$(A_{\mathrm{low}})_{\mathrm{BJT}} \approx \frac{-\alpha}{1 - \alpha}\,\frac{R_L}{R_L + R_{ie} - j/\omega C_g} \quad \text{for } R_L \ll r_c(1 - \alpha)$

and

$(f_1)_{\mathrm{BJT}} = \frac{1}{2\pi C_g}\,\frac{1}{R_{ie} + \dfrac{R_L r_c(1 - \alpha)}{R_L + r_c(1 - \alpha)}} \approx \frac{1}{2\pi C_g(R_{ie} + R_L)}$

If the ratio of low- to midfrequency voltage or current gain is taken, its reactive term goes to unity at f = f1, the cutoff frequency:

$\frac{A_{\mathrm{low}}}{A_{\mathrm{mid}}} = \frac{1}{1 - j(f_1/f)} \qquad \phi_{\mathrm{low}} = \tan^{-1}\frac{f_1}{f}$

These quantities are plotted in Fig. 11.3.12 for a single time-constant rolloff. Caution should be exercised in assuming that reactances between input and output terminals are negligible. Although this is generally the case, gain multiplicative effects can result in input or output reactance values greater than the values assumed above, e.g., by the Miller effect:

$C_{\mathrm{in}} = C_{gs} + C_{gd}(1 + g_m R'_L)$


FIGURE 11.3.13 Gain and phase-shift curves at high frequencies.

FIGURE 11.3.12 Gain and phase-shift curves at low frequencies.

Typically, the midfrequency gain equation can be used for frequencies above that at which $X_{C_g} = R_g/10$ and below that at which $X_{C_L} = 10R_gR_L/(R_g + R_L)$ (for the FET circuit). If the frequency is increased further, a point is reached where the shunt reactances are no longer high with respect to the circuit resistances. At this point the coupling and bypass capacitors can be neglected, and the high-frequency gain can be determined as

$(A_{\mathrm{high}})_{\mathrm{FET}} = \frac{-g_m}{1/r_{ds} + 1/R_L + j\omega C_L}$

where C_L is the effective total interstage shunt capacitance. Correspondingly,

$(A_{\mathrm{high}})_{\mathrm{BJT}} \approx \frac{-\alpha}{1 - \alpha}\,\frac{1}{1 + R_{ie}\left(\dfrac{1}{R_L} + \dfrac{j\omega C_c}{1 - \alpha}\right)} \quad \text{for } R_L \ll r_c(1 - \alpha)$

The ratio of high- to midfrequency gains can be taken and the upper cutoff frequencies determined:

$\left(\frac{A_{\mathrm{high}}}{A_{\mathrm{mid}}}\right)_{\mathrm{FET}} = \frac{1}{1 + j\omega C_L/[(1/r_{ds}) + (1/R_L) + (1/R_g)]}$

$(f_2)_{\mathrm{FET}} = \frac{1}{2\pi C_L}\left(\frac{1}{r_{ds}} + \frac{1}{R_L} + \frac{1}{R_g}\right)$

$\left(\frac{A_{\mathrm{high}}}{A_{\mathrm{mid}}}\right)_{\mathrm{BJT}} = \frac{1}{1 + \dfrac{j\omega C_c r_c R_L R_{ie}}{R_{ie}[R_L + r_c(1 - \alpha)] + R_L r_c(1 - \alpha)}}$

$(f_2)_{\mathrm{BJT}} \approx \frac{1 - \alpha}{2\pi C_c}\left(\frac{1}{R_L} + \frac{1}{R_{ie}}\right)$

and $\phi_{\mathrm{high}} = -\tan^{-1}(f/f_2)$.
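A companion sketch for the upper cutoff evaluates (f_2)_FET from the expression above, continuing the same assumed stage; the shunt-capacitance value is likewise assumed.

# Assumed FET stage: upper cutoff from the effective interstage shunt capacitance.
import math

r_ds, R_L, R_g = 50e3, 5e3, 1e6   # same assumed element values as before
C_L = 20e-12                      # effective interstage shunt capacitance, F

f2 = (1/r_ds + 1/R_L + 1/R_g) / (2*math.pi*C_L)
print(f"f2 ~ {f2/1e6:.2f} MHz")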

Dimensionless curves for these gain ratios and phase responses are plotted in Fig. 11.3.13.

Compensation Techniques

To extend the cutoff frequencies f1 and f2 to lower or higher values, respectively, compensation techniques can be used.


Figure 11.3.14 illustrates two techniques for low-frequency compensation. If the condition $R_gC_g = C_XR_XR_L/(R_X + R_L)$ is fulfilled (in circuit a or b), the gain relative to the midband gain is

$\frac{A_{\mathrm{low}}}{A_{\mathrm{mid}}} = \frac{1}{1 - j(1/\omega R_gC_g)[R_L/(R_L + R_X)]} \qquad \text{and} \qquad f_1 = \frac{1}{2\pi R_gC_g}\,\frac{R_L}{R_L + R_X}$

Hence improved low-frequency response is obtained with increased values of R_X. This value is related to R_L and restricted by active-device operating considerations, and R_L in turn depends on the desired high-frequency response. It can be shown that equality of the time constants, $R_LC_X = R_gC_g$, will produce zero phase shift in the coupling circuit (for $R_X > 1/\omega C_X$). The circuit shown in Fig. 11.3.14c is more critical; it is used with element ratios set to $R_L/R_X = R_g/R_c$ and $C_X/C_g = R_c/R_X$.

Various compensation circuits are also available for high-frequency-response extension. Two of the most common, the series- and shunt-compensation cases, are shown in Fig. 11.3.15. The high-frequency-gain expressions of these configurations can be written

$\frac{A_{\mathrm{high}}}{A_{\mathrm{mid}}} = \frac{1 + a_1(f/f_2)^2 + a_2(f/f_2)^4 + \cdots}{1 + b_1(f/f_2)^2 + b_2(f/f_2)^4 + b_3(f/f_2)^6 + \cdots}$

The coefficients of the higher-order terms decrease rapidly, so that if a1 = b1, a2 = b2, etc., to as high an order of the f/f2 ratio as possible, a maximally flat response curve is obtained. For the phase response, dφ/dω can also be expressed as a ratio of two polynomials in f/f2, and a similar procedure can be followed, yielding a flat time-delay curve. Unfortunately, the sets of conditions for flat gain and linear phase are different, and compromise values must be used.

FIGURE 11.3.14 Low-frequency compensation networks: (a) bipolar transistor version; (b), (c) FET versions.

Shunt Compensation. The high-frequency gain and time delay for the shunt-compensated stage are

$\frac{A_{\mathrm{high}}}{A_{\mathrm{mid}}} = \frac{1 + a^2(f/f_2)^2}{1 + (1 - 2a)(f/f_2)^2 + a^2(f/f_2)^4}$

$\phi = -\tan^{-1}\left\{\frac{f}{f_2}\left[1 - a + a^2\left(\frac{f}{f_2}\right)^2\right]\right\}$

where $a = L/C_gR_L^2$ and $R_g \gg R_L$. A case where R_g cannot be assumed to be high, such as the input of a following bipolar transistor stage, is considerably more complex, depending on the transistor equivalent circuit used. This is particularly true when operating near the transistor fT and/or above the VHF band.

Series Compensation. In the series-compensated circuit, the ratio of C_s to C_g is an additional parameter. If this can be optimized, the circuit performance is better than in the shunt-compensated case. Typically, however, control of this parameter is not available, owing to physical and active-device constraints.


FIGURE 11.3.15 High-frequency compensation schemes: (a) shunt; (b) series; (c) shunt-series.

These two basic techniques can be combined to improve the response at the expense of complexity. The shunt-series-compensation case and the so-called "modified" case are examples. The latter involves a capacitance added in shunt with the inductance L, or placing L between C_s and R_L. For the modified-shunt case, the added capacitance C_c permits an additional degree of freedom, with associated parameter k_1 = C_c/C_s. Other circuit variations exist for specific broadband compensation requirements. Phase compensation, for example, may be necessary as a result of cascading a number of minimum-phase circuits designed for flat frequency response. Circuits such as the lattice and bridged-T can be used to alter the system response by reducing the overshoot without severely increasing the overall rise time.

Cascaded Broadband Stages

When an amplifier is made up of n cascaded RC stages, not necessarily identical, the overall gain A_n can be written

$\frac{A_n}{A_{\mathrm{mid}}} = \left[\frac{1}{1 + (f/f_a)^2}\right]^{1/2}\left[\frac{1}{1 + (f/f_b)^2}\right]^{1/2}\cdots\left[\frac{1}{1 + (f/f_n)^2}\right]^{1/2}$


where f_a, f_b, …, f_n are the f_1 or f_2 values for the respective stages, depending on whether the overall low- or high-frequency gain ratio is being determined. The phase angle is the sum of the individual phase angles. If the stages are identical, f_a = f_b = ⋯ = f_x, and

$\frac{A_n}{A_{\mathrm{mid}}} = \left[\frac{1}{1 + (f/f_x)^2}\right]^{n/2}$
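The bandwidth-shrinkage consequence of this expression can be computed directly: setting |A_n/A_mid| = 1/√2 gives an overall cutoff of $f_x(2^{1/n} - 1)^{1/2}$. A short sketch with an assumed single-stage cutoff:

# Bandwidth shrinkage of n identical cascaded stages.
import math

f_x = 10e6  # single-stage cutoff, Hz (assumed)
for n in (1, 2, 4, 8):
    shrink = math.sqrt(2**(1/n) - 1)
    print(f"n={n}: overall 3-dB cutoff = {f_x*shrink/1e6:.2f} MHz")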

Stagger Peaking. In stagger tuning, a number of individual bandpass amplifier stages are cascaded with their center frequencies skewed according to some predetermined criterion. The most straightforward choice adjusts the center frequencies so that the f_2 of one stage coincides with the f_1 of the succeeding stage, and so forth. The overall gain bandwidth then becomes

$(\mathrm{GBW})_{\mathrm{overall}} = \sum_{n=1}^{N}(\mathrm{GBW})_n$

A significant simplifying criterion of this technique is stage isolation. Isolation, in transistor stages particularly, is not generally high except at low frequencies; hence the overall design equations and subsequent overall alignment can be significantly complicated by the interactions. Complex device models and computer-aided design greatly facilitate the implementation of this type of compensation. The simple shunt-compensated stage has found extensive use in stagger-tuned pulse-amplifier applications.

Transient Response

Time-domain analysis is particularly useful for broadband applications. Extensive theoretical studies have been made of the separate effects of nonlinearities of amplitude and phase response. These effects can be investigated starting with a normalized low-pass response function

$A(j\omega)/A(0) = \exp(a_m\omega^m - jb_n\omega^n)$

where a and m are constants describing the amplitude-frequency response and b and n are constants describing the phase-frequency response. Figure 11.3.16 illustrates the time response to an impulse and a unit-step forcing function for various values of m, with n = 0. Rapid change of amplitude with frequency (large m) results in

FIGURE 11.3.16 Transient responses to unit impulse (left) and unit step (right) for various values of m.


overshoot. Nonzero but linear phase-frequency characteristics (n = 1) result in a delay of these responses without introducing distortion. Further increase in n results in increased ringing and asymmetry of the time function.

An empirical relationship between rise time (10 to 90 percent) and bandwidth (3 dB) can be expressed as $t_r \cdot \mathrm{BW} = K$, where K varies from about 0.35 for circuits with little or no overshoot to 0.45 for circuits with about 5 percent overshoot. K is 0.51 for the ideal rectangular low-pass response with 9 percent overshoot; for the Gaussian amplitude response with no overshoot, K = 0.41.

The effect on rise time of cascading a number of networks n depends on the individual network pole-zero configurations. Some general rules follow (see the numerical sketch after this list).

1. For individual circuits having little or no overshoot, the overall rise time is $t_{rt} = (t_{r1}^2 + t_{r2}^2 + t_{r3}^2 + \cdots)^{1/2}$.
2. If $t_{r1} = t_{r2} = \cdots = t_{rn}$, then $t_{rt} = 1.1\sqrt{n}\,t_{r1}$.
3. For individual stage overshoots of 5 to 10 percent, the total overshoot increases as $\sqrt{n}$.
4. For circuits with low overshoot (∼1 percent), the total overshoot is essentially that of one stage.
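A numerical sketch of these rules follows; the individual stage rise times are assumed values chosen only for illustration.

# Cascaded rise-time rules, with assumed stage rise times.
import math

stage_tr = [10e-9, 15e-9, 22e-9]            # individual rise times, s (assumed)
tr_total = math.sqrt(sum(t**2 for t in stage_tr))   # rule 1: RSS combination
print(f"cascaded rise time ~ {tr_total*1e9:.1f} ns")

n, tr1 = 4, 10e-9                           # rule 2: identical-stage case
print(f"{n} identical stages: ~ {1.1*math.sqrt(n)*tr1*1e9:.1f} ns")
print(f"bandwidth estimate: ~ {0.35/tr_total/1e6:.1f} MHz (K = 0.35)")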

FIGURE 11.3.17 Response to a unit step of n capacitively coupled stages of the same time constant.

The effect of insufficient low-frequency response of an amplifier is sag of the time response.

The cutoff frequency f_f is adjusted to select the output spectral lobe about zero, f_c < f_f < 1/T − f_c, so that it falls in the guard band between lobes. That portion of the filter output R(f) selected is

$R_0(f) = (1/T)F(f)\exp(-j2\pi af)$   (3)

which inverse-transforms as

$r_0(t) = (1/T)f(t - a)$   (4)

This is identical with the signal function, with the amplitude reduced by a scale factor and the function shifted by a seconds. If a = 0, signifying no delay, the filter is termed a "cardinal data hold"; otherwise it is an "ideal low-pass filter." Unfortunately, these filters cannot be realized in practice, since they are required to respond before they are excited. Examination of Fig. 12.3.1b gives rise to the sampling theorem credited to Shannon and/or Nyquist, which states that when a continuous time function with band-limited spectrum −f_c < f < f_c is sampled at twice the highest frequency, f_s = 2f_c, the original time function can be recovered. This corresponds to the point where the sampling frequency f_s = 1/T is decreased so that the spectral lobes of Fig. 12.3.1b are just touching. To decrease f_s below 2f_c would cause spectral overlap and make recovery with an ideal filter impossible. A more general form of the sampling theorem states that any 2f_c independent samples per second will


completely describe a band-limited signal, thus removing the restriction of uniform sampling, as long as independent samples are used. In general, for a time-limited signal of T seconds, band-limited to fc Hz, only 2fcT samples are needed to specify the signal completely. In practice, the signal is not completely band-limited, so that it is common to allow for a greater separation of spectral lobes, called the guard band. This guard band is generated simply by sampling at greater than 2fc, as in the case for Fig. 12.3.1b. Although the actual tolerable overlap depends on the signal spectral slope, setting the sampling rate at about 3fc = fs is usually adequate to recover the signal. In practice, narrow but finite-width pulse trains are used in place of the idealized impulse sampling train.
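The theorem is easy to verify numerically. The sketch below is an illustration with an assumed two-tone signal (not an example from the text): it samples at 3f_c, leaving a guard band, and rebuilds the waveform by sinc interpolation, which plays the role of the ideal low-pass reconstruction of Eq. (4) with a = 0.

# Sampling-theorem check: sample above the Nyquist rate, rebuild by sinc interpolation.
import numpy as np

fc = 100.0                     # highest signal frequency, Hz
fs = 3*fc                      # sampling rate ~3*fc (guard band)
T  = 1/fs
n  = np.arange(-200, 201)      # sample indices
x  = lambda t: np.sin(2*np.pi*60*t) + 0.5*np.cos(2*np.pi*95*t)

t = np.linspace(-0.05, 0.05, 1001)
# reconstruction: sum_n x(nT) * sinc((t - nT)/T); np.sinc(u) = sin(pi*u)/(pi*u)
rec = np.sum(x(n*T)[:, None]*np.sinc((t[None, :] - n[:, None]*T)/T), axis=0)
print("max reconstruction error:", np.max(np.abs(rec - x(t))))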

PULSE-AMPLITUDE MODULATION

Pulse-amplitude modulation is essentially a sampled-data type of encoding in which the information is encoded into the amplitude of a train of finite-width pulses. The pulse train can be looked on as the carrier, in much the same way as the sine wave is for continuous-amplitude modulation. There is no improvement in signal-to-noise ratio when using PAM; furthermore, PAM is not wideband in the sense of FM or PTM. Thus PAM corresponds to continuous AM, while PTM corresponds to FM. PAM is used chiefly for time-multiplex systems employing a number of channels, each sampled consistent with the sampling theorem.

There are a number of ways of encoding information as the amplitude of a pulse train. They include both bipolar and unipolar pulse trains, for either square-topped or top (exact) sampling. In top sampling, the magnitude of the individual pulses follows the modulating signal during the pulse duration, while for square-topped sampling the individual pulses assume a constant value determined by the particular exact sampling point that occurs somewhere during the pulse time. These various waveforms are shown in Fig. 12.3.2.

The top-modulation bipolar sampling case, shown in Fig. 12.3.2c, is simply sampling with a finite-pulse-width train. Carrying out the convolution yields

$R_{STB}(f) = \frac{\tau}{T}\sum_{n=-\infty}^{\infty}\mathrm{sinc}\left(\frac{\tau n}{T}\right)F\left(f - \frac{n}{T}\right)$   (5)

The spectrum for top-modulation bipolar sampling, using a rectangular spectrum for the original signal, is shown in Fig. 12.3.3a. The signal spectrum repeats with a (sin x)/x scale factor determined by the sampling pulse width, with each repetition a replica of F(f).

Unipolar sampling can be implemented by adding a constant bias A to the signal f(t) to produce f(t) + A, where A is large enough to keep the sum positive, that is, A > |f(t)|. Sampling the new sum signal by multiplication with the pulse train results in the unipolar top-modulated waveform of Fig. 12.3.2e. The spectrum is

$R_{STU}(f) = \frac{\tau}{T}\sum_{n=-\infty}^{\infty}\mathrm{sinc}\left(\frac{\tau n}{T}\right)\left[F\left(f - \frac{n}{T}\right) + A\delta\left(f - \frac{n}{T}\right)\right]$   (6)

The delta-function part of the summation reduces to the spectrum function S(f) of the pulse train:

$R_{STU}(f) = AS(f) + \frac{\tau}{T}\sum_{n=-\infty}^{\infty}\mathrm{sinc}\left(\frac{\tau n}{T}\right)F\left(f - \frac{n}{T}\right)$   (7)

The resulting spectrum of top-modulation unipolar sampling is thus the same as with bipolar sampling plus the impulse spectrum of the sampling pulse train, as shown in Fig. 12.3.3b.

For square-topped-modulation bipolar sampling, the time-domain result is

$r_{SSB}(t) = \mathrm{rect}(t/\tau) * \mathrm{comb}_T\,f(t)$   (8)


FIGURE 12.3.2 PAM waveforms: (a) modulation; (b) square-top sampling, bipolar pulse train; (c) top sampling, bipolar pulse train; (d) square-top sampling, unipolar pulse train; (e) top sampling, unipolar pulse train.

with spectrum function

$R_{SSB}(f) = \frac{\tau}{T}(\mathrm{sinc}\,f\tau)\sum_{n=-\infty}^{\infty}F\left(f - \frac{n}{T}\right)$   (9)

In this case, the signal spectrum is distorted by the sinc fτ envelope, as shown in Fig. 12.3.3c. This frequency distortion is referred to as aperture effect and may be corrected by an equalizer of inverse sinc fτ form following the low-pass reconstruction filter. As in the previous case of unipolar sampling, the resulting spectrum for square-topped modulation also contains the pulse-train spectrum, as shown in Fig. 12.3.3d. The expression is

$R_{SSU}(f) = AS(f) + \frac{\tau}{T}(\mathrm{sinc}\,f\tau)\sum_{n=-\infty}^{\infty}F\left(f - \frac{n}{T}\right)$   (10)
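The size of the aperture effect is easy to estimate. The sketch below (assumed pulse width, period, and in-band frequencies) tabulates the sinc fτ droop of Eq. (9) and the corresponding equalizer gain.

# Aperture-effect droop and inverse-sinc equalizer gain for square-top PAM.
import numpy as np

tau, T = 10e-6, 100e-6         # pulse width and sampling period (assumed)
f = np.linspace(0, 4e3, 5)     # in-band frequencies, Hz
droop = np.sinc(f*tau)         # np.sinc(x) = sin(pi*x)/(pi*x)
for fi, d in zip(f, droop):
    print(f"f = {fi:6.0f} Hz: droop = {20*np.log10(d):7.4f} dB, equalizer gain = {1/d:.5f}")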


FIGURE 12.3.3 PAM spectra: (a) top modulation, bipolar sampling; (b) top modulation, unipolar sampling; (c) square-top modulation, bipolar sampling; (d) square-top modulation, unipolar sampling.


The signal information is generally recovered, in PAM systems, by use of a low-pass filter that acts on the reduced signal energy around zero frequency, as shown in Fig. 12.3.3.

PULSE-TIME, PULSE-POSITION, AND PULSE-WIDTH MODULATION

In PTM the information is encoded into a time parameter instead of, for instance, the amplitude, as in PAM. There are two basic types of PTM: pulse-position modulation (PPM) and pulse-width modulation (PWM), also known as pulse-duration (PDM) or pulse-length (PLM) modulation. PTM allows the power-driver circuitry to operate at saturation level, minimizing power dissipation. Operating driver circuitry full on or full off is especially important for heavy-duty high-load control applications as well as for communication applications.

In PPM the information is encoded into the time position of a narrow pulse, generally with respect to a reference pulse. The basic pulse width and amplitude are kept constant, while only the pulse position is changed, as shown in Fig. 12.3.4. There are three cases of PWM: modulation of the leading edge, the trailing edge, or both edges, as displayed in Fig. 12.3.5. Here the information is encoded into the width of the pulse, with the pulse amplitude and period held constant.

The derivative relationship between PPM and PWM can be illustrated by considering trailing-edge PWM. The pulses of PPM can be derived from the edges of trailing-edge PWM (Fig. 12.3.5b) by differentiation of the PWM signal and a sign change of the trailing-edge pulse. Pulse-position modulation is essentially the same as PWM with the information-carrying variable edge replaced by a pulse. Thus, when that part of the signal power of PWM that carries no information is deleted, the result is PPM.

Generally, in PTM systems a guard interval is necessary because of pulse rise times and system responses. Thus 100 percent of the interpulse period cannot be used without considerable channel crosstalk caused by pulse overlap. Crosstalk must be traded off against channel utilization at the system design level.

FIGURE 12.3.4 PPM time waveform.

FIGURE 12.3.5 PWM time waveforms: (a) leadingedge modulation; (b) trailing-edge modulation; (c) both-edge modulation.


Another consideration is that the information sampling rate cannot exceed the pulse repetition frequency, and it would be less for a single channel of a multiplexed system where channels are interwoven in time.

Generation of PTM

There are two basic methods of pulse-time modulation: (1) uniform sampling, in which the pulse-time parameter is directly proportional to the modulating signal at uniformly spaced sampling points, and (2) nonuniform sampling, in which there is some distortion of the pulse-time parameter because of the modulation process. Both methods are illustrated in Fig. 12.3.6 for PWM. Basically, PPM can be derived from trailing-edge PWM, as shown in Fig. 12.3.6c, by use of an edge detector or differentiator and a standard narrow-pulse generator.

In the uniform sampling case for PWM of Fig. 12.3.6a, the modulating signal is sampled uniformly in time and a special PAM is derived by a sample-and-hold circuit, as shown in Fig. 12.3.7a. This PAM signal provides a pedestal for each of the three types of sawtooth waveforms producing leading-, trailing-, or double-edge PWM, as shown in Fig. 12.3.7c, e, and g, respectively. The uniformly sampled PPM is shown in Fig. 12.3.7h, as derived from the trailing-edge modulation of g.

Nonuniformly sampled modulation, termed natural sampling by some authors, is shown in Fig. 12.3.8 and results from the method of Fig. 12.3.8b, where the sawtooth is added directly to the modulating signal. In this case the modulating waveform influences the times at which the samples are actually taken. This distortion, caused by the modulating signal distorting the sawtooth waveform when they are added, as indicated in Fig. 12.3.6b, is small when the modulating-amplitude change is small during the interpulse period T. The information in the PPM waveform is similarly distorted because it is derived from the PWM waveform, as shown in Fig. 12.3.8h.
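A minimal discrete-time sketch of natural (nonuniform) sampling for trailing-edge PWM follows; the sinusoidal modulating signal, pulse period, and amplitudes are all assumed for illustration.

# Natural-sampling trailing-edge PWM: output is high while the modulating
# signal exceeds a sawtooth of period T, so the trailing edge moves with it.
import numpy as np

T, f_mod = 1e-3, 200.0                       # pulse period; modulating frequency
t = np.linspace(0, 10e-3, 10000, endpoint=False)
m = 0.4*np.sin(2*np.pi*f_mod*t)              # modulating signal, |m| < 0.5
saw = (t % T)/T - 0.5                        # sawtooth from -0.5 to +0.5 each period
pwm = (m > saw).astype(float)                # 1 while the signal exceeds the ramp

# Measured pulse width in each of the 10 periods (1000 samples per period):
widths = [pwm[i:i+1000].sum()/1000*T for i in range(0, 10000, 1000)]
print("pulse widths per period (s):", np.round(widths, 6))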

FIGURE 12.3.6 PTM generation: (a) pulse-width-modulation generation, uniform sampling; (b) pulse-width-modulation generation, nonuniform sampling; (c) pulseposition-modulation generation.


FIGURE 12.3.7 Pulse-time modulation, uniform sampling: (a) modulating signal and sample-and-hold waveform; (b) sawtooth added to sample-and-hold waveform; (c) leading-edge modulation; (d) sawtooth added to sample-and-hold waveform; (e) double-edge modulation; ( f ) sawtooth added to sample-and-hold waveform; (g) trailing-edge modulation; (h) pulse-position modulation (reference pulse dotted).

PULSE-TIME MODULATION SPECTRA

For most modulating signals the spectra are smeared and difficult to derive. However, some insight into what happens to the spectra with modulation can be gained by considering a sinusoidal modulation of the form

$A\cos 2\pi f_s t$   (11)


The amplitude A < T/2, where T is the interpulse period, assuming no guard band. For PPM with uniform sampling and unity pulse amplitude, the spectrum is given by

$x(t) = \frac{\tau}{T} + \frac{2\tau}{T}\sum_{m=1}^{\infty}(\mathrm{sinc}\,mf_0\tau)\,J_0(2\pi Amf_0)\cos 2\pi mf_0t$
$\qquad + \frac{2\tau}{T}\sum_{n=1}^{\infty}(\mathrm{sinc}\,nf_s\tau)\,J_n(2\pi Anf_s)\cos\left(2\pi nf_st - \frac{n\pi}{2}\right)$
$\qquad + \frac{2\tau}{T}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\Bigl\{\mathrm{sinc}\,[(mf_0 + nf_s)\tau]\,J_n[2\pi A(mf_0 + nf_s)]\cos\left[2\pi(mf_0 + nf_s)t - \frac{n\pi}{2}\right]$
$\qquad\qquad + \mathrm{sinc}\,[(nf_s - mf_0)\tau]\,J_n[2\pi A(nf_s - mf_0)]\cos\left[2\pi(nf_s - mf_0)t - \frac{n\pi}{2}\right]\Bigr\}$   (12)

where τ = pulse width, T = pulse period, f_s = modulation frequency, J_n = Bessel function of the first kind, nth order, and f_0 = 1/T.

As is apparent, all the harmonics of the pulse-repetition frequency and the modulation frequency are present, as well as all possible sums and differences. The dc level is τ/T, with the harmonics carrying the modulation. The pulse shape affects the line amplitudes as a sinc function, reducing the spectra at higher frequencies.

The spectrum for PWM is similar to that of PPM; for uniformly sampled trailing-edge sinusoidal modulation it is given by

$x(t) = \frac{1}{2} + \frac{1}{\pi T}\sum_{m=1}^{\infty}\frac{1}{mf_0}\cos\left[2\pi mf_0t + (2m - 1)\frac{\pi}{2}\right]$
$\qquad + \frac{1}{\pi T}\sum_{m=1}^{\infty}\frac{1}{mf_0}J_0(2\pi Amf_0)\cos\left(2\pi mf_0t - \frac{\pi}{2}\right)$
$\qquad + \frac{1}{\pi T}\sum_{n=1}^{\infty}\frac{1}{nf_s}J_n(2\pi Anf_s)\cos\left[2\pi nf_st - (n + 1)\frac{\pi}{2}\right]$
$\qquad + \frac{1}{\pi T}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\Bigl\{\frac{1}{mf_0 + nf_s}J_n[2\pi A(mf_0 + nf_s)]\cos\left[2\pi(mf_0 + nf_s)t - (n + 1)\frac{\pi}{2}\right]$
$\qquad\qquad + \frac{1}{nf_s - mf_0}J_n[2\pi A(nf_s - mf_0)]\cos\left[2\pi(nf_s - mf_0)t - (n + 1)\frac{\pi}{2}\right]\Bigr\}$   (13)

The same comments apply for PWM as for PPM. A more compact form is given for PPM and PWM, respectively, as

$x(t) = \frac{1}{T}\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty}(-j)^nJ_n[2\pi A(mf_0 + nf_s)]\,P(mf_0 + nf_s)\exp[j2\pi(mf_0 + nf_s)t]$   (14)


FIGURE 12.3.8 Pulse-time modulation, nonuniform sampling; (a) modulating signal; (b) sawtooth added to modulation; (c) leading-edge modulation; (d) sawtooth added to modulation; (e) double-edge modulation; ( f ) sawtooth added to modulation; (g) trailing-edge modulation; (h) pulse-position modulation.

where P(f) is the Fourier transform of the pulse shape p(t), and

$x(t) = \frac{1}{2} + \frac{1}{T}\sum_{\substack{m=-\infty\\ m\neq 0}}^{\infty}\frac{e^{j2\pi mf_0t}}{j2\pi mf_0} - \frac{1}{T}\sum_{m=-\infty}^{\infty}\sum_{\substack{n=-\infty\\ |m|+|n|\neq 0}}^{\infty}(-j)^{n+1}\frac{J_n[2\pi A(mf_0 + nf_s)]}{2\pi(mf_0 + nf_s)}\exp[j2\pi(mf_0 + nf_s)t]$   (15)
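Individual spectral-line amplitudes implied by the compact PPM form of Eq. (14) can be evaluated numerically. In the sketch below the pulse is taken as rectangular of width τ, so P(f) = τ sinc(fτ); all parameter values are assumed for illustration.

# PPM line spectrum per Eq. (14): line at m*f0 + n*fs has amplitude
# |J_n(2*pi*A*(m*f0 + n*fs)) * P(m*f0 + n*fs)| / T.
import numpy as np
from scipy.special import jv

T, tau = 1e-3, 50e-6
f0, fs, A = 1/T, 150.0, 80e-6   # pulse rate, modulating frequency, deviation (A < T/2)
for m, n in [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1)]:
    f = m*f0 + n*fs
    P = tau*np.sinc(f*tau)      # rectangular-pulse transform
    amp = abs(jv(n, 2*np.pi*A*f)*P)/T
    print(f"line at m={m}, n={n:2d} ({f:8.1f} Hz): amplitude ~ {amp:.3e}")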


FIGURE 12.3.9 Pulse-time demodulation: (a) PWM demodulation; (b) PPM to PWM for demodulation.

DEMODULATION OF PTM

Demodulation of PWM or PPM can be accomplished by low-pass filtering if the modulation is small compared with the interpulse period. In general, however, it is best to demodulate on a pulse-to-pulse basis, which usually requires some form of synchronization with the pulses. The distortion introduced by nonuniform sampling cannot be eliminated and will be present in the demodulated waveform, but if the modulation is small compared with the interpulse period T, the distortion is minimized.

To demodulate PWM, each pulse can be integrated and the maximum value sampled, held, and low-pass filtered, as shown in Fig. 12.3.9a. To sample and reset the integrator, it is necessary to derive sync from the PWM waveform, in this case trailing-edge-modulated.

Generally, PPM is demodulated by conversion to PWM, followed by PWM demodulation. Although in some demodulation schemes the actual PWM waveform may not exist as such, the general demodulation scheme is the same. PPM can be converted to PWM by the configuration of Fig. 12.3.9b. The PPM signal is applied to an amplitude threshold, usually termed a slicer, that rejects noise except near the pulses. The pulses are applied to a flip-flop synchronized to one particular state by the reference pulse, and the flip-flop generates the PWM as its output.
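In discrete time, the integrate-sample-reset idea of Fig. 12.3.9a reduces to an integrate-and-dump over each interpulse period. A minimal sketch with an assumed PWM waveform:

# Pulse-by-pulse PWM demodulation by integrate-and-dump (assumed test data).
import numpy as np

T_samples = 1000                          # samples per interpulse period
widths = [300, 520, 740, 610, 420]        # encoded pulse widths, in samples
pwm = np.concatenate([np.r_[np.ones(w), np.zeros(T_samples - w)] for w in widths])

recovered = [pwm[k*T_samples:(k+1)*T_samples].sum()   # integrate, sample, reset
             for k in range(len(widths))]
print("recovered widths:", recovered)     # equals the encoded widths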

PULSE FREQUENCY MODULATION

In PFM the information is contained in the frequency of the pulse train, which is composed of narrow pulses. The highest frequency ideally occurs when there is no more interpulse spacing left for finite-width pulses. This frequency, given by 1/τ, where τ is the pulse width, will not be achieved in practice, owing to the pulse rise time. The lowest frequency is determined by the modulator, usually a voltage-controlled oscillator


FIGURE 12.3.10 PFM modulation.

(VCO), with which a 100:1 ratio of high to low frequency is easily achievable in practice. Examination of Fig. 12.3.10 indicates why PFM is used mostly for control purposes rather than communications: the wide variation and uncertainty of pulse position do not lend themselves to time multiplexing, which requires the interweaving of channels in time. Since one of the chief motivations of pulse modulation in communication systems is the ability to time-multiplex a number of channels, PFM is not used there. On the other hand, PFM is a good choice for on-off control applications, especially where fine control is required. A classic example of PFM control is the attitude control of near-earth satellites with on-off gas thrusters, where a very close approximation to a linear system response is achievable.

Generation of PFM

Basically, PFM is generated by modulation of a VCO, as shown in Fig. 12.3.11a. A constant reference voltage is added to the modulation so that the frequency can swing above and below the reference-determined value. For control applications it is usually required that the frequency follow the magnitude of the modulation, with its sign determining which actuators are to be turned on, as shown in Fig. 12.3.11b.

FIGURE 12.3.11 Generation of PFM: (a) PFM modulation; (b) PFM for control.


PULSE-CODE MODULATION

In PCM the signal is encoded into a stream of digits. PCM differs from the other forms of pulse modulation in that the sample values of the signal are quantized into a number of levels and subsequently coded as a series of pulses for transmission. By selecting enough levels, the quantized signal can be made to approximate the original continuous signal closely, at the expense of transmitting more bits per sample. The PCM scheme lends itself readily to time multiplexing of channels and accommodates widely different types of signals; however, synchronization is strictly required. This synchronization can be on a single-sample or code-group basis. The synchronizing signal is most likely inserted with a group of samples from different channels, on a frame or subframe basis, to conserve space.

The motivation behind modern PCM is that improved solid-state circuitry allows extremely fast quantization of samples and translation to complex codes with reasonable equipment constraints. PCM is an attractive way to trade bandwidth for signal-to-noise ratio and has the additional advantage of transmission through regenerative repeaters with a signal-to-noise ratio that is substantially independent of the number of repeaters. The only requirement is that the noise, interference, and other disturbances be less than one-half a quantum step at each repeater. Systems can also be designed that have error-detecting and error-correcting features.

PCM CODING AND DECODING

Coding is the generation of a PCM waveform from an input signal; decoding is the reverse process. There are many ways to code and many code groups to use, so standardization is necessary when more than one user is considered. Each sample value of the signal waveform is quantized and represented to sufficient accuracy by an appropriate code character, composed of a specified number of code elements. The code elements can be chosen as two-level (binary), three-level (ternary), or, in general, n-ary. General practice, however, is to use binary, which is least affected by interference, at the cost of the increased bandwidth required. An example of binary coding is shown in Fig. 12.3.12 for 3-bit (eight-level) quantization. Each code group is composed of three pulses, with the pulse trains shown for on-off pulses in Fig. 12.3.12b and bipolar pulses in Fig. 12.3.12c.

A generic diagram of a complete system is shown in Fig. 12.3.13. The recovered signal is a delayed copy of the input signal, degraded by noise from sources such as sampling, quantization, and interference. For this type of system to be efficient, both sending and receiving terminals must be synchronized.
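A minimal sketch of the 3-bit coding step, with assumed sample values and natural binary code groups in the spirit of Fig. 12.3.12:

# 3-bit (eight-level) PCM coding of assumed samples normalized to [0, 1).
import numpy as np

samples = np.array([0.12, 0.47, 0.81, 0.33, 0.99, 0.05])
levels = np.clip((samples*8).astype(int), 0, 7)        # quantize to levels 0..7
codes = [format(int(q), '03b') for q in levels]        # 3-bit code groups
print(list(zip(levels.tolist(), codes)))

decoded = (levels + 0.5)/8                             # decoder: mid-step value
print("quantization error:", np.round(samples - decoded, 3))  # |error| <= 1/16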

FIGURE 12.3.12 Binary pulse coding: (a) quantized samples; (b) on-off coded pulses; (c) bipolar coded pulses.

FIGURE 12.3.13 Basic operations of a PCM system.


The synchronism is required to be monitored continuously, with the capability of establishing initial synchronism when the system is out of frame. Synchronization is usually accomplished by use of special sync pulses that establish frame, subframe, or word sync.

There are three basic ways to code: feedback and subtraction, pulse counting, and parallel comparison. In feedback subtraction the sample value is compared with the most significant code-element value, and that value is subtracted from the sample value if the element value is less; this process of comparison and subtraction is repeated for each code-element value down to the least significant bit, and at each subtraction the appropriate code element or bit is selected to complete the coding. In pulse counting, a gate is established using the PWM pulse corresponding to a sample value; clock pulses are gated through and accumulated in a counter, and the output of a decoding network attached to the counter is read out as the PCM. Parallel comparison is the fastest method, since the sampled value is applied simultaneously to a number of different thresholds, which are read out as the PCM.
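The feedback-and-subtraction coder is the successive-approximation loop sketched below; the full-scale range and word length are assumed values.

# Feedback-and-subtraction (successive-approximation) PCM coder sketch.
def feedback_subtraction_code(sample, n_bits=4, full_scale=1.0):
    bits, residual = [], sample
    for k in range(1, n_bits + 1):
        weight = full_scale/2**k          # MSB weight = FS/2, then FS/4, ...
        if residual >= weight:            # element fits: select 1 and subtract
            bits.append(1)
            residual -= weight
        else:                             # element does not fit: select 0
            bits.append(0)
    return bits

print(feedback_subtraction_code(0.69))    # -> [1, 0, 1, 1] (0.5 + 0.125 + 0.0625)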

SYSTEM CONSIDERATIONS FOR PCM

Quantization introduces an irremovable error into the system, referred to as quantization noise. This noise is characterized by the fact that its magnitude is always less than one-half a quantum step, and it can be treated as uniformly distributed additive noise with zero mean value and rms value equal to 1/√12 times the height of a quantum step. When the ratio of signal power to quantization-noise power at the quantizer output is used as a measure of fidelity, the improvement with the number of quantizer levels is as shown in Fig. 12.3.14 for different kinds of signals.

In general, using an n-ary code with m pulses allows transmission of n^m values. For the binary code this means 2^m values, which approximate the signal to 1 part in 2^m − 1 levels. Encoding into plus and minus pulses, with either pulse equally likely, results in an average power of A²/4, which is half the on-off power of A²/2, where the total pulse amplitude, peak to peak, is A. The channel capacity for a system sampled at the Nyquist rate of 2f_m and quantized into s levels is

$C = 2f_m\log_2 s \quad \text{(bits/s)}$   (16)

or, for m pulses of n values each,

$C = mf_m\log_2 n^2 \quad \text{(bits/s)}$   (17)

FIGURE 12.3.14 PCM signal-to-noise improvement with number of quantization levels.

Since the encoding process squeezes one sample into m pulses, the pulse widths are effectively reduced by 1/m; thus the transmission bandwidth is increased by a factor of m, or B = mf_m. The maximum possible ideal rate of transmission of binary bits is

$C = B\log_2(1 + S/N) \quad \text{(bits/s)}$   (18)

according to Shannon. For a system sampled at the Nyquist rate, quantized with steps of Kσ, and using plus and minus pulses, the channel capacity is

$C = B\log_2(1 + 12S/K^2N) \quad \text{(bits/s)} \qquad N = \sigma^2$   (19)

where S is the average power over a large time interval and σ is the rms noise voltage at the decoder input.
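Plugging assumed numbers into Eqs. (16) to (18) gives a quick consistency check: a 4-kHz signal with 3-bit binary quantization (s = 8 levels, n = 2, m = 3).

# Quick checks of the PCM capacity relations with assumed values.
import math

fm, s, n, m = 4000.0, 8, 2, 3
C16 = 2*fm*math.log2(s)          # Eq. (16): 24 kbit/s
C17 = m*fm*math.log2(n**2)       # Eq. (17): the same figure, per-pulse form
B = m*fm                         # transmission bandwidth grows by the factor m
SNR = 100.0                      # assumed S/N at the decoder input
C18 = B*math.log2(1 + SNR)       # Eq. (18): Shannon bound for that bandwidth
print(C16, C17, round(C18))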


DELTA MODULATION

Delta modulation (DM) is basically a one-digit PCM system in which the analog waveform has been encoded in differential form. In contrast to the n digits of PCM, simple DM uses only one digit to indicate the changes in the sample values; this is equivalent to sending an approximation to the signal derivative. At the receiver the pulses are integrated to obtain the original signal. Although DM can be implemented very simply in circuitry, it requires a sampling rate much higher than the Nyquist rate of 2f_m and a wider bandwidth than a comparable PCM system. Most of the other characteristics of PCM apply to DM.

Delta modulation differs from differential PCM, in which the difference between successive signal samples is transmitted: in DM only 1 bit is used to express and transmit the difference, so DM transmits the sign of successive slopes.

Coding and Decoding DM

There are a number of coding and decoding variations in DM, such as single-integration, double-integration, mixed-integration, delta-sigma, and high-information DM (HIDM). In addition, companding, which compresses the signal at transmission and expands it at reception, is used to extend the limited dynamic range.

The simple single-integration DM coding-decoding scheme is shown in Fig. 12.3.15. In the encoder, the modulator produces positive pulses when the sign of the difference signal ε(t) is positive and negative pulses otherwise; the output pulse train is integrated and compared with the input signal to provide the error signal ε(t), thus closing the encoder feedback loop. At the receiver, the pulse train is integrated and filtered to produce a delayed approximation to the signal, as shown in Fig. 12.3.16. The actual circuit implementation with operational amplifiers and logic circuits is very simple.

By changing to a double-integration network in the decoder, a smoother replica of the signal is provided. This decoder has the disadvantage, however, of not recognizing changes in the slope of the signal. This gave rise to a scheme for encoding differences in slope instead of amplitude, leading to coders with double integration; however, systems of this type are marginally stable and can oscillate under certain conditions. Waveforms of a double-integrating delta coder are shown in Fig. 12.3.17. Single and double integration can be combined to give improved performance while avoiding the stability problem; these mixed systems are often referred to in the literature as delta modulators with double integration. A comparison of waveforms is shown in Fig. 12.3.18.
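A minimal single-integration DM encoder/decoder sketch follows, in the spirit of Fig. 12.3.15; the sampling rate, step size, and test tone are assumed (here the lost symbol for the difference signal is written ε(t)). The tone amplitude is kept below the slope-overload limit of Eq. (20) below.

# Single-integration delta modulation: transmit only the sign of the error,
# integrate the +/- pulses at both encoder and decoder.
import numpy as np

fs, f_sig, step = 64000.0, 800.0, 0.05
t = np.arange(0, 0.005, 1/fs)
x = 0.5*np.sin(2*np.pi*f_sig*t)           # amplitude chosen below overload

approx, bits, est = [], [], 0.0
for sample in x:
    bit = 1 if sample >= est else -1      # sign of the difference signal
    est += bit*step                       # encoder's local integrator
    bits.append(bit)
    approx.append(est)

# The decoder integrates the same +/- pulses (then low-pass filters).
decoded = np.cumsum(np.array(bits)*step)
print("rms coding error:", np.sqrt(np.mean((np.array(approx) - x)**2)))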

FIGURE 12.3.15 Basic coding-decoding diagram for DM.


FIGURE 12.3.16 Delta-modulation waveforms using single integration.

System Considerations for DM

The synthesized waveform can change only one level each clock pulse; thus DM overloads when the slope of the signal is large. The maximum signal power depends on the type of signal, since the greatest slope that can be reproduced is one level per pulse period. For a sine wave of frequency f, the maximum-amplitude signal is

$A_{\max} = \frac{f_s s}{2\pi f}$   (20)

where f_s is the sampling frequency and s is one quantum step. It has been observed that a DM system will transmit a speech signal without overloading if the amplitude of the signal does not exceed the maximum permissible amplitude of an 800-Hz sine wave.
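Equation (20) is easy to check directly; the sampling rate and quantum step below are the same assumed values used in the earlier DM sketch.

# Slope-overload limit per Eq. (20): largest sine amplitude the coder can track.
import math

fs, step = 64000.0, 0.05        # sampling rate and quantum step (assumed)
for f in (400.0, 800.0, 1600.0):
    A_max = fs*step/(2*math.pi*f)
    print(f"f = {f:6.0f} Hz: A_max ~ {A_max:.3f}")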

FIGURE 12.3.17 Waveforms for delta coder with double integration.


FIGURE 12.3.18 Waveforms for various integrating systems.

The DM coder overload characteristic is shown in Fig. 12.3.19 along with the spectrum of a human voice. Notice that the two decrease together with frequency, indicating that DM can be used effectively for speech transmission. Generally speaking, transmission of speech is the chief application of DM, although various modifications and improvements are being studied to extend DM to higher frequencies and to transmission of the lost dc component. Among these techniques is delta-sigma modulation, where the signal is integrated and compared with an integrated approximation to form an error signal similar to ε(t) of Fig. 12.3.15; the decoding is accomplished with a low-pass filter and requires no integration.

The signal-to-quantization-noise ratio for single-integration DM is given by

$S/N = 0.2f_s^{3/2}/(f f_0^{1/2})$   (21)


FIGURE 12.3.19 Spectrum of the human voice compared with delta-coder overload level.


FIGURE 12.3.20 Signal-to-noise ratio for DM and PCM.

where f_s = sampling frequency, f = signal frequency, and f_0 = signal bandwidth. For double or mixed DM,

$S/N = 0.026f_s^{5/2}/(f f_0^{3/2})$   (22)

A comparison of signal-to-noise ratio (SNR) for DM and PCM is shown in Fig. 12.3.20, along with an experimental DM system for voice application. Note that DM at a 40-kbit/s sampling rate is equal in performance to a 5-bit PCM system.

Extended-Range DM

A system termed high-information DM (HIDM), developed by M. R. Winkler in 1963, falls in the category of companded systems and encodes more information in the binary sequence than normal DM. Basically, the method doubles the size of the quantization step when two identical consecutive binary values appear and takes one-half of the step after each transition of the binary train. The HIDM system is capable of reproducing the signal with smaller quantization and overload errors, and the technique also increases the dynamic range. The response of HIDM compared with that of DM is shown in Fig. 12.3.21 (step response for high-information delta modulation).

Implementation of HIDM is similar to that of DM, as shown in Figs. 12.3.22 and 12.3.23, with the difference only in the demodulator. The flip-flop of Fig. 12.3.23 changes state on the polarity of the input pulses. While the impulse generator initializes the exponential generators each pulse time, the flip-flop selects either the positive or the negative one. The integrator adds and smooths the exponential waveforms to form the output signal. The scheme has a dynamic range with slope limiting of 11.1 levels per pulse period, which is much greater than DM and is equivalent to a 7-bit linear-quantized PCM system.
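The HIDM step-size rule (double after two identical bits, halve after a transition) can be sketched in a few lines; the unit-step input and initial step size below are assumed.

# HIDM step adaptation applied to a unit-step input.
import numpy as np

x = np.concatenate([np.zeros(5), np.ones(40)])   # unit step input (assumed)
est, step, prev_bit = 0.0, 0.01, 0
out = []
for sample in x:
    bit = 1 if sample >= est else -1
    step = step*2 if bit == prev_bit else step/2  # HIDM adaptation rule
    est += bit*step
    prev_bit = bit
    out.append(est)
print(np.round(out[:15], 3))   # rapid, roughly exponential rise toward the step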



FIGURE 12.3.22 Block diagram of HIDM system.

FIGURE 12.3.23 Block diagram of HIDM demodulator.

DIGITAL MODULATION

Digital modulation is concerned with the transmission of a binary pulse train over some medium. The output of, say, a PCM coder is used to modulate a carrier for transmission. In PCM systems, for instance, the high-quality reproduction of the analog signal is a function only of the probability of correct reception of the pulse sequences. Thus the measure of digital modulation is the probability of error. The three basic types of digital modulation, amplitude-shift keying (ASK), frequency-shift keying (FSK), and phase-shift keying (PSK), are treated below.

AMPLITUDE-SHIFT KEYING

In ASK the carrier amplitude is turned on or off, generating the waveform of Fig. 12.3.24 for rectangular pulses. Pulse shaping, such as raised cosine, is sometimes used to conserve bandwidth. The elements of a binary ASK receiver are shown in Fig. 12.3.25. The detection can be either coherent or noncoherent; however, if the added complexity of coherent methods is to be accepted, higher performance can be achieved by using one of the other methods of digital modulation.

FIGURE 12.3.24 ASK modulation.

The error rate of ASK with noncoherent detection is given in Fig. 12.3.26. Note that the curves approach constant values of error for high signal-to-noise ratios. The probability of error for the coherent-detection scheme of Fig. 12.3.25c is shown in Fig. 12.3.27. The coherent-detection operation is equivalent to bandpass filtering of the received signal plus noise, followed by synchronous detection. At the optimum threshold shown in Fig. 12.3.27, the probability of error of marks and spaces is the same. The curves also tend toward a constant false-alarm rate, as in the noncoherent case.

FREQUENCY-SHIFT KEYING

In FSK the carrier frequency is shifted between one of two frequencies. Generally, two filters are used, in favor of a conventional FM detector, to discriminate between the marks and spaces, as illustrated in Fig. 12.3.28. As with ASK, either noncoherent or coherent detection can be used, although in practice coherent detection is seldom used: it is just as easy to use PSK with coherent detection and achieve superior performance.


FIGURE 12.3.25 Elements of a binary digital receiver: (a) elements of a simple receiver; (b) noncoherent (envelope) detector; (c) coherent (synchronous) detector.

FIGURE 12.3.26 Error rate for on-off keying, noncoherent detection.

FIGURE 12.3.27 Error-rate for on-off keying, coherent detection.



In the noncoherent FSK system shown in Fig. 12.3.29a, the larger of the outputs of the two envelope detectors determines the mark-space decision. Using this system results in the curve for noncoherent FSK in Fig. 12.3.30. Comparison of the noncoherent FSK error with that of noncoherent ASK leads to the conclusion that both achieve an equivalent error rate at the same average SNR at low error rates. FSK requires twice the bandwidth of ASK because of the use of two tones, but to achieve this performance ASK requires the detection threshold to be optimized at each SNR, whereas the FSK threshold is independent of SNR; FSK is thus preferred in practical systems where fading is encountered.

FIGURE 12.3.28 FSK waveform, rectangular pulses.

By synchronous detection of FSK (Fig. 12.3.29b) is meant the availability at the receiver of an exact replica of each possible transmission. The coherent-detection process has the effect of rejecting a portion of the bandpass noise. Coherent FSK involves the same difficulties as phase-shift keying but achieves poorer performance, and it is significantly advantageous over noncoherent FSK only at high error rates. The probability of error is shown in Fig. 12.3.30.

PHASE-SHIFT KEYING

Phase-shift keying is optimum in the minimum-error-rate sense from a decision-theory point of view. The PSK of a constant-amplitude carrier is shown in Fig. 12.3.31, where the two states are represented by a phase difference of π rad. Thus PSK has the form of a sequence of plus and minus rectangular pulses of a continuous sinusoidal carrier. It can be generated by double-sideband suppressed-carrier modulation with a bipolar rectangular waveform or by direct phase modulation. It is also possible to phase-modulate more complex signals than a sinusoid.

There is no performance difference in binary PSK between the coherent detector and the normal phase detector, both of which are shown in Fig. 12.3.32. Comparison of the error rates shows a 3-dB design advantage for ideal coherent PSK over ideal coherent FSK, with about the same equipment requirements. In practice, PSK can suffer if appreciable phase error Δφ is present in the system, since the signal is reduced by cos Δφ. This phase error can be introduced by relative drifts in the master oscillators at transmitter or receiver

FIGURE 12.3.29 Dual-filter detection of binary FSK signals: (a) noncoherent detection tone f1 signaled; (b) coherent detection tone f1 signaled.


FIGURE 12.3.30 Error rates for several binary systems.


FIGURE 12.3.31 PSK signal, rectangular pulses.

or be a result of phase drift or fluctuation in the propagation path. In most cases this phase error can be compensated, at the expense of requiring long-term smoothing. An alternative to PSK is differential phase-shift keying (DPSK), which requires enough stability in the oscillators and transmission path that the phase change from one information pulse to the next is negligible. Information is encoded differentially, in terms of the phase change between two successive pulses. For instance, if the phase remains the same from one pulse to the next (0° phase shift), a mark is indicated, whereas a phase shift of π rad from the previous pulse indicates a space. A coherent detector is still required, with the current pulse as one input and the previous pulse as the other. The probability of error is shown in Fig. 12.3.30. At all error rates, DPSK requires 3 dB less SNR than noncoherent FSK. Also, at high SNR, DPSK performs almost as well as ideal coherent PSK at the same keying rate and power level.
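The curves of Fig. 12.3.30 follow the standard closed-form error-rate expressions for binary signaling in white Gaussian noise. The sketch below evaluates them; the formulas are textbook results, and the 10-dB operating point and 10° phase error are illustrative values, not data from this handbook.

    import math

    def q_func(x: float) -> float:
        """Gaussian tail probability Q(x) via the complementary error function."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def binary_error_rates(ebno_db: float) -> dict:
        """Bit-error probabilities for the binary schemes compared in Fig. 12.3.30."""
        r = 10.0 ** (ebno_db / 10.0)                     # Eb/N0 as a power ratio
        return {
            "ideal coherent PSK": q_func(math.sqrt(2.0 * r)),
            "coherent FSK":       q_func(math.sqrt(r)),      # 3 dB poorer than PSK
            "DPSK":               0.5 * math.exp(-r),
            "noncoherent FSK":    0.5 * math.exp(-r / 2.0),  # 3 dB poorer than DPSK
        }

    for name, pe in binary_error_rates(10.0).items():        # Eb/N0 = 10 dB
        print(f"{name:18s} Pe = {pe:.2e}")

    # A static phase error reduces the coherent PSK signal by cos(dphi):
    dphi = math.radians(10.0)
    print("PSK loss:", round(-20.0 * math.log10(math.cos(dphi)), 2), "dB")

The printed rates reproduce the ordering of the figure: ideal coherent PSK best, then DPSK, coherent FSK, and noncoherent FSK.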

FIGURE 12.3.32 Two detection schemes for ideal coherent PSK: (a) phase detection; (b) coherent detection.


CHAPTER 12.4

SPREAD-SPECTRUM MODULATION

Myron D. Egtvedt

SPREAD-SIGNAL MODULATION

In a receiver designed exactly for a specified set of possible transmitted waveforms (in the presence of white noise and in the absence of such propagation defects as multipath and dispersion), the performance of a matched filter or cross-correlation detector depends only on the ratio of signal energy to noise power density, E/n0, where E is the received energy in one information symbol and n0/2 is the rf noise density at the receiver input. Since signal bandwidth has no effect on performance in white noise, it is interesting to examine the effect of spreading the signal bandwidth in situations involving jamming, message and traffic-density security, and transmission security. Other applications include random-multiple-access communication channels, multipath propagation analysis, and ranging.

The information-symbol waveform can be characterized by its time-bandwidth (TW) product. Consider a binary system with the information symbol defined as a bit (of time duration T), while the fundamental component of the binary waveform is called a chip. For this direct-sequence system, the ratio (chips per bit) is equal to the TW product. An additional requirement on the symbol waveforms is that their cross-correlation with each other and with noise or extraneous signals be minimal. Spread-spectrum systems occupy a signal bandwidth much larger (>10) than the information bandwidth, while conventional systems have a TW of well under 10. FM with a high modulation index might slightly exceed 10, but it is not optimally detectable and has a processing gain only above a predetection signal-to-noise threshold.
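For a direct-sequence system the processing gain is simply the TW product (chips per bit). A minimal sketch; the 1-Mchip/s and 1-kbit/s rates are chosen for illustration (they also match the synchronization example later in this chapter):

    import math

    def processing_gain_db(chip_rate_hz: float, bit_rate_hz: float) -> float:
        """Processing gain of a direct-sequence system: TW = chips per bit."""
        tw = chip_rate_hz / bit_rate_hz     # T = one bit duration, W ~ chip rate
        return 10.0 * math.log10(tw)

    # 1 Mchip/s spreading a 1-kbit/s data stream gives TW = 1000, i.e., 30 dB.
    print(processing_gain_db(1e6, 1e3))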

NOMENCLATURE OF SECURE SYSTEMS

While terminology is not subject to rigorous definition, the following terms apply to the material that follows:

Security and privacy. Relate to the protection of the signal from an unauthorized receiver. They are differentiated by the sophistication required. Privacy protects against a casual listener with little or no analytical equipment, while security implies an interceptor familiar with the principles and using an analytical approach to learn the key. Protection requirements must be defined in terms of the interceptor's applied capability and the time value of the message. Various forms of protection include:

Crypto security. Protects the information content, generally without increasing the TW product.

Antijamming (AJ) security. Spreads the signal spectrum to provide discrimination against energy-limited interference by using cross-correlation or matched-filter detectors. The interference may be natural (impulse noise), inadvertent (as in amateur radio or aircraft channels), or deliberate (where the jammer may transmit continuous or burst cw, swept cw, narrow-band noise, wide-band noise, or replica or deception waveforms).

Traffic-density security. Involves the capability to switch data rates without altering the apparent characteristics of the spread-spectrum waveform. The TW product (processing gain) is varied inversely with the data rate.


Transmission security. Involves spreading the bandwidth so that, beyond some range from the transmitter, the transmitted signal is buried in the natural background noise. The process gain (TW) controls the reduction in detectable range vis-à-vis a “clear” signal.

Use in Radar

It is usual to view radar applications as a variation on communication; that is, the return waveforms are known except with respect to noise, Doppler shift, and delay. Spectrum spreading is applicable to both cw and pulse radars. The major differentiation is in the choice of cross-correlation or matched-filter detector. The TW product is the key performance parameter, but the covariance-function properties must frequently be determined to resolve Doppler shifts as well as range delays.

CLASSIFICATION OF SPREAD-SPECTRUM SIGNALS

Spread-spectrum signals can be classified on the basis of their spectral occupancy versus time characteristics, as sketched in Fig. 12.4.1. Direct-sequence (DS) and pseudo-noise (PN) waveforms provide continuous full

FIGURE 12.4.1 Spectral occupancy vs. time characteristics of spread-spectrum signals.


coverage, while frequency-hopping (FH), time-dodging, and frequency-time dodging (F-TD) waveforms fill the frequency-time plane only in a long-term averaging sense. DS waveforms are pseudo-random digital streams generated by digital techniques and transmitted without significant spectral filtering. If heavy filtering is used, the signal amplitude statistics become quite noiselike, and the result is called a PN waveform. In either case correlation detection is generally used, because the waveform is dimensionally too large to implement a practical matched filter, and the sequence generator is relatively simple and capable of changing codes. In FH schemes the spectrum is divided into subchannels spaced orthogonally at 1/T separations. One or more (e.g., two for FSK) are selected by pseudo-random techniques for each data bit. In time-dodging schemes the signal burst time is controlled by pulse-repetition methods, while F-TD combines both selections. In each case a jammer must either jam the total spectrum continuously or accept a much lower effectiveness (approaching 1/TW). Frequency-hopped signals can be generated using SAW chirp devices.

CORRELATION-DETECTION SYSTEMS

The basic components of a typical DS type of link are shown in Fig. 12.4.2. The data are used to select the appropriate waveform, which is shifted to the desired rf spectrum by suppressed-carrier frequency-conversion

FIGURE 12.4.2 Direct-sequence link for spread-spectrum system.


techniques, and transmitted. At the receiver, identical locally generated waveforms multiply the incoming signal. The stored reference signals are often modulated onto a local oscillator, and the incoming rf may be converted to an intermediate frequency, usually with rf or i.f. limiters. The mixing detectors are followed by linear integrate-and-dump filters, with a “greatest of” decision at the end of each period. The integrator is either a low-pass or a bandpass quenchable narrowband filter. Digital techniques are increasingly being used.

Synchronization is a major design and operational problem. Given a priori knowledge of the transmitted sequences, the receiver must bring its stored reference timing to within ±1/(2W), half a chip width, of the received signal and hold it at that value. In a system having a 19-stage pn generator, a 1-MHz pn clock, and a 1-kHz data rate, the width of the correlation function is ±1/2 µs, repeating at roughly 1/2-s separations corresponding to the 524,287 clock periods of the sequence. In the worst case, it would be necessary to inspect each sequence position for 1 ms; that is, 524 s would be required to acquire sync. If oscillator tolerances and/or Doppler lead to frequency uncertainties equal to or greater than the 1-kHz data rate, parallel receivers or multiple searches are required. Ways to reduce the sync acquisition time include using jointly available timing references to start the pn generators, using shorter sequences for acquisition only, using “clear” sync triggers, and paralleling detectors. Titsworth (see bibliography) discusses composite sequences that allow acquiring each component sequentially, searching N1 + N2 + N3 delays, while the composite sequence has length N1N2N3. These methods have advantages for space-vehicle ranging applications but offer reduced security against jamming.

Sync tracking is usually performed by measuring the correlation at early and late times, ±τ, where τ ≤ 1/W, as shown in Fig. 12.4.3. Subtracting the two provides a useful time-discrimination function, which controls the pn clock. The displaced values can be obtained by two tracking-loop correlators or by time-sharing a single unit. “Dithering” the reference signal to the signal correlator can also be used, but with performance compromises. The tracking function can also be obtained by using the time derivative of one of the inputs:

dφXY(τ)/dτ = ⟨[dX(t)/dt] · Y(t + τ)⟩    (1)

FIGURE 12.4.3 Sync tracking by early-late correlators.

A third approach is to add the clock to the transmitted pn waveform by mod-2 methods. The spectral envelope is altered, but very accurate peak tracking can be accomplished by phase locking to the recovered clock.
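The serial-search arithmetic in the synchronization example above is easy to reproduce. A minimal sketch using the numbers quoted there (19-stage generator, 1-kHz data rate, one data-integration period per position):

    def serial_search_time(stages: int, data_rate_hz: float) -> tuple:
        """Worst-case serial sync-acquisition time for a maximal-length pn sequence."""
        positions = 2 ** stages - 1        # sequence length in chips (524,287 here)
        dwell_s = 1.0 / data_rate_hz       # 1-ms inspection per sequence position
        return positions, positions * dwell_s

    chips, worst_case_s = serial_search_time(19, 1e3)
    print(chips, "positions,", round(worst_case_s), "s worst case")
    # 524287 positions, 524 s worst case, as stated in the text.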

LIMITERS IN SPREAD-SPECTRUM RECEIVERS

Limiters are frequently used in spread-spectrum receivers to avoid overload saturation effects, such as circuit recovery time and incidental phase modulation. In the usual low input signal-to-noise range, the limiter tends to normalize the output noise level, which simplifies the decision-circuit design. In repeater applications (e.g., satellite), a limiter is desirable to allow the transmitter to be fully modulated regardless of the input-signal


strength. When automatic gain control (AGC) is used, the receiver is highly vulnerable to pulse jamming, while the limiter causes a slight reduction of the instantaneous signal-to-jamming ratio and a proportional reduction of transmitter power allocated to the desired signal.

DELTIC-AIDED SEARCH

The sync search can be accelerated by use of deltic (delay-line time compression) circuits if logic speeds permit. The basic deltic consists of a recirculating shift register (or a delay line) which stores M samples, as shown in Fig. 12.4.4. The incoming spread-spectrum signal must be sampled at a rate above W (W = bandwidth). During each intersample period the shift register is clocked through M + 1 shifts before accepting the next sample. If M ≥ 2TW, a signal period at least equal to the data integration period is stored and is read out at M different delays during each period T, permitting many high-speed correlations against a similarly accelerated (but not time-advancing) reference. For a serial deltic with a shift-register delay line, the clock rate is at least 4TW^2. Using a deltic with K parallel interleaved delay lines, the internal delay lines are clocked at 4TW^2/K^2, and the demultiplexed output has a bit rate of 4TW^2/K, providing only M/K discrete delays. This technique is device-limited to moderate signal bandwidths, primarily in the acoustic range up to about 10 kHz.

FIGURE 12.4.4 Delay-line time compression (deltic) configuration.

WAVEFORMS

The desired properties of a spread-spectrum signal include:

An autocorrelation function which is unity at t = 0 and zero elsewhere

A zero cross-correlation coefficient with noise and other signals

A large library of orthogonal waveforms

Maximal-Length Linear Sequences

A widely used class of waveforms is the maximal-length sequence (MLS), generated by a tapped feedback shift register, as shown in Fig. 12.4.5a and as a one-tap unit in Fig. 12.4.5b. The mod-2 half adder (⊕) and the EXCLUSIVE-OR logic gate are identical for 1-bit binary signals. If analog levels +1 and −1 are substituted, respectively, for the 0 and 1 logic levels, the circuit is observed to function as a 1-bit multiplier.

FIGURE 12.4.5 Maximal-length-sequence (MLS) system.
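The generator of Fig. 12.4.5 is straightforward to model in software. A minimal sketch of a 5-stage generator; the tap set (stages 5 and 3) corresponds to a primitive polynomial and is chosen for illustration, not taken from the figure:

    def mls(stages: int, taps: tuple, seed: int = 1) -> list:
        """One period of a maximal-length sequence from a tapped feedback
        shift register; taps are 1-indexed stage numbers."""
        mask = (1 << stages) - 1
        state = seed                                 # any nonzero start state
        out = []
        for _ in range((1 << stages) - 1):           # period of an MLS is 2^n - 1
            out.append((state >> (stages - 1)) & 1)  # output of the last stage
            fb = 0
            for t in taps:                           # mod-2 sum of tapped stages
                fb ^= (state >> (t - 1)) & 1
            state = ((state << 1) | fb) & mask       # shift and refeed
        return out

    seq = mls(5, (5, 3))
    print(len(seq), sum(seq))   # 31 chips with 16 ones: the expected MLS balance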


FIGURE 12.4.6 Spectrum of MLS system.

Pertinent properties of the MLS are as follows. Its length, for an n-stage shift register, is 2^n − 1 bits. During 2^n − 1 successive clock pulses, all n-bit binary numbers (except all zeros) will have been present. The autocorrelation function is unity at t = 0 and at each displacement of 2^n − 1 clock pulses, and −1/(2^n − 1) at all other displacements. This assumes that the sequence repeats cyclically, i.e., the last bit is closed onto the first. The autocorrelation function of a single (noncyclic) MLS shows significant time side lobes. Titsworth (see bibliography) has analyzed the self-noise of incomplete integration over p chips, obtaining for MLSs

s^2(t) = (p − t)(p^2 − 1)/(p^3 t)    (2)

which approaches 1/t for the usual case of p >> t. Since t ≈ TW, the self-noise component is usually negligible. Another self-noise component is frequently present owing to amplitude and dispersion differences caused by filtering, propagation effects, and circuit nonlinearities. In addition to intentional clipping, the correlation multiplier is frequently a balanced modulator, which is linear only to the smaller signal unless deliberately operated in a bilinear range. The power spectrum is shown in Fig. 12.4.6. The envelope has a (sin^2 X)/X^2 shape (X = πω/ωclock), while the individual lines are separated by ωclock/(2^n − 1). An upper bound on the number of MLSs for an n-stage shift register is given in terms of the Euler φ function

Nu = φ(2^n − 1)/n ≤ 2^(n − log2 n)    (3)

where φ(k) is the number of positive integers less than k, including 1, which are relatively prime to k.
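Equation (3) is easy to evaluate directly. A minimal sketch that counts the maximal-length sequences available for a few register lengths:

    import math

    def euler_phi(k: int) -> int:
        """Euler totient: count of positive integers up to k relatively prime to k."""
        return sum(1 for i in range(1, k + 1) if math.gcd(i, k) == 1)

    for n in (3, 4, 5, 7):
        length = 2 ** n - 1
        nu = euler_phi(length) // n       # Eq. (3): Nu = phi(2^n - 1)/n
        print(f"n={n}: sequence length {length}, {nu} maximal-length sequences")

    # For n = 5, 31 is prime, so phi(31)/5 = 30/5 = 6 distinct MLSs.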



CHAPTER 12.5

OPTICAL MODULATION AND DEMODULATION

Joseph L. Chovan

MODULATION OF BEAMS OF RADIATION

This discussion of optical modulators is restricted to devices that operate on a directed beam of optical energy to control its intensity, phase, or frequency according to some time-varying modulating signal. Devices that deflect a light beam or spatially modulate a light beam, such as light-valve projectors, are treated in Chap. 21.

Phase or frequency modulation requires a coherent light source, such as a laser. Optical heterodyning is then used to shift the received signal to lower frequencies, where conventional FM demodulation techniques can be applied. Intensity modulation can be used on incoherent as well as coherent light sources. However, the properties of some types of intensity modulators are wavelength-dependent. Such modulators are restricted to monochromatic operation but are not limited to the extremely narrow laser line widths required for frequency modulation.

Optical modulation depends either on perturbing the optical properties of some material with a modulating signal or on mechanical motion interacting with the light beam. Modulation bandwidths of mechanical modulators are limited by the inertia of the moving masses. Optical-index modulators generally have a greater modulation bandwidth but typically require critical and expensive optical materials. Optical-index modulation can be achieved with electric or magnetic fields or by mechanical stress. Typical modulator configurations are presented below, as is heterodyning, which is often useful in demodulation. Optical modulation can also be achieved using semiconductor junctions.

OPTICAL-INDEX MODULATION: ELECTRIC FIELD MODULATION

Pockels and Kerr Effects

In some materials, an electric field vector E can produce a displacement vector D whose direction and magnitude depend on the orientation of the material. Such a material can be completely characterized in terms of three independent dielectric constants associated with three mutually perpendicular natural directions of the material. If all three dielectric constants are equal, the material is isotropic. If two are equal and one is not, the material is uniaxial. If all three are unequal, the material is biaxial.

The optical properties of such a material can be described in terms of the ellipsoid of wave normals (Fig. 12.5.1). This is an ellipsoid whose semiaxes are the square roots of the associated dielectric constants. The behavior of any plane monochromatic wave through the medium can be determined from the ellipse formed by the intersection of the ellipsoid with a plane through the center of the ellipsoid and perpendicular to the direction of wave travel. The instantaneous electric field vector E associated with the optical wave has components


along the two axes of this ellipse. Each of these components travels with a phase velocity that is inversely proportional to the length of the associated ellipse axis. Thus there is a differential phase shift between the two orthogonal components of the electric field vector after it has traveled some distance through such a birefringent medium. The two orthogonal components of the vector vary sinusoidally with time but have a phase difference between them, which results in a vector whose magnitude and direction vary to trace out an ellipse once during each optical cycle. Thus linear polarization is converted into elliptical polarization in a birefringent medium.

In some materials it is possible to induce a perturbation in one or more of the ellipsoid axes by applying an external electric field. This is the electrooptical effect, the effect most commonly used in optical modulators presently available. More detailed configurations using these effects are discussed later. Kaminow and Turner (1966) present design considerations for various configurations and tabulate material properties.

FIGURE 12.5.1 Ellipsoid of wave normals.

Stark Effect

Materials absorb and emit optical energy at frequencies which depend on molecular or atomic resonances characteristic of the material. In some materials an externally applied electric field can perturb these natural resonances. This is known as the Stark effect. Kaminow and Turner (1966) discuss a modulator for the CO2 laser in the 3- to 22-µm region. The laser output is passed through an absorption cell whose natural absorption frequency is varied by the modulating signal, using the Stark effect. Since the laser frequency remains fixed, the amount of absorption depends on how closely the absorption cell is tuned to the laser frequency; intensity modulation results.

MAGNETIC FIELD MODULATION

Faraday Effect

Two equal-length vectors circularly rotating at equal rates in opposite directions in space combine to give a nonrotating resultant whose direction in space depends on the relative phase between the counterrotating components. Thus any linearly polarized light wave can be considered to consist of equal right and left circularly polarized waves. In a material which exhibits the Faraday effect, an externally applied magnetic field causes a difference in the phase velocities of right and left circularly polarized waves traveling along the direction of the applied magnetic field. This results in a rotation of the electric field vector of the optical wave as it travels through the material. The amount of the rotation is controlled by the strength of the modulating current producing the magnetic field.

Zeeman Effect

In some materials the natural resonance frequencies at which the material emits or absorbs optical energy can be perturbed by an externally applied magnetic field. This is known as the Zeeman effect. Intensity modulation can be achieved using an absorption cell modulated by a magnetizing current, in much the same manner as the Stark-effect absorption cell is used. The Zeeman effect has also been used to tune the frequency at which the active material in a laser emits.


MECHANICAL-STRESS MODULATION

In some materials the ellipsoid of optical-wave normals can be perturbed by mechanical stress. An acoustic wave traveling through such a medium is a propagating stress wave that produces a propagating wave of perturbation in the optical index.

When a sinusoidal acoustic wave produces a sinusoidal variation in the optical index of a thin isotropic medium, the medium can be considered, at any instant of time, as a simple phase grating. Such a grating diffracts a collimated beam of coherent light into discrete angles whose separation is inversely proportional to the spatial period of the grating. This situation is analogous to an rf carrier phase-modulated by a sine wave. A series of sidebands results, corresponding to the various orders of diffracted light. The amplitude of the mth order is given by an mth-order Bessel function whose argument depends on the peak phase deviation produced by the modulating signal. The phases of the sidebands are the appropriate integral multiples of the phase of the modulating signal. The mth order of diffracted light has its optical frequency shifted by m times the acoustic frequency. The frequency is increased for positive orders and decreased for negative orders.

Similarly, a thick acoustic grating refracts light mainly at discrete input angles. This condition is known as Bragg reflection and is the basis for the Bragg modulator (Fig. 12.5.2). In the Bragg modulator, essentially all the incident light can be refracted into the desired order, and the optical frequency is shifted by the appropriate integral multiple of the acoustic frequency. Figure 12.5.2 shows the geometry of a typical Bragg modulator. The input angles for which Bragg modulation occurs are given by

sin θ = mλ/2Λ

FIGURE 12.5.2 The Bragg modulator.


where θ = angle between propagation direction of input optical beam and planar acoustic wavefronts
λ = optical wavelength in medium
Λ = acoustic wavelength in medium
m = ±1, ±2, ±3, . . .
mθ = angle between propagation direction of output optical beam and planar acoustic wavefronts

The ratio of optical to acoustic wavelength is typically quite small, and m is a low integer, so that the angle θ is very small. Critical alignment is thus required between the acoustic wavefronts and the input light beam. If the modulation bandwidth of the acoustic signal is broad, the acoustic wavelength varies, so that there is a corresponding variation in the angle θ for which Bragg reflection occurs. To overcome this problem, a phased array of acoustic transducers is often used to steer the angle of the acoustic wave as a function of frequency in the desired manner.

A limitation on bandwidth is the acoustic transit time across the optical beam. Since the phase grating in the optical beam at any instant of time must be essentially constant in frequency if all the light is to be diffracted at the same angle, the bandwidth is limited so that only small changes can occur in this time interval.
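The Bragg condition sin θ = mλ/2Λ yields very small angles for typical parameters, which is why the alignment is critical. A minimal sketch; the 633-nm optical wavelength, 80-MHz acoustic frequency, and fused-silica acoustic velocity (about 5960 m/s) are illustrative values, not taken from this handbook:

    import math

    def bragg_angle_deg(opt_wavelength_m: float, ac_wavelength_m: float, m: int = 1) -> float:
        """Bragg input angle from sin(theta) = m * lambda / (2 * Lambda)."""
        return math.degrees(math.asin(m * opt_wavelength_m / (2.0 * ac_wavelength_m)))

    v_acoustic = 5960.0                   # m/s in fused silica (illustrative)
    ac_wavelength = v_acoustic / 80e6     # acoustic wavelength at 80 MHz
    print(bragg_angle_deg(633e-9, ac_wavelength))   # ~0.24 degrees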

MODULATOR CONFIGURATIONS: INTENSITY MODULATION

Polarization Changes

Linearly polarized light can be passed through a medium exhibiting an electrooptical effect and the output beam passed through another polarizer. The modulating electric field controls the eccentricity and orientation of the elliptical polarization and hence the magnitude of the component in the direction of the output polarizer. Typically, the input linear polarization is oriented to have equal components along the fast and slow axes of the birefringent medium, and the output polarizer is orthogonal to the input polarizer. The modulating field causes a phase differential varying from 0 to π rad. This causes the polarization to change from linear (at 0) to circular (at π/2) to linear normal to the input polarization (at π). Thus the intensity passing through the output polarizer varies from 0 to 100 percent as the phase differential varies from 0 to π rad. Figure 12.5.3 shows this typical configuration.

The following equation relates the optical intensity transmission of this configuration to the modulation:

Io/Ii = 1/2(1 − cos φ)

where Io = output optical intensity
Ii = input optical intensity
φ = differential phase shift between fast and slow axes

In the Pockels effect the differential phase shift is linearly related to the applied voltage; in the Kerr effect it is related to the voltage squared:

φ = πv/V    (Pockels effect)
φ = π(v/V)^2    (Kerr effect)

where v is the modulation voltage and V is the voltage that produces a π-rad differential phase shift. Figure 12.5.4 shows the intensity transmission given by the above expression. The most linear part of the modulation curve is at φ = π/2. Often a quarter-wave plate is added in series with the electrooptical material to provide this fixed bias at π/2. A fixed bias voltage on the electrooptical material can also be used. This arrangement is probably the most commonly used broadband intensity modulator.

Early modulators of this type used a uniaxial Pockels cell with the electric field in the direction of optical propagation. In this arrangement, the induced phase differential is directly proportional to the optical-path length, but the electric field is inversely proportional to this path length (at fixed voltage). Thus the phase differential is independent of the path length and depends only on the applied voltage. Typical materials require several kilovolts for a differential phase shift of π in the visible-light region.
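The transfer curve of Fig. 12.5.4 and the quarter-wave bias point follow directly from these equations. A minimal sketch for a Pockels-type modulator; the 3-kV half-wave voltage is an illustrative figure consistent with the “several kilovolts” noted above:

    import math

    def pockels_transmission(v: float, v_pi: float, quarter_wave_bias: bool = True) -> float:
        """Intensity transmission Io/Ii = (1 - cos(phi))/2 with phi = pi*v/V,
        plus pi/2 of fixed bias when a quarter-wave plate is in series."""
        phi = math.pi * v / v_pi + (math.pi / 2.0 if quarter_wave_bias else 0.0)
        return 0.5 * (1.0 - math.cos(phi))

    V_PI = 3000.0                               # half-wave voltage, illustrative
    for v in (-300.0, 0.0, 300.0):              # small drive about the bias point
        print(v, round(pockels_transmission(v, V_PI), 3))
    # Prints ~0.35 / 0.5 / 0.65: nearly linear about the pi/2 bias, as Fig. 12.5.4 shows.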


FIGURE 12.5.3 Electrooptical intensity modulator.

Since the Pockels cell is essentially a capacitor, the energy stored in it is CV^2/2, where C is the capacitance and V is the voltage. This capacitor must be discharged and charged during each modulation cycle. Discharge is typically done through a load resistor, where this energy is dissipated. The high voltages involved mean that the dissipated power at high modulation rates is appreciable.

The high-voltage problem can be overcome by passing light through the medium in a direction normal to the applied electric field. This permits a short distance between the electrodes (so that a high E field is obtained from a low voltage) and a long optical path in the orthogonal direction (so that the cumulative phase differential is experienced). Unfortunately, the materials available are typically uniaxial, having a high eccentricity in the absence of electric fields. When oriented in a direction that permits the modulating electric field to be orthogonal to the propagation direction, the material has an inherent phase differential which is orders of magnitude greater than that induced by the modulating field. Furthermore, minor temperature variations cause perturbations in this phase differential which are large compared with those caused by modulation. This difficulty is overcome by cascading two crystals which are carefully oriented so that temperature effects in one are compensated for by temperature effects in the other. The modulation electrodes are then connected so that their effects add. Commercially available electrooptical modulators are of this type.

The Kerr effect is often used in a similar arrangement. Kerr cells containing nitrobenzene are commonly used as high-speed optical shutters. Polarization rotation produced by the Faraday effect is also used in intensity modulation, by passing the light through an output polarizer in a manner similar to that discussed above. The Faraday effect is more commonly used at wavelengths where materials exhibiting the electrooptical effect are not readily available.

Controlled Absorption

As noted above, the frequency at which a material absorbs energy because of molecular or atomic resonances can be tuned over some small range in materials exhibiting the Stark or Zeeman effect. Laser spectral widths are typically narrow compared with such an absorption line width. Thus the absorption of the narrow laser line can be modulated by tuning the absorption frequency over a range near the laser frequency. Although such modulators have been used, they are not as common as the electrooptical modulators discussed above.


FIGURE 12.5.4 Transmission of electrooptical intensity modulator.

PHASE AND FREQUENCY MODULATION OF BEAMS

Laser-Cavity Modulation

The distance between mirrors in a laser cavity must be an integral number of wavelengths. If this distance is changed by a slight amount, the laser frequency changes to maintain an integral number. The following equation relates the change in cavity length to the change in frequency:

∆f = (C/λ)(∆L/L)

where ∆f = change in optical frequency
∆L = change in laser-cavity length
L = laser-cavity length
λ = optical wavelength of laser output
C = velocity of light in laser cavity

In a cavity 1 m long, a change in mirror position of one optical wavelength produces about 300 MHz of frequency shift. Thus a laser can be frequency-modulated by moving one of its mirrors with an acoustic transducer, but the mass of the transducer and mirror limits the modulation bandwidths that can be achieved.
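A quick check of the 300-MHz figure; a minimal sketch using a 1-m cavity and a mirror displacement of one optical wavelength (any wavelength gives the same answer, since λ cancels):

    def cavity_freq_shift_hz(delta_l_m: float, cavity_len_m: float, wavelength_m: float) -> float:
        """Laser frequency shift: delta_f = (C / lambda) * (delta_L / L)."""
        C = 2.998e8                   # velocity of light, m/s (vacuum value assumed)
        return (C / wavelength_m) * (delta_l_m / cavity_len_m)

    wl = 633e-9                       # illustrative visible wavelength
    print(cavity_freq_shift_hz(wl, 1.0, wl) / 1e6, "MHz")   # ~300 MHz, as in the text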


An electrooptical cell can be used in a laser optical cavity to provide changes in the optical-path length. The polarization is oriented so that it lies entirely along the axis of the modulated electrooptical material. This produces the same effect as moving the mirror but without the inertial restrictions of the mirror’s mass. Under such conditions, the ultimate modulation bandwidth is limited by the Q of the laser cavity. A light beam undergoes several reflections across the cavity, depending on the Q, before an appreciable portion of it is coupled out. The laser frequency must remain essentially constant during the transit time required for these multiple reflections. This limits the upper modulation frequency. Modulation of the laser-cavity length produces a conventional FM signal with modulating signal directly proportional to change in laser-cavity length. Demodulation is conveniently accomplished by optical heterodyning to lower rf frequencies where conventional FM demodulation techniques can be used.

EXTERNAL MODULATION

The Bragg modulator (Fig. 12.5.2) is commonly used to modulate the optical frequency. As such it produces a single-sideband suppressed-carrier type of modulation. Demodulation can be achieved by optical heterodyning to lower rf frequencies, where conventional techniques can be employed for this type of modulation. It is also possible to reinsert the carrier at the transmitter for a frequency reference. This is done by using optical-beam splitters to combine a portion of the unmodulated laser beam with the Bragg modulator output. Conventional double-sideband amplitude modulation has also been achieved by simultaneously modulating two laser beams (derived from the same source) with a common Bragg modulator to obtain signals shifted up and down. Optical-beam splitters are used to combine both signals with an unmodulated carrier. Conventional power detection demodulates such a signal.

Optical phase modulation is commonly accomplished by passing the laser output beam through an electrooptical material, with the polarization vector oriented along the modulated ellipsoid axis of the material. Demodulation is conveniently achieved by optical heterodyning to rf frequencies, FM demodulation, and integrating to recover the phase modulation in the usual manner. For low modulation bandwidths, the electrooptical material can be replaced by a mechanically driven mirror. The light reflected from the mirror is phase-modulated by the changes in the mirror position. This effect is often described in terms of the Doppler frequency shift, which is directly proportional to the mirror velocity and inversely proportional to the optical wavelength.

TRAVELING-WAVE MODULATION

In the electrooptical and magnetooptical modulators described thus far, it is assumed that the modulating signal is essentially constant during the optical transit time through the material. This sets a basic limit on the highest modulating frequency that can be used in a lumped modulator. This problem is overcome in a traveling-wave modulator. The optical wave and the modulation signal propagate with equal phase velocities through the modulating medium, allowing the modulating fields to act on the optical wave over a long path, regardless of how rapidly the modulating fields are changing. The degree to which the two phase velocities can be matched determines the maximum interaction length possible.

OPTICAL HETERODYNING

Two collimated optical beams, derived from the same laser source and illuminating a common surface, produce straight-line interference fringes. The distance between fringes is inversely proportional to the angle between the beams. Shifting the phase of one of the beams results in a translation of the interference pattern, such that a 2π-rad phase shift translates the pattern by a complete cycle. An optical detector having a sensing area small compared with the fringe spacing has a sinusoidal output as the sinusoidal intensity of the interference pattern translates across the detector.


A frequency difference between the two optical beams produces a phase difference between the beams that changes at a constant rate with time. This causes the fringe pattern to translate across the detector at a constant rate, producing an output at the difference frequency. This technique is known as optical heterodyning; one of the beams is the signal beam, the other the local oscillator.

The effect of the optical alignment between the beams is evident. As the angle between the two collimated beams is reduced, the spacing between the interference fringes increases, until the spacing becomes large compared with the overall beam size. This permits a large detector which uses all the light in the beam. If converging or diverging beams are used instead of collimated beams, the situation is similar, except that the interference fringes are curved instead of straight. Making the image of the local-oscillator point coincide with the image of the signal-beam point produces the desired infinite fringe spacing.

Optical heterodyning provides a convenient solution to several possible problems in optical demodulation. In systems where a technique other than simple amplitude modulation has been used (e.g., single-sideband, frequency, or phase modulation), optical heterodyning permits shifting to frequencies where established demodulation techniques are readily available. In systems where background radiation, such as from the sun, is a problem, heterodyning permits shifting to lower frequencies, so that filtering to the modulation bandwidth removes most of the broadband background radiation. The required phase-front alignment also eliminates background radiation from spatial positions other than that of the signal source.

Many systems are limited by thermal noise in the detector and/or front-end amplifier. Cooled detectors and elaborate amplifiers are often used to reduce this noise to the point where photon noise in the signal itself dominates. This limit can also be achieved in an optical heterodyne system with a noncooled detector and normal amplifiers by increasing the local-oscillator power to the point where photon noise in the local oscillator is the dominant noise source. Under these conditions, the signal-to-noise power ratio is given by the following equation:

S/N = ηλP/2hBC

where S/N = signal-power to noise-power ratio
η = quantum efficiency of photodetector
λ = optical wavelength
h = Planck's constant
C = velocity of light
B = bandwidth over which S/N is evaluated
P = optical signal power received by detector
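A minimal numerical sketch of the photon-noise-limited heterodyne S/N above; the 1-nW received power, 1-MHz bandwidth, 0.8 quantum efficiency, and 1.06-µm wavelength are illustrative values only:

    import math

    def heterodyne_snr(eta: float, wavelength_m: float, p_sig_w: float, bw_hz: float) -> float:
        """Photon-noise-limited heterodyne S/N = eta * lambda * P / (2 * h * B * C)."""
        h = 6.626e-34      # Planck's constant, J*s
        C = 2.998e8        # velocity of light, m/s
        return eta * wavelength_m * p_sig_w / (2.0 * h * bw_hz * C)

    snr = heterodyne_snr(0.8, 1.06e-6, 1e-9, 1e6)
    print(f"S/N = {snr:.3g} ({10 * math.log10(snr):.1f} dB)")   # ~2.1e3, about 33 dB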


CHAPTER 12.6

FREQUENCY CONVERTERS AND DETECTORS

Glenn B. Gawler

GENERAL CONSIDERATIONS OF FREQUENCY CONVERTERS

A frequency converter usually consists of an oscillator (called a local oscillator or LO) and a device used as a mixer. The mixing device is either nonlinear or its transfer parameter can be made to vary in synchronism with the local oscillator. A signal voltage with information in a frequency band centered at frequency fs enters the frequency converter, and the information is reproduced in the intermediate-frequency (i.f.) voltage leaving the converter. If the local-oscillator frequency is designated fLO, the i.f. voltage information is centered about a frequency fif = fLO ± fs. The situation is shown pictorially in Fig. 12.6.1.

Characteristics of interest in the design of systems using frequency converters are gain, noise figure, image rejection, spurious responses, intermodulation and cross-modulation capability, desensitization, and local-oscillator-to-rf and local-oscillator-to-i.f. isolation. These characteristics will be discussed at length in the descriptions of different types of frequency-converter mixers and their uses in various systems. First, explanations are in order for the above terms.

Frequency-Converter Gain. The available power gain of a frequency converter is the ratio of the power available from the i.f. port to the power available at the signal port. Similar definitions apply for transducer gain and power gain.

Noise Figure of Frequency Converter. The noise factor is the ratio of the noise power available at the i.f. port to the noise power available at the i.f. port because of the source alone at the signal port.

Image Rejection. For difference mixing, fif = fLO – fs and the image is 2fLO − fs. For sum mixing, fif = fLO + fs and the image is 2fLO + fs. An undesired signal at the difference mixing frequency 2fLO – fs results in energy at the i.f. port. This condition is called image response, and attenuation of the image response is image rejection, measured in decibels.

Spurious Responses. Spurious external signals reach the mixer and result in generation of undesired frequencies that may fall into the intermediate-frequency band. The condition for an interference in the i.f. band is

mf′s ± nfLO = ±fif

where m and n are integers and f′s represents spurious frequencies at the signal port of the mixer.

Example. There is a strong local station in the broadcast band at 810 kHz and a weak distant station at 580 kHz. A receiver is tuned to the distant station, and a whistle, or beat, at 5 kHz is heard on the receiver (refer to Fig. 12.6.2).


FIGURE 12.6.1 Frequency-converter terminals and spectrum.

An analysis shows that the second harmonic of the local oscillator interacts with the second harmonic of the 810-kHz signal to produce a mixer output at 450 kHz, in the i.f. band of the receiver:

580 + 455 = 1035 kHz = LO frequency
2 × 1035 – 2 × 810 = 450 kHz = i.f. interference frequency

The interference at 450 kHz then mixes with the 455-kHz desired signal in the second detector to produce the 5-kHz whistle. Notice that if the receiver is slightly detuned upward by 5 kHz, the whistle will zero-beat. Further upward detuning will create a whistle of increasing frequency.
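The (m, n) harmonic pair responsible for such a whistle can be found by brute force from the spurious-response condition mf′s ± nfLO = ±fif. A minimal sketch that rediscovers the m = n = 2 response of the example:

    def spurious_orders(f_spur_khz: float, f_lo_khz: float, f_if_khz: float,
                        max_order: int = 5, tol_khz: float = 6.0) -> list:
        """Harmonic pairs (m, n) whose beat |m*f_spur - n*f_lo| lands within
        tol of the i.f. frequency."""
        hits = []
        for m in range(1, max_order + 1):
            for n in range(1, max_order + 1):
                beat = abs(m * f_spur_khz - n * f_lo_khz)
                if abs(beat - f_if_khz) <= tol_khz:
                    hits.append((m, n, beat))
        return hits

    # 810-kHz station, 1035-kHz LO, 455-kHz i.f.: (2, 2) lands 5 kHz off the i.f.
    print(spurious_orders(810.0, 1035.0, 455.0))   # [(2, 2, 450.0)]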

INTERMODULATION

Intermodulation is particularly troublesome because a pair of strong signals that pass through a receiver preselector can cause interference in the i.f. passband, even though the strong signals themselves do not enter the passband. Consider two undesired signals at 97 and 94 MHz passing through a superheterodyne receiver tuned to 100 MHz. Suppose, further, that the i.f. is so selective that a perfect mixer would allow no response to the signals (see Fig. 12.6.3). Third-order intermodulation in a physically realizable mixer will result in interfering signals at the i.f. frequency and 9 MHz away (corresponding to 100- and 91-MHz rf frequencies, respectively). Fifth-order intermodulation will produce interferences 3 and 12 MHz from the intermediate frequency (103- and 88-MHz rf frequencies).

There is a formula for the variation of intermodulation products that is quite useful. Figure 12.6.4 shows typical variations of desired output and intermodulation with input power level. Desired output increases 1 dB for each 1-dB increase of input level, whereas third-order intermodulation increases 3 dB for each 1-dB increase of input level. At some point the mixer saturates and the above behavior no longer obtains. Since the interference of the intermodulation product is primarily of interest near the system sensitivity limit (usually somewhere below –20 dBm), the 1-dB-per-1-dB and 3-dB-per-1-dB patterns hold. The formula can be written

P21 = 2PN + PF – 2P/21

where P21 = level of intermodulation product (dBm)
PN = power level of interfering signal nearest P21
PF = power level of interfering signal farthest from P21

FIGURE 12.6.2 Spurious response in AM receiver.


FIGURE 12.6.3 Spurious-response analysis.

P/21 is the third-order intercept power. For proper orientation, fN = 97 MHz, fF = 94 MHz, f21 = 100 MHz in Fig. 12.6.5. The intercept power is a function of frequency. It can be used for comparisons between mixer designs and for determining allowable preselector gain in a receiving system.
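A minimal sketch of the intercept formula, predicting the product level for equal-level tones; the –30-dBm drive levels and +15-dBm intercept are illustrative numbers, not handbook data:

    def im3_level_dbm(p_near_dbm: float, p_far_dbm: float, intercept_dbm: float) -> float:
        """Third-order product level: P21 = 2*PN + PF - 2*P/21 (all in dBm)."""
        return 2.0 * p_near_dbm + p_far_dbm - 2.0 * intercept_dbm

    # Two -30-dBm tones and a +15-dBm intercept put the product at -120 dBm;
    # raising both tones 1 dB raises the product 3 dB, as Fig. 12.6.4 shows.
    print(im3_level_dbm(-30.0, -30.0, 15.0))   # -120.0
    print(im3_level_dbm(-29.0, -29.0, 15.0))   # -117.0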

FREQUENCY-CONVERTER ISOLATION

There are two paths in a mixer where isolation is important. The so-called balanced mixers give some isolation of the local-oscillator energy at the rf port. This keeps the superheterodyne receiver from radiating excessively. The doubly balanced mixers also give rf-to-i.f. isolation. This keeps interference in the receiver rf environment from penetrating the mixer directly at the i.f. frequency. Less important, but still significant, is the LO-to-i.f. isolation. This keeps LO energy from overloading the i.f. amplifier. Also, in multiple-conversion receivers, low LO-to-i.f. leakage minimizes spurious responses in subsequent frequency converters.

Desensitization

A strong signal in the rf bandwidth, not directly converted to i.f., drives the operating point of the mixer into a nonlinear region. The mixer gain is then either decreased or increased. In radar, the characteristic of concern is pulse desensitization. In television receivers the characteristic is called cross-modulation. Here the strong

FIGURE 12.6.4 Third-order intermodulation intercept power.


FIGURE 12.6.5 Intermodulation in a superheterodyne receiver.

undesired adjacent TV station modulates the mixer gain, especially during synchronization intervals, where the signal is strongest. The result appears in the desired signal as a contrast modulation of picture with the pattern of the undesired sync periods, corresponding to mixer gain pumping by the strong adjacent channel.

SCHOTTKY DIODE MIXERS The Schottky barrier diode is an improvement over the point-contact diode. The Schottky diode has two features that make it very valuable in high-frequency mixers: (1) it has low series resistance and virtually no charge storage, which results in low conversion loss; (2) it has noise-temperature ratio very close to unity. The noise factor of a mixed-i.f. amplifier cascade is F = LM(tD + Fif – 1) where LM = mixer loss tD = diode noise-temperature ratio Fif = i.f. noise factor Since tD is near unity and LM is in the range of 2.4 to 6 dB, overall noise factor is quite good, with Fif near 1.5 dB in well-designed systems. The complete conversion matrix involves LO harmonic sums and differences, as well as signal, i.f., and image frequencies. They restrict their treatment of crystal rectifiers to the third-order matrix  I1     I 2  = [Y ]  *  I3 

V1   y11    V2  Y =  y21  * y  31 V3 

y12 y22 y32

y13   y23  y33 

where 1 denotes the signal port; 2, the i.f. port; and 3, the image port. With point-contact diodes, the series resistance is so large that not much improvement is realized by terminating the image frequency, and terminating the other frequencies involved is less significant. With the advent of Schottky barrier diodes, which have much smaller series resistances, proper termination of the pertinent frequencies other than the signal and i.f. frequencies minimizes the conversion loss. This, in turn, minimizes the noise figure. Several different configurations are used with Schottky mixers. Figure 12.6.6 shows an image-rejection mixer, which is used in low-i.f.-frequency systems where rf filtering of the image is impractical.
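A minimal numeric sketch of the cascade relation F = LM(tD + Fif − 1) given above; the 4-dB mixer loss is an illustrative value within the 2.4- to 6-dB range quoted:

    import math

    def db_to_ratio(db: float) -> float:
        return 10.0 ** (db / 10.0)

    def mixer_cascade_nf_db(loss_db: float, t_diode: float, f_if_db: float) -> float:
        """Overall noise figure of a mixer followed by an i.f. amplifier:
        F = LM * (tD + Fif - 1), with everything converted to power ratios."""
        f = db_to_ratio(loss_db) * (t_diode + db_to_ratio(f_if_db) - 1.0)
        return 10.0 * math.log10(f)

    # 4-dB conversion loss, tD = 1.0, 1.5-dB i.f. noise figure -> 5.5 dB overall.
    print(round(mixer_cascade_nf_db(4.0, 1.0, 1.5), 2))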


FIGURE 12.6.6 Mixer designed for image rejection.

There is a general rule of thumb for obtaining good intermodulation, cross-modulation, and desensitization performance in mixers: it has been found experimentally that pumping a mixer harder extends its range of linear operation. The point-contact diode had a rapidly increasing noise figure with high LO power level and could easily burn out with too much power. The Schottky diode, however, degrades in noise figure relatively slowly with increasing LO power, and it can tolerate quite large amounts of power without burnout. There is a limit to this process of increasing LO power: the Schottky diode series resistance begins to appear nonlinear. This leads to another rule of thumb: pump the diode between two linear regions, and spend as little time as possible in the transition region. Application of these two rules leads to the doubly balanced Schottky mixer. The reason for this is that one pair of diodes conducts hard and holds the other pair off. Hence large LO power is required, and one diode pair is conducting well into its linear region while the other diode pair is held in its linear nonconducting region.

DOUBLY BALANCED MIXERS

The diode doubly balanced mixer, or ring modulator, is shown in Fig. 12.6.7. The doubly balanced mixer is used up to and beyond 1 GHz in this configuration. The noise-figure optimization process previously discussed applies to this type of mixer. It exhibits good LO-to-rf and LO-to-i.f. isolation, as shown in Fig. 12.6.8. Typical published data on mixers quote 30-dB rf-to-i.f. isolation below 50 MHz and 20-dB isolation from 50 to 500 MHz. Another feature of balanced mixers is their LO noise-suppression capability. Modern mixers using Schottky diodes in a ring modulator provide somewhat better LO noise suppression. The ring modulator provides suppression only of AM LO noise, not FM noise.

FIGURE 12.6.7 Doubly balanced mixer.

PARAMETRIC CONVERTERS

Parametric converters make use of time-varying energy-storage elements. Their operation is in many ways similar to that of parametric amplifiers (see Section 11). The difference is that output and input frequencies are the same in parametric amplifiers, while the frequencies differ in parametric converters. The device most widely


FIGURE 12.6.8 Generation of 920-kHz beat in TV tuners.

used for microwave parametric converters today is the varactor diode, which has a voltage-dependent junction capacitance. The time variation of varactor capacitance is provided by a local oscillator, usually called the pump. Attainable gain of a parametric converter is limited by the ratio of output to input frequencies. Therefore up conversion is generally used to achieve some gain. Because lower-sideband up conversion results in negative resistance, the upper sideband is generally used. This results in simpler circuit elements to achieve stability. There is a distinct advantage to up conversion; image rejection is easily achievable by a simple low-pass filter.

TRANSISTOR MIXERS

One of the original concerns in transistor mixers was their noise performance. The base spreading resistance rb is very important in noise performance. The reason is that mixing occurs across the base-emitter junction; the i.f. signal is then amplified by transistor action. However, rb is a lossy part of the termination at the i.f., signal, image, and all other frequencies present in the mixing process. Hence rb dissipates some energy at each of the frequencies present, and all these contributions add to appear as a loss in the signal-to-i.f. conversion. This loss, in turn, degrades the noise figure.

Manufacturers do not promote transistors used as mixers, probably because of their intermodulation and spurious-response performance. Estimates of intermodulation intercept power go as high as +12 dBm, while one measurement gave +5 dBm at 200 MHz; however, a cascode transistor mixer is used in a commercial VHF television tuner.

MEASUREMENT OF SPURIOUS RESPONSES

Figure 12.6.9 shows an arrangement for measuring mixer spurious responses. The filter following the signal generator ensures that generator harmonics are down, say, 40 dB. This ensures that any frequency-multiplying action is owing only to the mixer under test. The attenuator following the mixer can be used to make sure that a spurious response of the receiver is not being measured; that is, a 6-dB change in attenuator setting should be accompanied by a 6-dB change on the indicator.

Generally the most convenient way of performing the spurious-response test is first to obtain an indication on the indicator. Then tune the signal generator to the desired frequency and record the level required to obtain the original response. This should be repeated at one or two more levels of the undesired signal to ensure that the spur follows the appropriate laws. For example, if the response is fourth-order (four times the signal frequency ±n


FIGURE 12.6.9 Test equipment for measuring mixer spurious responses.

times the LO frequency), the measured value should change 4 dB for a 1-dB change in undesired-signal level. The order of the spurious response can be determined by either of two methods. The first method is simply to know with some accuracy the undesired signal frequency and the LO frequency and then determine the harmonic numbers required to obtain the i.f. frequency. The other technique entails observing the incremental changes of the i.f. frequency with known changes in the undesired signal frequency and the LO signal frequency. This completes the measurement for one spurious response. The procedure should be repeated for each of the spurious responses to be measured.

The intermodulation test setup is shown in Fig. 12.6.10. In general, a diplexer is preferable to a directional coupler for keeping the generator 1 signal out of generator 2. This is necessary so that the measurement is not limited by the test setup. A good idea is to establish that no third-order intermodulation occurs because of the setup alone. To do this, initially remove the mixer-LO circuit. Then tune generator 1 off from center frequency to about 10 or 20 dB down on the skirt of the receiver preselector. Tune generator 2 twice this amount from the receiver center frequency. Set the generator levels equal and at some initial value, say –30 dBm. Then vary one generator frequency slightly and look for a response peak on the indicator. If none is noticed, increase the generator level to –20 dBm and repeat the procedure. Usually, except for very good receivers, the third-order intermodulation response is found. Vary the attenuator by 6 dB, and look for a 6-dB variation in the indicator reading. If the latter is not 6 dB but 18 dB, intermodulation is occurring in the receiver. If the indicator variation is between 6 and 18 dB, intermodulation is occurring both in the circuitry preceding the attenuator and in the receiver.

To obtain trustworthy measurements with a mixer in the test position, the indicator should read at least 20 dB greater than without the mixer, while the generator levels should be lower by the mixer gain +10 dB than they were without the mixer. This ensures that the test setup contributes an insignificant amount to the intermodulation measurement. With the mixer in the test position and the above conditions satisfied, obtain a reading on the indicator and let the power referred to the mixer input be denoted by P (dBm). Turn down both generator levels, and retune generator 1 to center frequency. Adjust the generator 1 level to obtain the previous indicator reading. This essentially calibrates the measurement setup. Denote the generator 1 level referred to the mixer input by P21 (dBm). Then the intermodulation intercept power is given by

P/21 (dBm) = (3P – P21)/2

The subscripts on the intercept power P/21 refer to second order for the near frequency and first order for the far frequency (see Fig. 12.6.4).
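A minimal sketch of that calibration arithmetic; the measured levels are invented for illustration:

    def intercept_from_measurement(p_dbm: float, p21_dbm: float) -> float:
        """Third-order intercept from the two-tone measurement: P/21 = (3P - P21)/2."""
        return (3.0 * p_dbm - p21_dbm) / 2.0

    # Two tones at -20 dBm producing a product equivalent to -90 dBm at the mixer
    # input imply a +15-dBm intercept. Consistency check against the prediction
    # formula: P21 = 2*PN + PF - 2*P/21 = 2*(-20) + (-20) - 2*(15) = -90 dBm.
    print(intercept_from_measurement(-20.0, -90.0))   # 15.0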

FIGURE 12.6.10 Test equipment for measurement of mixer intermodulation.


The procedure should be repeated for one or two lower values of P. The corresponding values of PI21 should asymptotically approach a constant value. The constant value of PI21 so obtained is then a valid number for predicting the behavior of the mixer near its sensitivity limit.

DETECTORS (FREQUENCY DECONVERTERS)

Detectors have become more complex and versatile since the advent of integrated circuits. Up to the mid-1950s most radio receivers used the standard single-diode envelope detector for AM and a Foster-Seeley discriminator or ratio detector for FM. Today, integrated circuits are available with i.f.-amplifier, detector, and audio-amplifier functions in a single package.

Figure 12.6.11 shows three conventional AM detectors. In Fig. 12.6.11a an envelope detector is shown. In order for the detected output to follow the modulation envelope faithfully, the RC time constant must be chosen so that RC < 1/ωm, where ωm is the maximum angular modulation frequency in the envelope. Figure 12.6.11b shows a peak detector. Here the RC time constant is chosen large, so that C stays charged to the peak voltage. Usually, the time constant depends on the application. In a television field-strength meter, the charge on C should not decay significantly between horizontal sync pulses separated by 62.5 μs; hence a time constant of 1 to 6 ms should suffice. On the other hand, an AGC detector for single-sideband use should have a time constant of 1 s or longer. Figure 12.6.11c shows a product (synchronous) detector. This type of detector has been used since the advent of single-sideband transmission. The product detector multiplies the signal with the LO, or beat-frequency oscillator (BFO), to produce outputs at the sum and difference frequencies. The low-pass filter then passes only the difference frequency. The result is a clean demodulation with a minimum of distortion for single-sideband signals.

FIGURE 12.6.11 AM detectors: (a) AM envelope detector; (b) peak detector; (c) product detector.
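For a sense of scale, a minimal check of the envelope-detector bound RC < 1/ωm with hypothetical component values:

```python
# Quick check of the envelope-detector design bound above, RC < 1/omega_m.
# The modulation frequency and resistor value are hypothetical.
import math

f_m = 5e3                               # Hz, highest modulation frequency
rc_max = 1.0 / (2.0 * math.pi * f_m)    # s, since omega_m = 2*pi*f_m
print(f"RC must be below {rc_max * 1e6:.1f} us")        # about 31.8 us

r = 10e3                                # ohm, assumed detector load resistor
print(f"so C must be below {rc_max / r * 1e9:.1f} nF")  # about 3.2 nF
```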

The two classical FM detectors widely used up to the present are the Foster-Seeley discriminator and the ratio detector. Figure 12.6.12 shows the Foster-Seeley discriminator and its phasor diagrams. The circuit consists of a double-tuned transformer, with primary and secondary voltages series-connected. The diode connected to point A detects the peak value of V1 + V2/2, and the diode at B detects the peak value of V1 - V2/2. The audio output is then the difference between the detected voltages. When the incoming frequency is in the center of the passband, V2 is in quadrature with V1, the detected voltages are equal, and the audio output is zero. Below the center frequency the detected voltage from B decreases, while that from A increases, and the audio output is positive. By similar reasoning, an incoming frequency above band center produces a negative audio output. Optimum linearity requires that KQ = 2, where K is the transformer coupling and Q is the primary and secondary quality factor.

FIGURE 12.6.12 FM detectors: (a) Foster-Seeley FM discriminator; (b) phasor diagrams; (c) ratio detector.

Figure 12.6.12c shows a ratio detector, which has an advantage over the Foster-Seeley discriminator in being relatively insensitive to AM. The ratio detector uses a tertiary winding (winding 3) instead of the primary voltage, and one diode is reversed; however, the phasor diagrams also apply to the ratio detector. The AM-rejection feature results from choosing the (R1 + R2)C time constant large compared with the period of the lowest frequency to be faithfully reproduced. The voltages EOA and EOB represent the detected values of the rf voltages across OA and OB, respectively. With the large time constant above, the voltage on C changes slowly with AM, and the conduction angles of the diodes vary, loading the tuned circuit so as to keep the rf amplitudes relatively constant. Capacitor C0 is chosen to be an rf short circuit but small enough to follow the required audio variations. In the AM-rejection process, the AF voltage on C0 does not follow the AM because the charge put on by one diode is removed by the other diode. With FM variations on the rf, the voltage on C0 changes to reach the condition, again, that charge put on C0 by one diode is removed by the other diode. The ratio detector is generally used with little or no previous limiting of the rf, while the Foster-Seeley discriminator must be preceded by limiters to provide AM rejection.

With the recent trend toward integrated circuits, there has been increased interest in using phase-locked loops and product detectors. These techniques have been selected because they do not require inductors, which are not readily available in integrated form. Figure 12.6.13 shows a phase-locked loop (PLL) as an FM detector.

FIGURE 12.6.13 FM detector using phase-locked loop.

The phase comparator merely provides a dc voltage proportional to the difference in phase between the signals represented by fM and f. Initially, f and fM are unequal, but because of the high loop gain, GH >> 1, f and fM quickly become locked and stay locked. Then as fM varies, f follows exactly. Because of the high loop gain, the response is essentially 1/H, where H is the voltage-controlled-oscillator (VCO) characteristic. Hence the PLL serves as an FM detector. AM product detectors also make use of the PLL to provide a carrier locked to the incoming signal carrier; the output of the VCO is used to drive the product detector.

Probably one of the most stringent uses of the product detector is in an FM stereo decoder. The left-minus-right (L - R) subcarrier is located at 38 kHz, with sidebands from 23 to 53 kHz. There may also be an SCA signal centered about 67 kHz, which is used to provide a music service for restaurants and commercial offices. The L - R product detector is driven by a 38-kHz VCO, the output of which also goes to a 2-to-1 counter. The counter output is compared with the 19-kHz pilot signal in a phase comparator, and the phase-comparator output then controls the VCO. Because of the relatively small pilot signal and the presence of the L + R, L - R, and SCA information, the requirement for phase locking is stringent.
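To make the loop behavior concrete, here is a minimal discrete-time simulation sketch of a PLL used as an FM detector, in the spirit of Fig. 12.6.13. The loop topology (multiplier phase comparator, one-pole low-pass, VCO) is standard, but every gain and frequency below is an illustrative assumption, not a handbook value.

```python
# A behavioral sketch of a PLL FM detector in discrete time: multiplier phase
# comparator, one-pole low-pass filter, and a VCO. All gains and frequencies
# are illustrative choices, not values from this handbook.
import numpy as np

fs = 1e6                                  # sample rate, Hz
t = np.arange(int(0.005 * fs)) / fs
fc, fm, dev = 100e3, 1e3, 5e3             # carrier, modulation, deviation (Hz)

# FM input with instantaneous frequency fc + dev*sin(2*pi*fm*t)
phase_in = 2 * np.pi * fc * t + (dev / fm) * (1 - np.cos(2 * np.pi * fm * t))
x = np.cos(phase_in)

kp = 0.2                                  # loop amplifier gain (assumed)
kv = 2 * np.pi * 50e3                     # VCO gain, rad/s per volt (assumed)
alpha = 0.12                              # one-pole filter, ~20 kHz at fs = 1 MHz
pd, vco_phase = 0.0, 0.0
v = np.zeros_like(t)                      # VCO control voltage = audio output
for n in range(len(t)):
    # phase-comparator product, low-passed to approximately sin(phase error)
    pd += alpha * (2.0 * x[n] * -np.sin(vco_phase) - pd)
    v[n] = kp * pd
    vco_phase += (2 * np.pi * fc + kv * v[n]) / fs

# In lock, kv*v tracks the input frequency deviation, so v is the recovered
# 1-kHz tone with amplitude ~2*pi*dev/kv (about 0.1 V here): the output scale
# is set by 1/H, the inverse VCO characteristic, as described above.
```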


SECTION 13

POWER ELECTRONICS

Power electronics deals with the application of electronic devices and associated components to the conversion, control, and conditioning of electric power. The primary characteristics of electric power that are subject to control include its basic form (ac or dc), its effective voltage or current (including the limiting cases of initiation and interruption of conduction), and its frequency and power factor (if ac). The control of electric power is a means for achieving control or regulation of one or more nonelectrical parameters, e.g., the speed of a motor, the temperature of an oven, the rate of an electrochemical process, or the intensity of lighting.

Aside from the obvious difference in function, power-electronics technology differs markedly from the technology of low-level electronics for information processing in that much greater emphasis is required on achieving high power efficiency. Few low-level circuits exceed a power efficiency of 15 percent, but few power circuits can tolerate a power efficiency of less than 85 percent. High efficiency is vital, first, because of the economic and environmental cost of wasted power and, second, because of the cost of dissipating the heat it generates. This high efficiency cannot be achieved by simply scaling up low-level circuits; a different approach must be adopted.

That different approach is to use electronic devices as switches, i.e., approximating ideal closed (no voltage drop) or open (no current flow) switches. This differs from low-level digital switching circuits in that digital systems are primarily designed to deliver two distinct small voltage levels while conducting small currents (ideally zero). Power electronic circuits, though, must have the capability of delivering large currents and must be able to withstand large voltages. Power can be controlled and modified by controlling the timing of repetitive switch action. Because of wear and limited switching speed, mechanical switches are ordinarily not suitable, but electronic switches have made this approach feasible into the multigigawatt power region while maintaining high power efficiency over wide ranges of control. However, the inherent nonlinearity of the switching action leads to the generation of transients and spurious frequencies that must be considered in the design process.

Reliability of power electronic circuits is just as important as efficiency. Modern power converter and control circuits must be extremely robust, with MTBF (mean time between failures) for typical systems on the order of 1,000,000 h of operation.

Power electronic circuits are often divided into categories depending on their intended function. Converter circuits that change ac into dc are called rectifiers, circuits that change the dc operating voltage or current are called dc-to-dc converters, circuits that convert dc into ac power are called inverters, and those that change the amplitude and frequency of the ac voltage and/or current without using an intermediate dc stage are ac-to-ac converters (also called cycloconverters).

Rectifiers are used in many power electronics applications because of the widespread availability of ac power sources, and rectification is often a first step in the power-conditioning scheme. Rectifiers are used in very low voltage systems (e.g., 3-V logic circuits) as well as in the very high voltage applications of commercial utilities. The control and circuit topology can vary according to the application requirements.
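To put these efficiency figures in perspective, a one-line calculation with a hypothetical 10-kW load shows why low-level design practice cannot simply be scaled up:

```python
# Illustrative arithmetic for the efficiency point above (hypothetical values):
# the heat a converter must shed grows quickly as efficiency falls.
def loss_watts(p_out_w: float, efficiency: float) -> float:
    """Dissipated power for a converter delivering p_out_w at a given efficiency."""
    return p_out_w / efficiency - p_out_w

for eff in (0.95, 0.85, 0.15):
    print(f"10 kW load at {eff:.0%} efficiency dissipates {loss_watts(10_000, eff):,.0f} W")
# 95% -> ~526 W; 85% -> ~1,765 W; a 15%-efficient stage would have to shed
# ~56,667 W of heat, which is clearly untenable at power-circuit scale.
```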
Dc-dc converters have many implementations that depend on the intended application and can make use of different types of input power sources. Often, ac power is rectified and filtered to supply the requisite input dc levels. An inverter section is then used to transform the dc power to high-frequency ac voltage or current, which a transformer then steps up or down. The new ac from the transformer secondary is then rectified and filtered to provide the desired output dc level. Other dc-dc converters step voltage up or down without the intervening transformer.

Inverters convert dc into ac power. Many applications require the production of three-phase power waveforms for speed control of large motors used in industry. The reconstruction of single-frequency, near-sinusoidal


voltage or current waveforms requires precisely controlled switching circuits. The exact mode and timing of the switching action in the associated power electronic devices can be complex, especially when regenerative schemes are employed to recover energy from the mechanical system and convert it back to electrical energy for more efficient operation. Inverter circuit design and control has been the subject of much research and development over the past several decades.

Ac-ac power control, without changing frequency, is accomplished by simple converters that allow conduction to begin at a time past the zero-crossing of the voltage or current waveform (referred to as phase control), or by more complex converters that create completely new amplitudes and frequencies for the output ac power.

Note: Original contributions to this section were made by W. Newell. Portions of the material on diodes were contributed by P. F. Pittman, J. C. Engel, and J. W. Motto. D.C.
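The phase-control technique mentioned above lends itself to a short worked example. The expression below is the standard textbook RMS result for symmetric full-wave phase control into a resistive load; it is not derived in this section, so treat it as a stated assumption.

```python
# RMS output of a full-wave ac phase controller into a resistive load,
# as a fraction of the full-conduction RMS value (standard textbook result,
# assumed here rather than taken from this handbook).
import math

def rms_fraction(alpha_deg: float) -> float:
    """Conduction delayed by firing angle alpha past each zero-crossing."""
    a = math.radians(alpha_deg)
    return math.sqrt(1 - a / math.pi + math.sin(2 * a) / (2 * math.pi))

for alpha in (0, 45, 90, 135, 180):
    print(f"firing angle {alpha:3d} deg -> {rms_fraction(alpha):.3f} of full RMS")
# 0 -> 1.000, 45 -> 0.953, 90 -> 0.707, 135 -> 0.301, 180 -> 0.000
```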

In This Section:

CHAPTER 13.1 POWER ELECTRONIC DEVICES  13.3
POWER ELECTRONIC DEVICE FAMILIES  13.3
COMMON DEVICE CHARACTERISTICS  13.3
DIODES  13.5
TRANSISTORS  13.10
THYRISTORS  13.14
OTHER POWER SEMICONDUCTOR DEVICES  13.18
BIBLIOGRAPHY  13.18

CHAPTER 13.2 NATURALLY COMMUTATED CONVERTERS  13.19
INTRODUCTION  13.19
BASIC CONVERTER OPERATION  13.20
CONVERTER POWER FACTOR  13.23
ADDITIONAL CONVERTER TOPOLOGIES  13.25
REFERENCES  13.30

CHAPTER 13.3 DC-DC CONVERTERS  13.31
INTRODUCTION  13.31
DIRECT DC-DC CONVERTERS  13.31
INDIRECT DC-DC CONVERTERS  13.37
FORWARD CONVERTERS  13.41
RESONANT DC-DC CONVERSION TECHNIQUES  13.45
BIBLIOGRAPHY  13.49

CHAPTER 13.4 INVERTERS  13.50
INTRODUCTION  13.50
AN INVERTER PHASE-LEG  13.50
SINGLE-PHASE INVERTERS  13.54
THREE-PHASE INVERTERS  13.55
MULTILEVEL INVERTERS  13.56
VOLTAGE WAVEFORM SYNTHESIS TECHNIQUES  13.57
CURRENT WAVEFORM SYNTHESIS TECHNIQUES  13.64
INVERTER APPLICATIONS  13.65
REFERENCES  13.67

CHAPTER 13.5 AC REGULATORS  13.69
CIRCUITS FOR CONTROLLING POWER FLOW IN AC LOADS  13.69
STATIC VAR GENERATORS  13.71
REFERENCES  13.73


CHAPTER 13.1

POWER ELECTRONIC DEVICES

Jerry L. Hudgins

POWER ELECTRONIC DEVICE FAMILIES

Power electronic devices have historically been separated into three broad categories: diodes, transistors, and thyristors. Modern devices can still be classified in this way, though there is increasing overlap in device design and function. Also, new materials as well as novel designs have increased the suitability and broadened the applications of semiconductor switches in energy conversion circuits and systems.

Diodes are two-terminal devices that perform functions such as rectification and protection of other components. Diodes are not controllable, in the sense that they conduct current whenever a positive forward voltage is applied between the anode and cathode. Transistors are three-terminal devices that include the traditional power bipolar (two types of charge carriers), power MOSFETs (metal-oxide-semiconductor field-effect transistors), and hybrid devices that have some aspect of a control-FET element integrated with a bipolar structure, such as the IGBT (insulated-gate bipolar transistor). Thyristors are also three-terminal devices, but they have a four-layer structure (several p-n junctions) for the main power-handling section of the device.

All transistor and thyristor types are controllable in switching from a forward blocking state (very little current flows) into a forward conduction state (large forward current flows). All transistors and most thyristors (except SCRs) are also controllable in switching from forward conduction back to a forward blocking state.

Typically, thyristors are used at the highest energy levels in power conditioning circuits because they are designed to handle the largest currents and voltages of any device technology (systems with voltages above approximately 3 kV or currents above 100 A). Many medium-power circuits (systems operating at less than 3 kV or 100 A) and particularly low-power circuits (systems operating below 100 V or several amperes) generally make use of transistors as the main switching elements because of the relative ease in controlling them. IGBTs are also replacing thyristors (e.g., GTOs) in industrial motor drives and traction applications as the IGBT voltage-blocking capability improves. Diodes are used throughout all levels of power conditioning circuits and systems.

COMMON DEVICE CHARACTERISTICS

A high-resistivity region of silicon is present in all power semiconductor devices. It is this region that must support the large applied forward voltages that occur when the switch is in its off state (nonconducting). The higher the forward blocking voltage rating of the device, the thicker this region must be. Increasing the thickness of this high-resistivity region results in slower turn-on and turn-off (i.e., longer switching times and/or lower frequency of operation). For example, a device rated for a forward blocking voltage of 5 kV will by its physical construction switch much more slowly than one rated for 100 V. In addition, the thicker high-resistivity region


of the 5-kV device will cause a larger forward voltage drop during conduction than the 100-V device carrying the same current. There are other effects associated with the relative thickness and layout of the various regions that make up modern power devices, but the major trade-offs, between forward blocking voltage rating and switching times and between forward blocking voltage and forward voltage drop during conduction, should be kept in mind.

Another physical aspect of the semiconductor material is that the maximum breakdown voltage achievable using a semiconductor is proportional to the energy difference between the conduction and valence bands (bandgap). Hence, a material with a larger bandgap energy than silicon (Si) can in principle achieve the same blocking voltage rating with a thinner high-resistivity region. This is one of the reasons that new semiconductor devices are being designed, and are recently becoming available, in materials such as silicon carbide (SiC).

The time rate of rise of device current (di/dt) during turn-on and the time rate of rise of device voltage (dv/dt) during turn-off are important parameters to control to ensure proper and reliable operation. Many power electronic devices have maximum limits for di/dt and dv/dt that must not be exceeded. Devices capable of conducting large currents in the on-state are necessarily made with large surface areas through which the current flows. During turn-on, localized regions of a device begin to conduct current. If the local current density becomes too large, then heating will damage the device. Sufficient time must be allowed for the entire area to begin conducting before the localized currents become too high and the device's di/dt rating is exceeded. The circuit designer sometimes adds series inductance to limit di/dt below the recommended maximum value.

During turn-off, current is decreasing while the voltage across the device is increasing. If the forward voltage becomes too high while sufficient current is still flowing, then the device will drop back into its conduction mode instead of completing its turn-off cycle. Also, during turn-off, the power dissipation can become excessive if the current and voltage are simultaneously too large. Both of these turn-off problems can damage the device as well as other portions of the circuit. Another problem is associated primarily with thyristors: thyristors can self-trigger into a forward conduction mode from a forward blocking mode if their dv/dt rating is exceeded (because of excessive displacement current through parasitic capacitances).

Protection circuits, known as snubbers, are used with power semiconductor devices to control di/dt and dv/dt. The snubber circuit specifically protects devices from a large di/dt during turn-on and a large dv/dt during turn-off. A general snubber topology is shown in Fig. 13.1.1. The turn-on snubber is made by inductance L1 (often L1 is stray inductance only). This protects the device from a large di/dt during the turn-on process. The auxiliary circuit made by R1 and D1 allows the discharging of L1 when the device is turned off. The turn-off snubber is made by resistor R2 and capacitance C2. This circuit protects the power electronic device from large dv/dt during the turn-off process. The auxiliary circuit made by D2 and R2 allows the discharging of C2 when the device is turned on. The circuit of capacitance C2 and inductance L1 also limits the value of dv/dt across the device during forward blocking.
In addition, L1 protects the device from reverse overcurrents.

FIGURE 13.1.1 Turn-on (top elements) and turn-off (bottom elements) snubber circuits for typical power electronic devices.

All power electronic devices must be derated (e.g., power dissipation levels, current conduction, voltage blocking, and switching frequency must be reduced) when operating above room temperature
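As a rough illustration of how the snubber elements of Fig. 13.1.1 relate to the device di/dt and dv/dt ratings, the sketch below uses only the defining relations v = L di/dt and i = C dv/dt. The bus voltage, load current, and ratings are hypothetical, and a real design must also account for snubber losses and the reset of L1 and C2 each cycle.

```python
# Rough first-cut sizing for the snubber of Fig. 13.1.1, from v = L di/dt and
# i = C dv/dt only. All numeric values are hypothetical.
bus_voltage = 600.0     # V appearing across L1 at turn-on (assumed)
load_current = 50.0     # A diverted into C2 at turn-off (assumed)
di_dt_max = 100e6       # device di/dt rating, A/s (100 A/us, assumed)
dv_dt_max = 1000e6      # device dv/dt rating, V/s (1000 V/us, assumed)

L1_min = bus_voltage / di_dt_max    # smallest L1 that keeps di/dt within rating
C2_min = load_current / dv_dt_max   # smallest C2 that keeps dv/dt within rating
print(f"L1 >= {L1_min * 1e6:.1f} uH, C2 >= {C2_min * 1e9:.0f} nF")
# -> L1 >= 6.0 uH, C2 >= 50 nF
```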


(defined as about 25°C). Bipolar-type devices have thermal runaway problems: if allowed to conduct unlimited current, these devices heat up internally, causing more current to flow, thus generating more heat, and so forth until destruction. Devices that exhibit this behavior are pin diodes, bipolar transistors, and thyristors. MOSFETs must also be derated for current conduction and power dissipation when heated, but they do not suffer from thermal runaway as the other device types do. IGBTs fall in between the behavior of MOSFETs and bipolar transistors: at low current levels they behave like bipolar transistors, whereas operating at high currents causes them to behave more like MOSFETs.

There are many subtleties of power device fabrication and design that are intended to improve the switching times, forward voltage drop during conduction, dv/dt, di/dt, and other ratings. Many of these improvements cause the device to lose the ability to hold off large applied reverse voltages. In other devices the inherent structure itself precludes reverse blocking capability. In general, only some versions of two types of thyristors have equal (symmetric) forward and reverse voltage hold-off capabilities: GTOs (gate turn-off thyristors) and SCRs (silicon controlled rectifiers).

A simple diagram of the internal structure of the major power semiconductor devices, the corresponding circuit symbols, some simple equivalent circuits, and a summary of the principal characteristics of each device are shown in Fig. 13.1.2. A comparison between types of devices illustrating the usable switching-frequency range and switched power capability is shown in Fig. 13.1.3. Switched power capability is defined here as the maximum forward hold-off voltage obtainable multiplied by the maximum continuous conduction current. Further information on power electronic devices can be obtained from manufacturers' databooks and application notes, textbooks (Baliga, 1987, 1996; Ghandi, 1977; Sze, 1981), and many technical journal publications (including Azuma and Kurata, 1988; Hower, 1988; Hudgins, 1993).

DIODES

Diode Types

Schottky and pin diodes are used extensively in power electronic circuits. Schottky diodes are formed by placing a metal layer directly on a lightly doped (usually n-type) semiconductor. The naturally occurring potential barrier at the metal-semiconductor interface gives rise to the rectifying properties of the device. A pin diode is a pn-junction device with a lightly doped (near intrinsic) region placed between the typical diode p- and n-type regions. The lightly doped region is necessary to support large applied reverse voltages. The diode characteristic is such that current easily flows in one direction while it is blocked in the other.

Power Schottky diodes are limited to about 200 V reverse blocking capability, because at higher ratings the forward voltage drop in the high-resistivity region of the semiconductor becomes excessive, and because the lowering of the interface potential barrier (and the associated increase in reverse leakage current) owing to the applied reverse voltage also increases (Sze, 1981). However, new Schottky structures made from SiC material are commercially available that have much higher voltage-blocking capability, and it is likely that these SiC diodes will be available with multi-kV ratings soon. Reverse blocking of up to 10 kV is obtainable with a pin structure in Si. These types of diodes can easily handle surge currents of tens of thousands of amperes and rms currents of several thousand amperes.

The pin diode has the advantages of much higher voltage and current capabilities than the Schottky diode, though the new SiC Schottky diodes are moving into higher power ratings all the time. However, pin diodes are inherently slower in switching speed than Schottky devices, and for low reverse-blocking values they have a larger forward voltage drop than Schottky diodes. For devices rated for 50 V reverse blocking, a Schottky diode has a forward drop of about 0.6 V, as compared to a pin diode's forward drop of about 0.9 V at the same current density. The fast switching of the Schottky structure can be used to advantage in high-frequency power converter circuits. The Schottky's low forward drop can also be used to advantage in the output rectifiers of low-voltage converter circuits. Several structures have been proposed that merge the features of the pin (for high reverse-blocking capability) and Schottky (for fast switching and low forward drop) diodes (Hower, 1988; Baliga, 1987). This concept is beginning to be implemented in the newer SiC diodes.
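Using the forward-drop figures just quoted (about 0.6 V for a Schottky versus 0.9 V for a pin diode in the 50-V class), a minimal sketch of the conduction-loss difference, with a hypothetical average current:

```python
# Conduction-loss comparison suggested by the forward-drop figures above.
# The average rectifier current is hypothetical.
i_avg = 20.0                 # A, average forward current (assumed)

p_schottky = 0.6 * i_avg     # W, using the quoted 0.6 V Schottky drop
p_pin = 0.9 * i_avg          # W, using the quoted 0.9 V pin drop
print(f"Schottky: {p_schottky:.0f} W, pin: {p_pin:.0f} W "
      f"({p_pin - p_schottky:.0f} W saved by the Schottky)")
# -> Schottky: 12 W, pin: 18 W (6 W saved by the Schottky)
```

This is why the Schottky's low drop matters most in low-voltage output rectifiers, where the diode drop is a large fraction of the output voltage.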


FIGURE 13.1.2 Commonly used power electronic devices showing their circuit symbols, simplified internal structures, equivalent circuits (where appropriate), and some major characteristics. The characteristics summarized in this part of the figure are: pin diode, high forward drop, high voltage blocking, slow turn-off; Schottky diode (Si, SiC), fast switching, low voltage blocking; MOSFET (N-channel enhancement mode, with body diode), highest frequency range of operation of any controllable device, low-power gate signal required, good temperature characteristics, resistive forward drop; superjunction MOSFET, similar to the N-channel enhancement-mode MOSFET.


FIGURE 13.1.2 (Continued) IGBT structures and characteristics: N-channel punch-through IGBT with lateral gate, good behavior with respect to temperature variations, moderately good range of operating frequencies, moderately good forward-drop characteristics, good SOA characteristics; N-channel non-punch-through IGBT with lateral gate, similar characteristics.


FIGURE 13.1.2 (Continued) Further structures and characteristics: N-channel punch-through trench-gate IGBT with transparent emitter, similar to the lateral-gate IGBTs; SCR, forward and reverse voltage-blocking capability, very high voltage and current ratings available, low forward drop, no turn-off control; GTO, similar to the SCR except for controllable turn-off capability; p-type MCT (with integrated on-FET and off-FET), no reverse blocking, low-power gate signals for control, controllable turn-on and turn-off, low forward drop.


FIGURE 13.1.3 Comparison between major power electronic devices of their maximum switched power capability in terms of associated switching frequency. The switched power refers to the maximum forward blocking voltage multiplied by the maximum forward conduction current that each device is capable of handling.

Diode Ratings*

Silicon diode ratings include voltage, current, and junction temperature. A list of some of the more important parameters is shown in Table 13.1.1. The device current rating IF is primarily determined by the area of the silicon die, the power dissipation, and the method of heat sinking, while the spread of voltage ratings VRRM is determined by the silicon resistivity and die thickness. Reverse voltage ratings are designated as repetitive VRRM and nonrepetitive VRSM. The repetitive value pertains to steady-state operating conditions, while the nonrepetitive peak value applies to occasional transient or fault conditions. Care must be exercised when applying a device to ensure that the voltage rating is never exceeded, even momentarily.

When the blocking capability of a conventional diode is exceeded, leakage currents flow through localized areas at the edge of the crystal. The resulting localized heating can cause rapid device failure. Although even low-energy reverse overvoltage transients are likely to be destructive, the silicon diode is remarkably rugged with respect to forward current transients. This property is demonstrated by the IFSM rating, which permits a one-half-cycle peak surge current of over ten times the IF rating. For shorter current pulses, less than 4 ms, the surge current is specified by an I²t rating similar to that of a fuse.

Proper circuit design must ensure that the maximum average junction temperature will never exceed its design limit of typically 150°C. Good design practice for high reliability, however, limits the maximum junction temperature to a lower value. The average junction-temperature rise above ambient is calculated by multiplying the average power dissipation, given approximately by the product of VF and IF, by the thermal resistance RθJC. Transient junction temperatures can be computed from the transient thermal-impedance curve.

Device ratings are normally specified at a given case temperature and operating frequency. The proper use of a device at other operating conditions requires an appreciation of certain basic device characteristics.

*Major portions of this subsection were originally contributed by P. F. Pittman, J. C. Engel, and J. W. Motto.
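A worked example of the junction-temperature estimate described above, using the simple rise-above-ambient form given in the text; all numeric values are hypothetical.

```python
# Junction-temperature estimate per the text: Tj ~= Ta + (VF * IF) * RthJC.
# Every numeric value here is hypothetical, not a datasheet figure.
t_ambient = 40.0    # deg C, assumed reference temperature
vf = 1.1            # V, forward drop at the operating current (assumed)
if_avg = 30.0       # A, average forward current (assumed)
r_th_jc = 1.2       # deg C/W, thermal resistance RthJC (assumed)

p_avg = vf * if_avg                 # approximate average power dissipation, W
tj = t_ambient + p_avg * r_th_jc    # average junction temperature, deg C
print(f"P = {p_avg:.0f} W, Tj ~= {tj:.0f} deg C (design limit typically 150 deg C)")
# -> P = 33 W, Tj ~= 80 deg C
```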


TABLE 13.1.1 Symbols for Some Diode Ratings and Characteristics

Maximum Ratings
VRRM      Peak repetitive reverse voltage
VRSM      Peak nonrepetitive reverse voltage
IF(RMS)   RMS forward current
IF(AV)    Average forward current
IFSM      Surge forward current
I²t       Nonrepetitive pulse overcurrent capability
Tj        Junction temperature

Characteristics
VF        Forward voltage drop (at specified temperature and forward current)
IR        Maximum reverse current (at specified temperature and reverse voltage)
tRR       Reverse recovery time (under specified switching of forward and reverse currents)
RθJC      Junction-to-case thermal resistance

This is especially true in applications where the operating conditions of a number of devices are interdependent, as in series and parallel operation. For example, the forward voltage drop of a silicon diode has a negative temperature coefficient of about 2 mV/°C for currents below the rated value. This variation in forward drop must be considered when devices are to be operated in parallel.

The reverse blocking voltage of a diode, at a specified reverse current, effectively decreases with an increase in temperature. The tendency to decrease comes from the fact that the reverse leakage current of a junction increases with temperature, thereby decreasing the voltage attained at a given measuring-current level. If the leakage current is very low, the maximum reverse voltage will be determined by avalanche breakdown (which has a temperature coefficient of approximately 0.1 percent per °C in silicon); thus, the voltage required to cause avalanche actually increases as the temperature rises. It should be noted that the reverse blocking voltage of a conventional diode is usually determined by imperfections at the edge of the die, and thus ideal avalanche breakdown is usually not observed.

The reverse recovery time of a diode causes its performance to degrade with increasing frequency. Because of this effect, the rectification efficiency of a conventional diode used in a power circuit at high frequency is poor. To serve this application, a family of fast-recovery diodes has been developed. The stored charge of these devices is low, with the result that the amplitude and duration of the sweep-out current are greatly reduced compared with those of a conventional diode. However, the improved turn-off characteristics of fast-recovery diodes are obtained at some sacrifice in blocking voltage and forward drop compared with a conventional diode.

TRANSISTORS

Power MOSFETs

MOSFETs and IGBTs have an insulating oxide layer separating the gate contact and the silicon substrate. This insulating layer provides a large effective input resistance, so that the control power necessary to switch these devices is considerably lower than that for a comparable bipolar transistor. The oxide layer also makes MOSFETs and IGBTs subject to damage from electrostatic charge build-up at the gate, so care must be exercised in their handling. Because of the internal structure of the power MOSFET, a pn junction (referred to as the "body diode") is present that conducts when a reverse voltage is applied across the drain and source.

Power MOSFETs do not suffer from second breakdown as bipolar transistors do, and they generally switch much faster, particularly during turn-off. Power MOSFETs have a large, voltage-dependent, effective input capacitance (a combination of the gate-to-source and gate-to-drain capacitances) that can interact with stray circuit inductance in the gate-drive circuit to create oscillations. An external, small-valued resistor is usually placed


in series with the gate lead to damp the oscillatory behavior. Even with a fairly large input capacitance, power MOSFETs can be made to turn on and off faster than any other type of power electronic device.

Power MOSFETs are enhancement-type devices: a nonzero gate-to-source voltage must be applied to form a conducting channel between the drain and source to allow external current to flow. N-channel MOSFETs require a positive applied voltage at the gate with respect to the source for turn-on, while p-channel MOSFETs require a negative gate-source voltage. The gate electrode must be externally shorted to the source electrode for the device to support the maximum drain-source voltage VDS and remain in its forward-blocking mode. Drain current will flow if the gate-source voltage VGS is above some minimum value (the threshold voltage VGS(TH)) necessary to form the conducting channel between the drain and source.

In the saturated mode of operation (i.e., drain current ID primarily dependent only on the gate-source voltage VGS), the most important device characteristic is the forward transconductance gfs, usually specified by the manufacturer with a graph showing its value as a function of ID. The linear mode of operation is preferred for switching applications. Here, VGS is typically in the range of 10 to 20 V. In this mode, ID is approximately proportional to the applied VDS for a given value of VGS. The proportionality constant defines the on-resistance rDS(ON). The on-resistance is the total resistance between the source and drain electrodes in the on-state, and it determines the maximum ID rating (based on power-dissipation restrictions).

As temperature increases, the ability of charge to move through the conduction channel from source to drain decreases. The effect appears as an increase in rDS(ON). The increase in rDS(ON) as a function of absolute temperature goes approximately as T^2.3. Because of this positive temperature exponent, power MOSFETs can be operated in parallel, for increased current capacity, with relative ease. In addition, the safe operating area (SOA) of MOSFETs is relatively large, and the devices can be operated reliably near the SOA limits.
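A short sketch of the quoted T^2.3 scaling, assuming the exponent applies to absolute temperature; the baseline resistance and current are hypothetical.

```python
# Sketch of the rDS(ON) scaling quoted above (approximately T^2.3 with T in
# kelvins) and its effect on conduction loss. Baseline values are hypothetical.
def rds_on(rds_25c: float, temp_c: float) -> float:
    """Approximate on-resistance at temp_c, scaled from the 25 C value."""
    return rds_25c * ((temp_c + 273.15) / (25.0 + 273.15)) ** 2.3

r25, i_d = 0.050, 10.0   # ohm at 25 C and drain current in A (assumed)
for t_c in (25, 75, 125):
    r = rds_on(r25, t_c)
    print(f"{t_c:3d} C: rDS(ON) = {r * 1e3:.0f} mohm, I^2*R loss = {i_d**2 * r:.1f} W")
# At 125 C the resistance is roughly double its 25 C value, and the loss rises
# the same way; this positive coefficient is what makes paralleling forgiving,
# since a hotter die carries less current rather than more.
```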
Power MOSFETs can be obtained with a forward voltage hold-off capability BVDSS of around 1.2 kV (n-channel) and current-handling capacity of up to 100 A at lower BVDSS values. P-channel devices typically have less spread in ratings and are generally not available in the extremes of current handling or hold-off voltage values reached by n-channel devices. MOSFETs can be obtained as discretely packaged parts or with several die configured together to form various half-bridge or full H-bridge topologies in a module. Advanced MOSFETs have integrated features that provide capabilities such as current limiting, voltage clamping, and current sensing for more intelligent system design.

Trench- or buried-gate technology has contributed to the reduction of the Ron × Area product in power MOSFETs (and IGBTs) by a factor of 3 or more compared to surface-gate devices (see Fig. 13.1.2). The trench-gate technology has been further adapted into the newest structure, called the superjunction MOSFET (see Fig. 13.1.2). The horizontal distribution of alternating p- and n-regions modifies the electric-field distribution in the forward-blocking mode such that the n-regions can be designed with a smaller vertical dimension, for the same blocking capability, as in the trench-gate structure. Hence, the shorter current path causes the forward drop to be greatly reduced during conduction. At 100 A/cm² the SJ-MOSFET has been shown to have a forward drop of 0.6 V, as compared to 0.9 V for the traditional MOSFET (Fujihira, 1998). Table 13.1.2 lists some of the more important power MOSFET ratings and characteristics.

IGBTs. Insulated-gate bipolar transistors are designated as n-type or p-type. The n-type device dominates the marketplace because of its ease of use (it is controlled by a positive gate-emitter voltage). The n-type device can be thought of as an n-channel enhancement-mode MOSFET controlling the base current of a pnp bipolar transistor, as shown in Fig. 13.1.2. The naming convention is somewhat confusing because the external leads are labeled with the idea of an IGBT being a direct replacement for an npn transistor, with a gate lead replacing the base lead (i.e., the emitter of the equivalent pnp in Fig. 13.1.2 is the collector of the IGBT, and so forth).

Applying a positive gate voltage above the threshold value VGE(TH) turns the IGBT on. For switching applications, VGE is typically in the range of 10 to 20 V. The IGBT has a saturated mode of operation (similar to a MOSFET), where the collector current is relatively independent of the collector-to-emitter voltage VCE. The base-collector junction of the equivalent pnp can never become forward biased, because of the drain-current flow through the equivalent MOSFET. Therefore, the IGBT always has a forward drop, during conduction, of at least one pn junction (typically around 1 V). This is why the forward voltage drop VCE(ON) of the IGBT is greater than that of a comparable bipolar transistor, but less than that of a pure MOSFET structure at rated current flow.

The switching times of the IGBT are shorter than those of comparable bipolar transistors (resulting in a higher frequency of operation), and IGBTs are not as susceptible to failure modes as bipolars are. The turn-off of an IGBT is characterized by two distinct portions of its current waveform. The first portion is characterized by a steep drop associated with the interruption of base current to the equivalent pnp transistor (i.e., the internal MOSFET turns off). The second


TABLE 13.1.2 Symbols for Some MOSFET Ratings and Characteristics

Maximum Ratings
VDS       Drain-source voltage
ID        Continuous drain current
IDM       Pulsed drain current
Tj        Junction temperature
PD        Maximum power dissipation

Characteristics
BVDSS     Drain-source breakdown voltage
VGS(TH)   Gate threshold voltage
IDSS      Zero gate-voltage drain current
ID(on)    On-state drain current
rDS(ON)   Static drain-source on-state resistance
gfs       Common-source forward transconductance
CISS      Input capacitance
COSS      Output capacitance
CRSS      Reverse transfer capacitance
td(on)    Turn-on delay time
tr        Rise time
td(off)   Turn-off delay time
tf        Fall time
Qg        Total gate charge (gate-source + gate-drain)
Qgs       Gate-source charge
Qgd       Gate-drain ("Miller") charge
LD        Internal drain inductance
LS        Internal source inductance
RθJC      Junction-to-case thermal resistance

Body Diode Ratings
IS        Continuous source current
ISM       Pulse source current
VSD       Diode forward voltage drop
trr       Reverse recovery time
QRR       Reverse recovered charge
tON       Forward turn-on time

portion is known as the current tail and can be very long in time; it is associated with the final turn-off of the bipolar transistor structure. Much of the IGBT design effort is aimed at modifying this current tail to control the switching time and/or the power dissipation during turn-off.

If a large collector current is allowed to flow, the self-heating can cause the internal parasitic thyristor structure to latch into conduction (the gate thus loses the ability to turn the device off). This behavior is known as the short-circuit, shoot-through, or latch-up current limit. The maximum current that can flow (limited only by the device impedance) before latch-up occurs must usually be limited to less than 10 μs in duration.

The behavior of IGBTs as a function of temperature is complicated. At low collector-current values, the forward-drop dependency on temperature is similar to that of bipolar transistors. At high collector-current values, the forward-drop dependency on temperature is closer to that of a MOSFET. The exact design and fabrication steps used in the production of the device play a strong role in the exact temperature behavior. Further details are available from Baliga (1987) and Hefner (1992).

IGBTs can now be obtained with hold-off voltage ratings of up to 6.3 kV and pulsed forward-current capability of over 200 A. These devices can be obtained as discrete components, or with several parallel die (to form one switch) and then several sets of switches configured into bridge or half-bridge topologies in modules. They are also


available with an integrated current-sensing feature for active monitoring of device performance.

There are two types of IGBT designs: punch-through (PT) and non-punch-through (NPT). NPT structures have no n+ buffer layer next to the p+ emitter (see Fig. 13.1.2). This means that the applied forward blocking voltage can extend the associated depletion region all the way across the n- base, causing breakdown at the p-emitter/n-base junction if the applied voltage is high enough. In a PT structure (shown in Fig. 13.1.2) the depletion region is pinned to the n+ buffer layer, thus allowing a thinner n- base (high-resistivity region) to be used in the device design. Previous-generation IGBTs have a punch-through structure designed around a p+ Si substrate with two epitaxial regions (the n- base region and the n+ buffer layer). Carrier-lifetime reduction techniques are often used in the drift region to modify the turn-off characteristics. Recently, trench-gate devices have been designed with local lifetime control in the buffer layer (Motto, 1998). High-voltage devices (>1.2 kV) have been created using a non-punch-through structure beginning with the n- base region as the substrate, upon which a shallow (transparent) p+ emitter is formed (Cotorogea, 2000). Cross sections of typical unit cells for planar-gate IGBTs are shown in Fig. 13.1.2.

Third-generation IGBTs make use of improved cell density and shallow diffusion technologies that create fast-switching devices with lower forward drops than have been achieved with previous devices. These lateral-channel structures have nearly reached their limit for improvements. New trench-gate technologies offer the promise of greatly improved operation (Santi, 2001). Trench technologies can create an almost ideal IGBT structure because it connects in series the MOSFET and a p-n diode; there is no parasitic JFET as is created by the diffused p-wells in a lateral-channel device (see Fig. 13.1.2). A simplified cross section of the trench-gate IGBT is shown in Fig. 13.1.2. The forward drop in a trench-gate device is reduced significantly from the value in a third-generation lateral-gate IGBT. For example, in devices rated for 100 A and 1200 V, the forward drop VCE is 1.8 V in a trench-gate IGBT, as compared to 2.7 V in a lateral-gate (third-generation) IGBT at the same current density, gate voltage, and temperature (Motto, 1998). Local lifetime control is obtained in the n+ buffer layer by using proton irradiation. This helps decrease the effective resistance in the n- base by increasing the on-state carrier concentration. The surface structure of the gate is such that the MOS-channel width is increased (causing a decrease in channel resistance). The trend is for devices to be of the PT type as processing technology improves. Table 13.1.3 lists some of the more important IGBT ratings and characteristics.

Bipolar Transistors. Power bipolar transistors and bipolar Darlingtons are seldom used in modern converter systems because of the amount of power required by the control signal and the limited SOA of the traditional

TABLE 13.1.3 Symbols for Some IGBT Ratings and Characteristics

Maximum Ratings
VCES      Collector-emitter voltage
VCGR      Collector-gate voltage
VGE       Gate-emitter voltage
IC        Continuous collector current
Tj        Junction temperature
PD        Maximum power dissipation

Characteristics
BVCES     Collector-emitter breakdown voltage
VGE(TH)   Gate threshold voltage
ICES      Zero gate-voltage collector current (at specified Tj and VCE value)
VCE(ON)   Collector-emitter on-voltage (at specified Tj, IC, and VGE values)
QG(ON)    On-state gate charge (at specified IC and VCE values)
tD(ON)    Turn-on delay time (for specified test)
tRI       Rise time (for specified test)
tD(OFF)   Turn-off delay time (for specified test)
tFI       Fall time (for specified test)
WOFF      Turn-off energy loss per cycle
RθJC      Junction-to-case thermal resistance


power bipolar transistor. Because of their declining use, no further discussion will be given. Further details should be obtained from manufacturers’ databooks.
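The conduction trade-off described above (a resistive MOSFET drop versus the IGBT's roughly fixed junction drop of at least one pn junction) can be made concrete with hypothetical device values; the crossover current below is an artifact of the assumed numbers, not a general figure.

```python
# Illustration of the MOSFET-versus-IGBT conduction trade-off described above.
# A MOSFET drop is resistive (I * rDS(ON)); an IGBT shows a junction "knee"
# (~1 V) plus a small resistive part. All values are hypothetical.
r_ds_on = 0.08     # ohm, MOSFET on-resistance (assumed)
v_knee = 1.0       # V, IGBT junction drop (assumed)
r_igbt = 0.02      # ohm, IGBT incremental resistance (assumed)

for i in (2.0, 5.0, 12.5, 25.0, 50.0):
    v_fet = i * r_ds_on
    v_igbt = v_knee + i * r_igbt
    better = "MOSFET" if v_fet < v_igbt else "IGBT"
    print(f"{i:5.1f} A: MOSFET {v_fet:4.2f} V, IGBT {v_igbt:4.2f} V -> {better}")
# Below the crossover (~16.7 A for these numbers) the resistive MOSFET drop is
# lower; above it the IGBT's nearly fixed knee wins, consistent with IGBTs
# dominating at higher currents and voltages.
```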

THYRISTORS

There are four major types of thyristors: (i) the silicon-controlled rectifier (SCR), (ii) the gate turn-off (GTO) thyristor, (iii) the MOS-controlled thyristor (MCT) and related forms, and (iv) the static induction thyristor (SITh). MCTs are so named because many parallel enhancement-mode MOSFET structures of one charge type are integrated into the thyristor for turn-on, and many more MOSFETs of the other charge type are integrated into the thyristor for turn-off. A static induction thyristor (SITh), or field-controlled thyristor (FCTh), has essentially the same construction as a power diode with a gate structure that can pinch off anode current flow. The advantage of using MCTs, derivative forms of the MCT, or SIThs is that they are essentially voltage-controlled devices (i.e., little control current is required for turn-on or turn-off) and therefore require simplified control circuits attached to the gate electrode (Hudgins, 1993). Less important types of thyristors include the Triac (a pair of low-power, anti-parallel SCRs integrated together to form a bidirectional current switch) and the programmable unijunction transistor (PUT).

A thyristor used in some ac power circuits (50 or 60 Hz in commercial utilities, or 400 Hz in aircraft) to control ac power flow can be made to optimize internal power loss at the expense of switching speed. These thyristors are called phase-control devices because they are generally turned from a forward-blocking into a forward-conducting state at some specified phase angle of the applied sinusoidal anode-cathode voltage waveform. A second class of thyristors is used in association with dc sources or in converting ac power at one amplitude and frequency into ac power at another amplitude and frequency, and must generally switch on and off relatively quickly. The thyristors used in such applications are often referred to as inverter thyristors.

SCRs and GTOs. The voltage hold-off ratings for SCRs and GTOs are above 6 kV, and continuing development will push them higher. The pulsed current rating for these devices is easily tens of kiloamperes. A gate signal of 0.1 to 100 A peak is typical for triggering an SCR or GTO from forward blocking into forward conduction. These thyristors are being produced in silicon with diameters greater than 100 mm. The large wafer area places a limit on the rate of rise of anode current, and hence a di/dt limit (rating) is specified. The depletion capacitances around the pn junctions, in particular the center junction, limit the rate of rise of forward voltage that can be applied even after all the stored charge introduced during conduction is removed. The associated displacement current under application of forward voltage during the thyristor blocking state sets a dv/dt limit.

Some effort to improve the voltage hold-off capability and overvoltage protection of conventional SCRs is under way by incorporating a lateral high-resistivity region to help dissipate the energy during breakover. Most effort, though, is being placed in the further development of high-performance GTO thyristors, and to a lesser extent in optically triggered structures that feature gate-circuit isolation. Optically gated thyristors have traditionally been used in power-utility applications, where series stacks of devices are necessary to achieve the high voltages required.
Isolation between gate-drive circuits, in applications such as static VAR compensators and high-voltage dc-to-ac inverters, has driven the development of this class of devices. One of the most recent devices can block 6 kV forward and reverse, conduct 2.5 kA average current, and maintain a di/dt capability of 300 A/μs and a dv/dt capability of 3000 V/μs, with a required trigger power of 10 mW.

High-voltage GTO thyristors with symmetric blocking capability require thick n-base regions to support the high electric field. The addition of an n+ buffer layer next to the p+ anode allows high voltage-blocking capability and yet produces a low forward voltage drop during conduction because of the thinner n- base required. Many other design modifications have been introduced by manufacturers, so that GTOs with a forward blocking capability of around 8 kV and anode conduction of 1 kA have been produced. Also, a reverse-conducting GTO has been fabricated that can block 6 kV in the forward direction, interrupt a peak current of 3 kA, and has a turn-off gain of about 5.

A modified GTO structure, called a gate-commutated thyristor (GCT), has been designed and manufactured that commutates all of the cathode current away from the cathode region and diverts it out the gate contact. The GCT is similar to a GTO in structure except that it has a low-loss n-buffer region between the n-base and


p-emitter. The GCT device package is designed to have very low parasitic inductance and is integrated with a specially designed gate-drive circuit (IGCT). The specially designed gate drive and ring-gate package allow the GCT to be operated without a snubber circuit and to switch with higher anode di/dt than a similar GTO. At blocking voltages of 4.5 kV and higher, the IGCT seems to provide better performance than a conventional GTO. The speed at which the cathode current is diverted to the gate (diGQ/dt) is directly related to the peak snubberless turn-off capability of the GCT. The gate-drive circuit can sink current for turn-off at diGQ/dt values in excess of 7000 A/μs. This hard gate drive results in a low charge-storage time of about 1 μs. The low storage time and the fail-short mode make the IGCT attractive for high-voltage series applications.

The bidirectional control thyristor (BCT) is an integrated assembly of two anti-parallel thyristors on one Si wafer. The intended applications for this switch are VAR compensators, static switches, soft starters, and motor drives. These devices are rated up to 6.5 kV blocking. Cross-talk between the two halves has been minimized. The small gate-cathode periphery necessarily restricts the BCT to low-frequency applications because of its di/dt limit.

The continuing improvement in GTO performance has caused a decline in the use of SCRs, except at the very highest power levels. In addition, the improvement in IGBT design further reduces the attractiveness of SCRs. These developments make the future use of SCRs likely to diminish.

MCTs. There is a p-channel and an n-channel MOSFET integrated into the MCT, one FET structure for turn-on and one for turn-off. The MCT itself comes in two varieties: p-type (gate voltage applied with respect to the anode) and n-type (gate voltage applied with respect to the cathode). Just as in a GTO, the MCT has a maximum controllable cathode-current value. The inherent optimization for good switching and forward-conduction characteristics makes the MCT unable to block reverse applied voltages.

MCTs are presently limited to operation at medium power levels. The variability in fabrication of the turn-off FET structure continues to limit the performance of MCTs, particularly the current-interruption capability, though these devices can handle two to five times the conduction current density of IGBTs. All MCT device designs center on the problem of current-interruption capability. Turn-on is relatively simple by comparison, with turn-on and conduction properties approaching the one-dimensional thyristor limit. Other types of integrated MOS-thyristor structures can be operated at high power levels, but these devices are not commonly available or are produced for specific applications.

Typical MCT ratings are 1 kV forward blocking and a peak controllable current of 75 A. A recent version of the traditional MCT design is a diffusion-doped (instead of the usual epitaxially grown) device. These are rated for 3 kV forward blocking, have a forward drop of 2.5 V at 100 A, and are capable of interrupting around 300 A with a recovery time of 5 μs. An MCT that uses trench-gate technology, called a depletion-mode thyristor (DMT), has been designed. A similar device is the base-resistance-controlled thyristor (BRT). Here, a p-channel MOSFET is integrated into the n-drift region of the MCT. These devices operate in an "IGBT" mode until the current is large enough to cause the thyristor structure to latch.
Another MCT-type structure is called an emitter switched thyristor (EST). It uses an integrated lateral MOSFET to connect a floating thyristor n-emitter region to an n+ thyristor cathode region. All thyristor current flows through the lateral MOSFET, so the MOSFET can control the thyristor current. Integrating an IGBT into a thyristor structure has also been proposed. One device, called an IGBT-triggered thyristor (ITT), is similar in structure and operation to the EST. The most fully developed EST, however, is the dual-gate emitter switched thyristor (DG-EST). The device has two gate electrodes: one gate controls an integrated IGBT section, while the other controls a thyristor section. The DG-EST is intended to be switched in IGBT mode, to exploit the controllability and snubberless capability of an IGBT. During forward conduction, the thyristor section takes over, so the DG-EST gains the low forward drop and latching nature of a thyristor.

Static Induction Thyristors. A static induction thyristor (SITh), or field controlled thyristor (FCTh), is essentially a pin diode with a gate structure that can pinch off the anode current flow. High-power SIThs have a subsurface (buried-gate) structure that allows larger cathode areas to be used, and hence larger current densities to be conducted. Other SITh configurations have surface gate structures. Planar gate devices have been fabricated with blocking capabilities of up to 1.2 kV and conduction currents of 200 A, while step-gate (trench-gate) structures have been produced that are able to block up to 4 kV and conduct 400 A. Similar devices with a "Verigrid" structure have been demonstrated that can block 2 kV and conduct 200 A, with claims of up to 3.5 kV blocking and 200 A conduction. Buried-gate devices that block 2.5 kV and conduct 300 A have also been fabricated.

An integrated light-triggered and light-quenched SITh has been produced that can block 1.2 kV and conduct up to 20 A (at a forward drop of 2.5 V). This device is an integration of a normally off buried-gate static induction photothyristor and a normally off p-channel Darlington surface-gate static induction phototransistor. The optical trigger and quenching powers required are less than 5 and 0.2 mW, respectively.

Thyristor Behavior. The thyristor is a three-terminal semiconductor device comprising four layers of silicon so as to form three separate pn junctions. In contrast to the linear relation that exists between load and control currents in a transistor, the thyristor is bistable. The four-layer structure of the thyristor is shown in Fig. 13.1.2. The anode and cathode terminals are connected in series with the load to which power is to be controlled. The thyristor is turned on by application of a low-power control signal between the third terminal, or gate, and the cathode (between gate and anode for a p-type MCT).

The reverse characteristic is determined by the outer two junctions, which are reverse-biased in this operating mode. With zero gate current, the forward characteristic in the off- or blocking-state is determined by the center junction, which is reverse-biased. However, if the applied voltage exceeds the forward blocking voltage, the thyristor switches to its on- or conducting state. The effect of gate current is to lower the blocking voltage at which switching takes place. This behavior can be explained in terms of the two-transistor analog shown in Fig. 13.1.2. The two transistors are regeneratively coupled so that if the sum of their current gains (α's) exceeds unity, each drives the other into saturation. In the forward blocking-state, the leakage current is small, both α's are small, and their sum is less than unity. Gate current increases the current in both transistors, increasing their α's. When the sum of the two α's equals 1, the thyristor switches to its on-state (latches).

The form of the gate-to-cathode V-I characteristic of SCRs and GTOs is similar to that of a diode. With positive gate bias, the gate-cathode junction is forward-biased and permits the flow of a large current in the presence of a low voltage drop. When negative gate voltage is applied to an SCR, the gate-cathode junction is reverse-biased and prevents the flow of current until the avalanche breakdown voltage is reached. In a GTO, a negative gate voltage is applied to provide a low-impedance path so that anode current flows out through the gate instead of out of the cathode. In this way the cathode region turns off, pulling the equivalent npn transistor out of conduction and causing the entire thyristor to return to its blocking state. The problem with the GTO is that the gate-drive circuitry is typically required to sink 5 to 10 percent of the anode current to achieve turn-off. The MCT achieves turn-off by internally diverting current through an integrated MOSFET, so switching the equivalent MOSFET requires only a voltage signal at the gate electrode.
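The latching condition can be stated quantitatively. The following relation is the standard closed-form result of the two-transistor analysis (a supplementary illustration consistent with, though not printed in, the discussion above), where IG is the gate current and ICBO1 and ICBO2 are the leakage currents of the two coupled transistors:

    I_A = \frac{\alpha_2 I_G + I_{CBO1} + I_{CBO2}}{1 - (\alpha_1 + \alpha_2)}

As α1 + α2 approaches unity, the denominator vanishes and the anode current IA is limited only by the external circuit: the device latches.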
A summary is provided in Table 13.1.4 of some of the ratings that must be considered when choosing a thyristor for a given application. Both forward and reverse repetitive and nonrepetitive voltage ratings must be considered, and a properly rated device must be chosen so that the maximum voltage ratings are never exceeded. In most cases, either forward or reverse voltage transients in excess of the nonrepetitive maximum ratings result in destruction of the device.

The maximum rms or average current ratings given are usually those that cause the junction to reach its maximum rated temperature. Because the maximum current depends on the current waveform and on thermal conditions external to the device, the rating is usually shown as a function of case temperature and conduction angle. The peak single half-cycle surge-current rating must also be considered, and in applications where the thyristor must be protected from damage by overloads, a fuse with an I²t rating smaller than the maximum rated value for the device must be used. Maximum ratings for forward and reverse gate voltage, current, and power likewise must not be exceeded.

The maximum rated operating junction temperature TJ must not be exceeded, since device performance, in particular the voltage-blocking capability, is degraded above it. Junction temperature cannot be measured directly but must be calculated from a knowledge of the steady-state thermal resistance RθJC and the average power dissipation. For transients or surges, the transient thermal impedance (ZθJC) curve must be used. The maximum average power dissipation PT is related to the maximum rated operating junction temperature and the case temperature by the steady-state thermal resistance. In general, both the maximum dissipation and its derating with increasing case temperature are provided.

The number of thyristor characteristics specified varies widely from one manufacturer to another. Some characteristics are given only as typical values or as minima or maxima, while many characteristics are displayed graphically. Table 13.1.4 summarizes some of the characteristics provided.

TABLE 13.1.4 Symbols for Some Thyristor Ratings and Characteristics

Maximum Ratings
  VRRM                  Peak repetitive reverse voltage
  VRSM (SCR & GTO)      Peak nonrepetitive reverse voltage
  VDRM                  Peak repetitive forward off-state voltage
  VDSM (SCR & GTO)      Peak nonrepetitive forward off-state voltage
  IT(RMS)               RMS forward current
  IT(AV) (IK for MCT)   Average forward current
  ITSM (IKSM for MCT)   Surge forward current
  ITGQ (IKC for MCT)    Peak controllable current
  I²t (SCR & GTO)       Nonrepetitive pulse overcurrent capability
  PT (MCT)              Maximum power dissipation
  di/dt                 Critical rate of rise of on-state current
  dv/dt                 Critical rate of rise of off-state voltage
  PGM (PFGM for GTO)    Peak gate forward power dissipation
  PRGM (GTO)            Peak gate reverse power dissipation
  VFGM                  Peak forward gate voltage
  VRGM                  Peak reverse gate voltage
  IFGM (SCR & GTO)      Peak forward gate current
  IRGM (GTO)            Peak reverse gate current
  Tj                    Junction temperature

Characteristics
  VTM                   On-state voltage drop (at specified temperature and forward current)
  IDRM                  Maximum forward off-state current (at specified temperature and forward voltage)
  IRRM                  Maximum reverse blocking current (at specified temperature and reverse voltage)
  CISS (MCT)            Input capacitance (at specified temperature and gate and anode voltages)
  VGT (SCR & GTO)       Gate trigger voltage (at specified temperature and forward applied voltage)
  VGD (SCR & GTO)       Gate nontrigger voltage (at specified temperature and forward applied voltage)
  IGT (SCR & GTO)       Gate trigger current (at specified temperature and forward applied voltage)
  tgt (GTO)             Turn-on time (under specified switching conditions)
  tq (SCR & GTO)        Turn-off time (under specified switching conditions)
  tD(ON) (MCT)          Turn-on delay time (for specified test)
  tRI (MCT)             Rise time (for specified test)
  tD(OFF) (MCT)         Turn-off delay time (for specified test)
  tFI (MCT)             Fall time (for specified test)
  WOFF (MCT)            Turn-off energy loss per cycle
  RθJC                  Junction-to-case thermal resistance

Thyristor types shown in parentheses in the table indicate a characteristic unique to that device or those devices. Gate conditions of both voltage and current to ensure either nontriggered or triggered device operation are included.

The turn-on and turn-off transients of the thyristor are characterized by switching times such as the turn-off time listed in Table 13.1.4. The turn-on transient can be divided into three intervals: the gate-delay interval, turn-on of the initial area, and the spreading interval. The gate-delay interval is simply the time between application of a turn-on pulse at the gate and the time the initial area turns on. This delay decreases with increasing gate drive current and is of the order of a few microseconds. The second interval, the time required for turn-on of the initial area, is quite short, typically less than 1 µs. In general, the initial area turned on is a small percentage of the total useful device area. After the initial area turns on, conduction spreads (the spreading interval) throughout the device in tens of microseconds. It is during this spreading interval that the di/dt limit must not be exceeded. Typical di/dt values range from 100 to 1000 A/µs. Special inverter-type SCRs and GTOs are made that have increased switching speed (at the expense of higher forward voltage drop during conduction), with di/dt values in the range of 2000 A/µs. The rate

of application of forward voltage is also restricted, by the dv/dt characteristic. Typical dv/dt values range from 100 to 1000 V/µs.

Thyristors are available in a wide variety of packages, from small plastic ones for low power (e.g., TO-247), to stud-mount packages for medium power, to press-packs (also called flat-packs) for the highest-power devices. Press-packs must be mounted under pressure to obtain proper electrical and thermal contact between the device and the external metal electrodes; special force-calibrated clamps are made for this purpose.

OTHER POWER SEMICONDUCTOR DEVICES

Semiconductor materials such as silicon carbide (SiC), gallium arsenide (GaAs), and gallium nitride (GaN) are being used to develop pn-junction and Schottky diodes, power MOSFET structures, some thyristors, and other switches. SiC diodes are commercially available now. No other commercial power devices made from these materials yet exist, but they will likely become available in the future. Further information about advanced power semiconductor materials and device structures can be found in Baliga (1996) and Hudgins (1993, 1995, 2003).

BIBLIOGRAPHY

Azuma, M., and M. Kurata, "GTO thyristors," IEEE Proc., pp. 419–427, April 1988.
Baliga, B. J., "Modern Power Devices," Wiley, 1987.
Baliga, B. J., "Power Semiconductor Devices," PWS, 1996.
Busatto, G., G. F. Vitale, G. Ferla, A. Galluzzo, and M. Melito, "Comparative analysis of power bipolar devices," IEEE PESC Rec., pp. 147–153, 1990.
Cotorogea, M., A. Claudio, and J. Aguayo, "Analysis by measurements and circuit simulations of the PT- and NPT-IGBT under different short-circuit conditions," IEEE APEC Rec., pp. 1115–1121, 2000.
Fujihira, T., and Y. Miyasaka, "Simulated superior performances of semiconductor superjunction devices," IEEE Proc. ISPSD, pp. 423–426, 1998.
Ghandhi, S. K., "Semiconductor Power Devices," Wiley, 1977.
Hefner, A. R., "A dynamic electro-thermal model for the IGBT," IEEE Industry Appl. Soc. Ann. Mtg. Rec., pp. 1094–1104, 1992.
Hower, P. L., "Power semiconductor devices: An overview," IEEE Proc., Vol. 76, pp. 335–342, April 1988.
Hudgins, J. L., "A review of modern power semiconductor devices," Microelect. J., Vol. 24, pp. 41–54, 1993.
Hudgins, J. L., "Streamer model for ionization growth in a photoconductive power switch," IEEE Trans. PEL, Vol. 10, pp. 615–620, September 1995.
Hudgins, J. L., G. S. Simin, E. Santi, and M. A. Khan, "A new assessment of wide bandgap semiconductors for power devices," IEEE Trans. PEL, Vol. 18, pp. 907–914, May 2003.
Motto, E. R., J. F. Donlon, H. Takahashi, M. Tabata, and H. Iwamoto, "Characteristics of a 1200 V PT IGBT with trench gate and local lifetime control," IEEE IAS Annual Mtg. Rec., pp. 811–816, 1998.
Nishizawa, J., T. Terasaki, and J. Shibata, "Field effect transistor versus analog transistor: static induction transistor," IEEE Trans. ED, Vol. ED-22, pp. 185–197, 1975.
Santi, E., A. Caiafa, X. Kang, J. L. Hudgins, P. R. Palmer, D. Goodwine, and A. Monti, "Temperature effects on trench-gate IGBTs," IEEE IAS Annual Mtg. Rec., pp. 1931–1937, 2001.
Sze, S. M., "Physics of Semiconductor Devices," 2nd ed., Wiley, 1981.
Venkataramanan, G., A. Mertens, H. Skudelny, and H. Grunig, "Switching characteristics of field controlled thyristors," Proc. EPE–MADEP ’91, pp. 220–225, 1991.

CHAPTER 13.2

NATURALLY COMMUTATED CONVERTERS
Arthur W. Kelley

INTRODUCTION

The applications for the family of naturally commutated converters embrace a very wide range, including dc power supplies for electronic equipment, battery chargers, dc power supplies delivering many thousands of amperes for electrochemical and other industrial processes, high-performance reversing drives for dc machines rated at thousands of horsepower, and high-voltage dc transmission at the gigawatt power level. The basic feature common to this class of converters is that one set of terminals is connected to an ac voltage source. The ac source causes natural commutation of the converter power electronic devices. A second set of terminals operates with dc voltage and current.

This class of converters is divided in function depending on the direction of power flow. In ac-to-dc rectification, the ac source, typically the utility line voltage, supplies power to the converter, which in turn supplies power to a dc load. In dc-to-ac inversion, a dc source, typically a battery or dc generator, provides power to the converter, which in turn transfers the power to the ac source, again usually the utility line voltage. Because natural commutation synchronizes the power semiconductor device turn-on and turn-off to the ac source, this converter is also known as a synchronous inverter or a line-commutated inverter. This process is different from supplying power to an ac load, which usually requires forced commutation.

The power electronic devices in these converters are typically either silicon controlled rectifiers (SCRs) or diodes. To simplify the discussion that follows, the SCRs and diodes are assumed to (1) conduct forward current with zero forward voltage drop, (2) block reverse voltage with zero leakage current, and (3) switch instantaneously between conduction and blocking. Furthermore, stray resistive loss is ignored and balanced three-phase ac sources are assumed.

Converter Topologies

Basic Topologies. The number of different converter topologies is very large (Schaeffer, 1965; Pelly, 1971; Dewan, 1975; Rashid, 1993). Using SCRs as the power electronic devices, Table 13.2.1 illustrates four basic topologies from which many others are derived. These ac-to-dc converters are rectifiers that provide a dc voltage VO to a load. The rectifier often uses an output filter inductor LO and capacitor CO, but one or the other or both are often omitted. Rectifiers are usually connected to the ac source through a transformer. Note that the transformer is often utility equipment, located separately from the rectifier. The transformer adds a series leakage inductance LS, which is often detrimental to rectifier operation.

Rectifier topologies are classified by whether the rectifier operates from a single- or three-phase source and whether the rectifier uses a bridge connection or a transformer midpoint connection. The single-phase bridge rectifier shown in Table 13.2.1a requires four SCRs and a two-winding transformer.

TABLE 13.2.1 Basic Converter Topologies

The single-phase midpoint rectifier shown in Table 13.2.1b requires only two SCRs but requires a transformer with a center-tapped secondary to provide the midpoint connection. The three-phase bridge rectifier shown in Table 13.2.1c requires six SCRs and three two-winding transformers. The three-phase midpoint rectifier shown in Table 13.2.1d requires three SCRs and three transformers using a Y-connected "zig-zag" secondary. The Y-connected secondary provides the necessary midpoint connection, and the zig-zag winding prevents unidirectional secondary winding currents from causing magnetic saturation of the transformers. The bridge rectifier is better suited to using the simple connection provided by the typical utility transformer. For the same power delivered to the load, the bridge rectifier often requires a smaller transformer. Therefore, in the absence of other constraints, the bridge rectifier is often preferred over the midpoint rectifier.

Pulse Number. Converters are also classified by their pulse number q, an integer that is the number of current pulses appearing in the rectifier output current waveform iX per cycle of the ac source voltage. Higher pulse number rectifiers generally have higher performance, but usually with a penalty of increased complexity. Of the rectifiers shown in Table 13.2.1, both single-phase rectifiers are two-pulse converters (q = 2) with one current pulse in iX for each half-cycle of the ac source voltage. The three-phase midpoint rectifier is a three-pulse converter (q = 3) with one current pulse in iX for each cycle of each phase of the three-phase ac source voltage. The three-phase bridge rectifier is a six-pulse converter (q = 6) with one current pulse in iX for each half-cycle of each phase of the three-phase ac source voltage.

BASIC CONVERTER OPERATION

Given a certain operating point, rectifier operation and performance are dramatically influenced by the values of the source inductance LS, output filter inductance LO, and output filter capacitance CO.

Operation with Negligible Ac Source Inductance. Figures 13.2.1 and 13.2.2 show example time waveforms for the single- and three-phase bridge rectifiers of Table 13.2.1a and 13.2.1c, respectively.

FIGURE 13.2.1 Time waveforms for single-phase bridge rectifier with α = 40°: (a) CCM and (b) DCM.

FIGURE 13.2.2 Time waveforms for three-phase bridge rectifier with α = 20°: (a) CCM and (b) DCM.

In these examples LS is comparatively small and its influence is neglected. The value of CO is relatively large, so the ripple in the output voltage VO is relatively small. Operation of single- and three-phase phase-controlled rectifiers is described in detail in Kelley (1990).

Figure 13.2.1a shows time waveforms for the single-phase bridge rectifier when the current iX in LO flows continuously without ever falling to zero; the rectifier is then said to be operating in the continuous conduction mode (CCM). The CCM occurs for relatively large LO, heavy loads, and small α. Figure 13.2.1b shows time waveforms for the single-phase bridge rectifier when iX drops to zero twice each cycle; the rectifier is then said to be operating in the discontinuous conduction mode (DCM). The DCM occurs for relatively small LO, light loads, and large α.

Figure 13.2.1 also shows the conduction intervals for SCRs Q1 to Q4 and the rectifier voltage vX. A controller, not shown in Fig. 13.2.1, generates gating pulses for the SCRs. The controller gates each SCR at a firing angle α (alpha) with respect to a reference that is the point in time at which the SCR is first forward biased. The SCR ceases conduction at the extinction angle β (beta). The reference, α, and β for Q1 are illustrated in Fig. 13.2.1. The SCR conduction angle γ (gamma) is the difference between β and α. In DCM the SCR ceases conduction because iX falls naturally to zero, while in CCM the SCR ceases conduction even though iX is not zero, because the opposing SCR is gated and begins conducting iX. Therefore in CCM, γ is limited to a maximum of one-half of an ac source voltage cycle, while in DCM γ depends on LO, the load, and α.

Note that vX equals vS when Q1 and Q4 are conducting and that vX equals –vS when Q2 and Q3 are conducting. The output filter LO and CO reduce the ripple in vX and deliver a relatively ripple-free voltage VO to the load. The firing angle α determines the composition of vX and ultimately the value of VO. Increasing α reduces VO and is the mechanism by which the controller regulates VO against changes in ac source voltage and load. This method of output voltage regulation is referred to as phase control, and a rectifier using it is said to be a phase-controlled rectifier. Since in CCM the conduction angle is always one-half of a source voltage cycle, the dc output voltage is easily found from vX as

    V_O = \frac{2\sqrt{2}}{\pi} V_S \cos\alpha        (1)

where VS is the rms value of the transformer secondary voltage vS. Unfortunately, the conduction angle in DCM depends on LO, the load, and α, and VO cannot be calculated except by numerical methods.

For the three-phase bridge rectifier, Figs. 13.2.2a and 13.2.2b show time waveforms for CCM and DCM, respectively. Operation is similar to the single-phase rectifier except that vX equals each of the six line-to-line voltages—vAB, vAC, vBC, vBA, vCA, and vCB—in succession. In CCM, the SCR conduction angle γ is one-third of an ac source voltage cycle, and in DCM γ depends on LO, the load, and α. In CCM the dc output voltage VO is found from vX as

    V_O = \frac{3\sqrt{3}\sqrt{2}}{\pi} V_S \cos\alpha        (2)

where VS is the rms value of the transformer secondary line-to-neutral voltage. In DCM, the value of VO must be calculated by numerical means. To produce a ripple-free output voltage VO, the time waveform of vX for the three-phase rectifier naturally requires less filtering than the time waveform of vX for the single-phase rectifier.
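As a quick numerical check of Eqs. (1) and (2), the following minimal Python sketch (not part of the handbook text; the voltages and firing angles are illustrative) evaluates the CCM average output voltage for both bridge rectifiers:

    import math

    def vo_single_phase(v_s, alpha_deg):
        # Eq. (1): v_s is the secondary rms voltage
        return (2.0 * math.sqrt(2.0) / math.pi) * v_s * math.cos(math.radians(alpha_deg))

    def vo_three_phase(v_s, alpha_deg):
        # Eq. (2): v_s is the secondary line-to-neutral rms voltage
        return (3.0 * math.sqrt(3.0) * math.sqrt(2.0) / math.pi) * v_s * math.cos(math.radians(alpha_deg))

    print(vo_single_phase(120.0, 40.0))   # ~82.8 V
    print(vo_three_phase(120.0, 20.0))    # ~264 V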

Therefore, if a three-phase ac source is available, a three-phase rectifier is always preferred over a single-phase rectifier.

Operation with Appreciable Ac Source Inductance. The preceding discussion assumes that the value of LS is small and does not influence circuit operation. In practice the effect of LS must often be considered. The three-phase rectifier CCM time waveforms of Fig. 13.2.2 are repeated in Fig. 13.2.3, but for an appreciable LS.

FIGURE 13.2.3 Time waveforms for three-phase bridge rectifier with appreciable LS.

Since iX is always nonzero in CCM, the principal effect of LS is to prevent instantaneous transfer of iX from one transformer secondary winding to the next as the SCRs are gated in succession. This process is called commutation, and the interval during which it occurs is called commutation overlap. For example, at some point in time Q1 is conducting iSA equal to iX, and Q3 is gated by the controller. Current iSA through Q1 falls while iSB through Q3 rises. During this interval both Q1 and Q3 conduct simultaneously, and the sum of iSA and iSB equals iX. As a result, transformer secondary vSA is directly connected to transformer secondary vSB, effectively creating a line-to-line short circuit. This connection persists until iSA falls to zero and Q1 ceases conduction. The duration of the connection is the commutation angle µ (mu). During this interval vSA experiences a positive-going voltage "notch" while vSB experiences a negative-going voltage notch. The enclosed area of the positive-going notch equals the enclosed area of the negative-going notch and represents the flux linkage or "volt-seconds" necessary to produce a change in current through LS equal to iX. If LO is sufficiently large that iX is relatively constant with value IX during the time that both SCRs conduct, then the notch area is used to find

    \cos\alpha - \cos(\mu + \alpha) = \sqrt{\frac{2}{3}}\,\frac{2\pi f L_S I_X}{V_S}        (3)

which can be solved numerically for µ. Note that the commutation angle is always zero in DCM, since iX is zero when each SCR is gated to begin conduction.
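Equation (3) has no closed-form solution for µ, but a short numerical sketch makes the computation concrete. The following Python fragment (illustrative only; the operating values are assumed, not taken from the text) solves Eq. (3) by bisection:

    import math

    def commutation_angle(f, L_S, I_X, V_S, alpha_deg):
        # Right-hand side of Eq. (3)
        rhs = math.sqrt(2.0 / 3.0) * (2.0 * math.pi * f * L_S * I_X / V_S)
        a = math.radians(alpha_deg)
        # cos(a) - cos(mu + a) rises monotonically as mu grows on [0, pi - a],
        # so simple bisection locates the commutation angle mu.
        lo, hi = 0.0, math.pi - a
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if math.cos(a) - math.cos(mid + a) < rhs:
                lo = mid
            else:
                hi = mid
        return math.degrees(0.5 * (lo + hi))

    # Assumed example: 60-Hz source, 100-uH leakage inductance, 100-A load,
    # 230-V rms line-to-neutral secondary, alpha = 20 degrees -> mu of about 2 degrees
    print(commutation_angle(60.0, 100e-6, 100.0, 230.0, 20.0))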

CONVERTER POWER FACTOR

Source Current Harmonic Composition. The time waveforms of the prior section show that the rectifier is a nonlinear load that draws a highly distorted, nonsinusoidal current iS. A Fourier series is used to decompose iS into a fundamental-frequency component with rms value IS(1) and phase angle φS(1) with respect to vS, and into harmonic-frequency components with rms values IS(h), where h is an integer representing the harmonic number. In general, the IS(h) are zero for even h. Furthermore, depending on the converter pulse number q, certain IS(h) are also zero for some odd h. Apart from h = 1, for which IS(1) is always nonzero, the IS(h) are nonzero for

    h = kq \pm 1, \qquad k \text{ an integer} \geq 1        (4)
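As a small illustration of Eq. (4) (a sketch, not part of the original text), the following Python fragment lists the harmonic numbers predicted to be nonzero for a given pulse number q; it reproduces the q = 2 and q = 6 series quoted in the next paragraph:

    def nonzero_harmonics(q, h_max=25):
        # Fundamental (h = 1) plus h = k*q +/- 1 for k = 1, 2, ... per Eq. (4)
        h = {1}
        k = 1
        while k * q - 1 <= h_max:
            h.update({k * q - 1, k * q + 1})
            k += 1
        return sorted(n for n in h if n <= h_max)

    print(nonzero_harmonics(2))   # single-phase bridge: 1, 3, 5, 7, 9, ...
    print(nonzero_harmonics(6))   # three-phase bridge: 1, 5, 7, 11, 13, ...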

Therefore, harmonic currents for certain harmonic numbers are eliminated at higher pulse numbers. For example, the single-phase bridge rectifier with q = 2 produces nonzero IS(h) for h = 1, 3, 5, 7, 9, . . . , while the three-phase bridge rectifier with q = 6 produces nonzero IS(h) for h = 1, 5, 7, 11, 13, . . . .

If rectifier operation is unbalanced, then harmonics are produced for all h. An unbalanced condition can result from asymmetrical gating of the SCRs or from voltage or impedance unbalance of the ac source. The effect is particularly pronounced for three-phase rectifiers with a comparatively small LO and a comparatively large CO, since these rectifiers act like "peak detectors" and CO charges to the point where VO approaches the peak value of the line-to-line voltage. One phase need be only several percent below the other two for it to conduct a greatly reduced current and shift most of the current to the other two phases. An unbalanced condition is always evident from the waveform of iX, because the heights of the pulses are not all the same.

Power Factor. The rms value IS of iS is found from

    I_S = \sqrt{I_{S(1)}^2 + \sum_{h>1} I_{S(h)}^2}        (5)

The ac source is rated for apparent power S, which is the product of VS and IS (in volt-amperes, VA). However, the source delivers real input power PI (in watts, W), which the rectifier converts to dc and supplies to the load. The total power factor PF is the ratio of the real input power to the apparent power supplied by the ac source,

    PF = \frac{P_I}{S} = \frac{P_I}{V_S I_S}        (6)

and measures the fraction of the available apparent power actually delivered to the rectifier. The source voltage vS is an undistorted sine wave only if LS is negligible. In this case power is delivered only at the fundamental frequency, so that

    P_I = V_S I_{S(1)} \cos\phi_{S(1)}        (7)

Note that the harmonics IS(h) draw apparent power from the source by increasing IS but do not deliver real power to the rectifier. Using this assumption, the expression for power factor reduces to

    PF = \cos\phi_{S(1)}\,\frac{I_{S(1)}}{I_S}        (8)

The displacement power factor cos φS(1) is the traditional power factor used in electric power systems for sinusoidal operation and is unity when the fundamental of iS is in phase with vS. The purity factor IS(1)/IS is unity when iS is a pure sine wave, so that the rms values of the IS(h) are zero and IS equals IS(1). The distortion of iS is often and equivalently represented by the total harmonic distortion for current, THDi:

    THD_i = 100\sqrt{\sum_{h>1}\left(\frac{I_{S(h)}}{I_{S(1)}}\right)^2} = 100\sqrt{\frac{1}{(I_{S(1)}/I_S)^2} - 1} \qquad \text{(expressed in percent)}        (9)
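The chain from harmonic content to power factor can be exercised numerically. This minimal Python sketch implements Eqs. (5), (8), and (9); the harmonic spectrum and fundamental phase angle used here are invented for illustration:

    import math

    def pf_and_thd(i_s, phi1_deg):
        # i_s: dict {harmonic number h: rms current I_S(h)}; phi1_deg: phase of fundamental
        i1 = i_s[1]
        i_rms = math.sqrt(sum(i ** 2 for i in i_s.values()))   # Eq. (5)
        purity = i1 / i_rms                                    # purity factor I_S(1)/I_S
        thd_i = 100.0 * math.sqrt(1.0 / purity ** 2 - 1.0)     # Eq. (9), percent
        pf = math.cos(math.radians(phi1_deg)) * purity         # Eq. (8)
        return i_rms, purity, thd_i, pf

    # Six-pulse-like spectrum: fundamental plus 5th and 7th harmonics (made-up values)
    print(pf_and_thd({1: 10.0, 5: 2.0, 7: 1.4}, phi1_deg=20.0))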

The purity factor IS(1)/IS is also called the distortion power factor, which is easily confused with the total harmonic distortion THDi. The theoretical maximum power factor for the single-phase bridge rectifier is 0.90, which occurs for α = 0° and usually requires an uneconomically large value of LO; the actual power factor often ranges from 0.5 to 0.75. The theoretical maximum power factor for the three-phase bridge rectifier is 0.96, which also occurs for α = 0°. Because the three-phase bridge rectifier requires less filtering, it is often possible to approach this theoretical maximum with an economical value of LO. However, for cost reasons, LO is often omitted in both the single- and three-phase rectifiers, which dramatically reduces the power factor and leaves it dependent on the value of LS.

Source Voltage Distortion and Power Quality. The time waveforms of Fig. 13.2.3 show that with appreciable LS the rectifier distorts the source voltage vS supplying it. A Fourier series is also used to represent vS as a fundamental voltage of rms value VS(1) and harmonic voltages of rms values VS(h). The distortion of vS is often represented by the total harmonic distortion for voltage, THDv:

    THD_v = 100\sqrt{\sum_{h>1}\left(\frac{V_{S(h)}}{V_{S(1)}}\right)^2} \qquad \text{(expressed in percent)}        (10)

Note that the definition of power factor, Eq. (6), remains valid for appreciable LS and distorted vS, but Eq. (8) is strictly valid only when LS is negligible and vS is undistorted.

Voltage distortion can cause problems for other loads sharing the rectifier's ac voltage source. Computer-based loads, which have become very common, appear to be particularly sensitive. Issues of this kind have been receiving increased attention and fall under the general heading of power quality. Increasingly strict

power factor and harmonic current limits are being placed on ac-to-dc converters (IEEE-519, 1992; IEC-1000, 1995). In particular, limits are often specified on the total harmonic distortion of the current THDi, on the rms values IS(h) of the harmonics, and on the rms values of the harmonics relative to the fundamental, IS(h)/IS(1). These limits present new challenges to the designers of ac-to-dc converters.

ADDITIONAL CONVERTER TOPOLOGIES

This section summarizes the large number of converters that are based on the rectifiers of Table 13.2.1. These converters are shown in Table 13.2.2.

Uncontrolled Diode Rectifier. Replacing the SCRs with diodes produces an uncontrolled rectifier, as shown in Table 13.2.2a. In contrast to the SCRs, which are gated by a controller, the diodes begin conduction when initially forward biased by the circuit, so an uncontrolled diode rectifier behaves like a phase-controlled rectifier operated with α = 0°. Details of uncontrolled diode rectifier operation are described in Kelley (1992).

Half-Controlled Bridge Rectifier. In the half-controlled bridge rectifier the even-numbered SCRs (Q2 and Q4 for the single-phase rectifier, and Q2, Q4, and Q6 for the three-phase rectifier) are replaced with diodes, as shown in Table 13.2.2b. The remaining odd-numbered SCRs (Q1 and Q3 for the single-phase rectifier, and Q1, Q3, and Q5 for the three-phase rectifier) are phase controlled to regulate the dc output voltage VO. This substitution is advantageous because diodes are cheaper than SCRs and because the cathodes of the remaining SCRs are connected to a common point, which simplifies SCR gating. Note that the diodes begin conduction when first forward biased, while the SCRs begin conduction only after being gated while under forward bias. As a result, during a certain portion of each cycle, iX freewheels through the series connection of a diode and an SCR, thereby reducing iS to zero. For example, in the single-phase bridge rectifier iX freewheels through Q1 and D2 for one part of the cycle and through Q3 and D4 for another part of the cycle. In the three-phase rectifier iX freewheels through Q1 and D2, Q3 and D4, and Q5 and D6 during different parts of the cycle. This freewheeling action prevents vX from changing polarity and improves the rectifier power factor as α increases and VO decreases.

Freewheeling Diode. The same effect is achieved if a freewheeling diode DX is connected across terminals 1 and 2 of the rectifier, as shown in Table 13.2.2c. The freewheeling diode is used with both the bridge and midpoint rectifier connections.

Dc Motor Drive. Any of the phase-controlled rectifiers described above can be used as a dc motor drive by connecting the motor armature across terminals 1 and 2, as shown in Table 13.2.2d. Phase control of the SCR firing angle α controls the motor speed.

Battery Charger. Phase-controlled rectifiers are widely used as battery chargers, as shown in Table 13.2.2e. Phase control of the SCR firing angle α regulates the battery charging current.

Line-Commutated Inverter. A line-commutated inverter transfers power from the dc terminals 1 and 2 of the converter to the ac source. As shown in Table 13.2.2f, the dc terminals are connected to a dc source of power such as a dc generator or a battery. The polarity of each SCR is reversed and the rectifier is operated with α > 90°. This circuit is called a line-commutated inverter or a synchronous inverter. Note that the half-controlled bridge and the freewheeling diode cannot be used with a line-commutated inverter because they prevent a change in the polarity of vX. Operation with α > 90° causes the majority of the positive half-cycle of iS to coincide with the negative half-cycle of vS; similarly, the negative half-cycle of iS coincides with the positive half-cycle of vS. It is this mechanism that, on average, causes power to flow from the dc source into the ac source.
In principle α could approach 180°, but in practice α must be limited to 160° or less to permit sufficient time for the SCRs to stop conducting and regain forward-voltage-blocking capability before forward voltage is reapplied to them. This requirement is particularly important when LS is appreciable.

TABLE 13.2.2 Additional Converter Topologies


Alternate Three-Phase Transformer Connections. Both primary and secondary transformer windings may be either Y-connected or ∆-connected, as shown in Table 13.2.2g, except for the midpoint connection, which requires a Y-connected secondary winding. If the connection is Y-Y or ∆-∆, the waveform of secondary current iS is scaled by the transformer turn ratio to become the waveform of primary current iP, so the secondary current fundamental IS(1) and harmonics IS(h), when scaled by the turn ratio, become the primary fundamental IP(1) and harmonics IP(h). If the transformer connection is Y-∆ or ∆-Y, the rms magnitudes scale over in the same way; however, the Y-∆ and ∆-Y connections introduce different phase shifts for each harmonic, so that the primary current waveform iP differs in shape from the secondary current waveform iS. Rectifier power factor remains unchanged, but this phase shift is used to produce harmonic cancellation in rectifiers with high pulse numbers, as described subsequently.

Bidirectional Converter. Many applications require bidirectional power flow from a single converter. Table 13.2.2h illustrates one example in which a phase-controlled rectifier is effectively connected in parallel with a line-commutated inverter by replacing each SCR with a pair of SCRs connected in antiparallel. The load is replaced either by a battery or by a dc motor. In the bidirectional converter, one polarity of SCRs is used to transfer energy from the ac source to the battery or motor, while the opposite polarity of SCRs is used to reverse the power flow and transfer energy from the battery or motor to the ac source. For example, using the battery, the converter operates as a battery charger to store energy when demand on the utility is low, and at a subsequent time the converter operates as a line-commutated inverter to supply energy

when demand on the utility is high. Using a dc motor, the converter operates as a dc motor drive to supply power to a rotating load. Depending on the direction of motor rotation and on which polarity of SCRs is used, the bidirectional converter can brake the motor efficiently by returning the stored rotational energy to the ac source and can subsequently reverse the motor's direction of rotation.

Active Power Factor Corrector. In many instances the basic converter topologies of Table 13.2.1 and the additional converter topologies of Tables 13.2.2a to 13.2.2g cannot meet increasingly strict power factor and harmonic current limits without the addition of expensive passive filters operating at line frequency. The active power factor corrector, illustrated in Table 13.2.2i, is one solution to this problem (Rippel, 1979; Kocher, 1982; Latos, 1982). The output filter inductor LO is replaced by a high-frequency filter and a dc-to-dc converter. The dc-to-dc converter uses high-frequency switching and a fast control loop to actively control the waveshape of iX, and therefore the waveshape of iS, for near-unity displacement power factor and near-unity purity factor, resulting in near-unity power factor ac-to-dc conversion (Huliehel, 1992). The high-frequency filter is required to prevent dc-to-dc converter switching noise from reaching the ac source. A slower control loop regulates VO against changes in source voltage and load. Because the dc-to-dc converter regulates VO over a wide range of source voltage, the active power factor corrector can be designed for a universal input that allows it to operate from nearly any ac voltage source. The active power factor corrector is most commonly used at lower power levels.

Higher Pulse Numbers. When strict power factor and harmonic limits are imposed at higher power levels, and the active power factor corrector cannot be used, the performance of the basic rectifier is improved by increasing the pulse number q, thereby eliminating the current harmonics IS(h) for certain harmonic numbers, as shown by Eq. (4). Tables 13.2.2j and 13.2.2k illustrate two examples based on the three-phase six-pulse bridge rectifier (q = 6) of Table 13.2.1c. The six-pulse rectifiers are shown connected in series in Table 13.2.2j and in parallel in Table 13.2.2k. The parallel connection in Table 13.2.2k requires an interphase reactor to prevent commutation of the SCRs in one rectifier from interfering with commutation of the SCRs in the other; the interphase reactor also helps the two rectifiers share the load equally. Both approaches use a Y-Y transformer connection to supply one six-pulse rectifier and a ∆-Y transformer connection to supply the second. The primary-to-secondary voltage phase shift of the ∆-Y transformer means the two rectifiers operate out of phase with each other, producing 12-pulse operation (q = 12). As described previously, the ∆-Y transformer also produces a secondary-to-primary phase shift of the current harmonics. As a result, at the point of connection to the ac source, harmonics from the ∆-Y-connected six-pulse rectifier cancel the harmonics from the Y-Y-connected six-pulse rectifier for certain harmonic numbers. For example, the harmonics cancel for h = 5 and 7, but not for h = 11 and 13. Thus the total harmonic distortion for current THDi and the total power factor of the 12-pulse converter are greatly improved in comparison to either six-pulse converter alone.
The 12-pulse output voltage ripple filtering requirement is also greatly reduced compared with that of a single six-pulse rectifier. This principle can be extended to even higher pulse numbers by using additional six-pulse rectifiers and transformer phase-shift connections.

High-Voltage Dc Transmission. High-voltage dc (HVDC) transmission is a method for transmitting power over long distances while avoiding certain problems associated with long-distance ac transmission. This requirement often arises when a large hydroelectric power generator is located a great distance from a large load such as a major city. The hydroelectric generator's relatively low ac voltage is stepped up by a transformer, and a phase-controlled rectifier converts it to a dc voltage of a megavolt or more. After transmission over a long distance, a line-commutated inverter and transformer convert the dc back to ac and supply the power to the load. Alternatively, the rectifier and inverter are co-located and used as a tie between adjacent utilities. This arrangement can be used to actively control power flow between utilities and to change frequency between adjacent utilities operating at 50 and 60 Hz. With power in the gigawatt range, this is perhaps the highest-power application of a power electronic converter. To ensure a stable system, the control algorithms of the rectifier and inverter must be carefully coordinated. Note that since the highest voltage rating of an individual SCR is less than 10 kV, many SCRs are connected in series to form a valve capable of blocking the large dc voltage. Both the rectifier and the inverter use a high pulse number to minimize filtering at the point of connection to the ac source.

REFERENCES

Dewan, S. B., and A. Straughen, "Power Semiconductor Circuits," Wiley, 1975.
Huliehel, F. A., F. C. Lee, and B. H. Cho, "Small-signal modeling of the single-phase boost high power factor converter with constant frequency control," Record of the 1992 IEEE Power Electronics Specialists Conference (PESC ’92), pp. 475–482, June 1992.
IEC-1000, Electromagnetic Compatibility (EMC), Part 3: Limits, Section 2: Limits for Harmonic Current Emissions (formerly IEC-555-2), 1995.
IEEE Standard 519, "IEEE Recommended Practices and Requirements for Harmonic Control in Electrical Power Systems," 1992.
Kelley, A. W., and W. F. Yadusky, "Phase-controlled rectifier line-current harmonics and power factor as a function of firing angle and output filter inductance," Proc. IEEE Applied Power Electronics Conf., pp. 588–597, March 1990.
Kelley, A. W., and W. F. Yadusky, "Rectifier design for minimum line-current harmonics and maximum power factor," IEEE Trans. Power Electronics, Vol. 7, No. 2, pp. 332–341, April 1992.
Kocher, M. J., and R. L. Steigerwald, "An ac-to-dc converter with high-quality input waveforms," Record of the 1982 IEEE Power Electronics Specialists Conference (PESC ’82), pp. 63–75, June 1982.
Latos, T. S., and D. J. Bosak, "A high-efficiency 3-kW switchmode battery charger," Record of the 1982 IEEE Power Electronics Specialists Conference (PESC ’82), pp. 341–349, June 1982.
Pelly, B. R., "Thyristor Phase-Controlled Converters and Cycloconverters," Wiley, 1971.
Rashid, M., "Power Electronics: Circuits, Devices, and Applications," 2nd ed., Prentice Hall, 1993.
Rippel, W. E., "Optimizing boost chopper charger design," Proceedings of the Sixth National Solid-State Power Conversion Conference (POWERCON 6), pp. D1-1–D1-20, 1979.
Schaeffer, J., "Rectifier Circuits: Theory and Design," Wiley, 1965.

CHAPTER 13.3

DC-DC CONVERTERS
Philip T. Krein

INTRODUCTION

Power conversion among different dc voltage and current levels is important in applications ranging from spacecraft and automobiles to personal computers and consumer products. Power electronics technology can be used to create a dc transformer function for power processing. Today, most dc power supplies rectify the incoming ac line, then use a dc-dc converter to provide a transformer function and produce the desired output voltages. Dc-dc designs often use input voltages near 170 V (the peak value of rectified 120-V ac) or 300 to 400 V (the peak values of 230- and 240-V ac and of many three-phase sources). For direct dc inputs, 48-V sources and 28-V sources reflect practice in the telecommunications and aerospace industries, respectively. Universal-input power supplies commonly handle rectified power from sources ranging between 85 and 270 V ac.

There are a number of detailed treatments of dc-dc converters in the literature. The book by Severns and Bloom (1985) explores a wide range of topologies. Mitchell (1988) offers detailed analysis and extensive treatment of control issues. Chryssis (1989) compares topologies from a practical standpoint and addresses many aspects of actual implementation. Middlebrook has published several exhaustive treatments of various topologies; one example (Middlebrook, 1989) addresses some key attributes of analysis and control. The discussion here follows the treatment in Krein (1998). More recently, Erickson and Maksimović (2001) have detailed many operation and control aspects of modern dc-dc converters.

DIRECT DC-DC CONVERTERS

The most general dc-dc conversion process is based on a switch matrix that interconnects two dc ports. The two dc ports need to have complementary characteristics, since Kirchhoff's laws prohibit direct interconnection of unlike voltages or currents. A generic example, called a direct converter, is shown in Fig. 13.3.1a. A more complete version is shown in Fig. 13.3.1b, in which an inductor provides the characteristics of a current source. In the figure, only four switch combinations can be used without shorting the voltage source or opening the current source:

  Combination             Result
  Close 1,1 and 2,2       Vd = Vin
  Close 2,1 and 2,2       Vd = 0
  Close 1,1 and 1,2       Vd = 0
  Close 1,2 and 2,1       Vd = –Vin

FIGURE 13.3.1 Dc voltage to dc current direct converter: (a) general arrangement; (b) circuit realization.

Switch action selects among –Vin, 0, and +Vin to provide a desired average output. The energy flow is controlled by operating the switches periodically, then adjusting the duty ratio to manipulate the average behavior. Duty-ratio control, or pulse-width modulation, is the primary control method for most dc-dc power electronics. The switching frequency can be chosen somewhat arbitrarily with this method. Typical rates range from 50 to 500 kHz for converters operating up to 200 W, and from 20 to 100 kHz for converters operating up to 2 kW. In dc-dc circuits that use soft-switching or resonant switching techniques, the switching frequency is usually adjusted to match internal circuit resonances.

Switching functions q(t) can be defined for each switch in the converter. A switching function has the value 1 when the associated switch is on, and 0 when it is off. The converter voltage vd in Fig. 13.3.1b can be written in terms of the switching functions in the compact form

    v_d(t) = q_{1,1} q_{2,2} V_{in} - q_{1,2} q_{2,1} V_{in}        (1)
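As a minimal illustration of duty-ratio control (an assumed numerical example, not taken from the text), the sketch below builds one period of a switching function q1(t) with duty ratio D and confirms that the average of vd(t) = q1(t)Vin equals DVin:

    # Illustrative values: 100-V source, duty ratio 0.4, 10-us period, 1000 samples
    Vin, D, N = 100.0, 0.4, 1000

    # q1 = 1 for the first D*T of the period, 0 afterward; vd = q1 * Vin
    vd_samples = [Vin if (k / N) < D else 0.0 for k in range(N)]
    print(sum(vd_samples) / N)   # ~40.0 = D * Vin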

For power flow in one direction, two switches suffice. The usual practice is to establish a common ground between the input and output, equivalent to permanently turning on switch 2,2 and turning off switch 1,2 in Fig. 13.3.1. This simplified circuit is shown in Fig. 13.3.2. The transistor is generic: a BJT, MOSFET, IGBT, or other fully controlled device can be used. The voltage vd(t) in this circuit becomes vd(t) = q1(t)Vin. Of interest is the dc or average value of the output, indicated by the angle brackets as 〈vout(t)〉. The inductor cannot sustain an average voltage in the periodic steady state, so the resistor voltage has average value 〈vout(t)〉 = 〈vd(t)〉. The voltage vd(t) is a pulse train with period T, amplitude Vin, and an average value related to the duty ratio of the switching function. Therefore,

    \langle v_d(t) \rangle = \langle v_{out}(t) \rangle = \frac{1}{T}\int_0^T q_1(t)\, V_{in}\, dt = D_1 V_{in}        (2)

where T is the switching period and D1 is the duty ratio of the transistor. (Typically, computing an average value in a dc-dc converter is equivalent to substituting a switch duty ratio D for a switching function q.)

FIGURE 13.3.2 Common-ground direct converter (buck converter).

TABLE 13.3.1 Buck Converter Characteristics

  Characteristic                Value
  Input–output relationships    Vout = D1Vin; Iin = D1Iout
  Device ratings                Must handle Vin when off, and Iout when on
  Open-loop load regulation     Load has no direct effect on output
  Open-loop line regulation     No line regulation: line changes are reflected directly at the output; feedback control is required
  Ripple                        Governed by choice of inductor; typically 50 mV peak-to-peak or 1 percent (whichever is higher) can be achieved

If the L-R pair serves as an effective low-pass filter, the output voltage will be a dc value Vout = 〈vd(t)〉. The circuit has the basic characteristics of a transformer, with the restriction that Vout ≤ Vin. The name buck converter reflects this behavior. The buck converter, sometimes called a buck regulator or a step-down converter, is the basis for many more sophisticated dc-dc converters. Some of its characteristics are summarized in Table 13.3.1. Load regulation is perfect in principle if the inductor maintains current flow. Line regulation requires closed-loop control. Although these relationships are based on ideal, lossless switches, the analysis process applies to more detailed circuits. The following example illustrates the approach.

Example: Relationships in a dc-dc converter. The circuit of Fig. 13.3.3 shows a dc-dc buck converter with switch on-state voltage drops taken into account. What is the input–output voltage relationship? How much power is lost in the converter?

To analyze the effect of switch voltages, KVL and KCL relations can be written in terms of switching functions. When averages are computed, variables such as inductor voltages and capacitor currents are eliminated, since these elements cannot sustain dc voltage and current, respectively. Circuit laws require

    v_d(t) = q_1(t)(V_{in} - V_{s1}) - q_2(t)V_{s2}, \qquad V_{out} = v_d(t) - v_L
    i_{in}(t) = q_1(t) I_L, \qquad I_{out}(t) = I_L - i_C(t)        (3)

FIGURE 13.3.3 Buck converter with switch forward drop models.

In this circuit, the inductor will force the diode to turn on whenever the transistor is off. This can be represented with the expression q1(t) + q2(t) = 1. When the average behavior is computed, the duty ratios must follow the relationship D1 + D2 = 1. The average value of vd(t) must match Vout, and the average input current will be the duty ratio of switch 1 multiplied by the inductor current. These relationships reduce to

    V_{out} = D_1(V_{in} - V_{s1} + V_{s2}) - V_{s2}
    \langle i_{in}(t)\rangle = D_1 I_{out}        (4)
    P_{in} = \langle i_{in}(t)\rangle V_{in} = D_1 V_{in} I_{out}
    P_{out} = V_{out} I_{out} = D_1(V_{in} - V_{s1} + V_{s2}) I_{out} - V_{s2} I_{out}

When the switch voltage drops Vs1 and Vs2 have similar values, the loss fraction is approximately the ratio of the diode drop to the output voltage.

The primary design considerations are to choose an inductor and capacitor to meet requirements on output voltage ripple. The design process is simple if a small-ripple assumption is used: since Vout is nearly constant, the voltage across the inductor will be a pulsed waveform at the switching frequency. The current iL(t) will exhibit triangular ripple. Some waveform samples appear in Fig. 13.3.4. The current swings over its full peak-to-peak ripple during either the transistor on time or the diode on time. The example below takes advantage of the triangular variation to compute the expected ripple.

FIGURE 13.3.4 Buck converter waveforms.

Example: Buck converter analysis. A buck converter circuit with an R-L load and a switching frequency of 200 kHz is shown in Fig. 13.3.5. The transistor exhibits an on-state drop of 0.5 V, while the diode has a 1-V forward drop. Determine the output ripple for 15 V input and 5 V output.

From Eq. (4), the duty ratio of switch 1 should be 6/15.5 = 0.387. Switch 1 should be on for 1.94 µs, then off for 3.06 µs. At 5 V output, the inductor voltage is 9.5 V with switch 1 on and –6 V with switch 1 off. The inductor current has di/dt = (9.5 V)/(200 µH) = 47.5 kA/s when switch 1 is on and di/dt = –30 kA/s when switch 2 is on. During the on time of switch 1, the current increases by (47,500 A/s)(1.94 µs) = 0.092 A. Here, the time constant L/R = 200 µs, which is 65 times the switch 2 on time, so the current change is expected to be small and nearly linear. The current change of 0.092 A produces an output voltage change of 0.092 V for this 1-Ω load.

Figure 13.3.6 shows some of the important waveforms in idealized form. The output voltage is nearly constant, with Vout = 5 ± 0.046 V, consistent with the assumptions in the analysis. The output power is 25 W. The average input current is D1Iout = 0.387(5 A) = 1.94 A. The input power therefore is 29.0 W, and the efficiency is 86 percent. This neglects any energy consumed in the commutation process.
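The example numbers can be checked with a short script. The following Python sketch (illustrative; it simply restates Eq. (4) and the di/dt relations used above, with the component values given in the text) reproduces the duty ratio, ripple, and efficiency:

    f_sw = 200e3                 # switching frequency, Hz (period T = 5 us)
    T = 1.0 / f_sw
    Vin, Vout = 15.0, 5.0        # input and output voltages, V
    Vs1, Vs2 = 0.5, 1.0          # transistor and diode forward drops, V
    L, R = 200e-6, 1.0           # filter inductance (H) and load resistance (ohm)

    D1 = (Vout + Vs2) / (Vin - Vs1 + Vs2)   # Eq. (4) solved for D1: 6/15.5 = 0.387
    t_on = D1 * T                           # ~1.94 us
    di_dt_on = (Vin - Vs1 - Vout) / L       # 9.5 V / 200 uH = 47.5 kA/s
    ripple = di_dt_on * t_on                # ~0.092 A peak to peak
    Iout = Vout / R
    Pin = D1 * Iout * Vin                   # 29.0 W
    Pout = Vout * Iout                      # 25 W
    print(D1, t_on, ripple, Pout / Pin)     # 0.387, 1.94e-06, 0.092, ~0.86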


FIGURE 13.3.5 Buck converter example.

More generally, the buck converter imposes Vin − Vout on the inductor while the transistor is on. Since this voltage is constant over the interval, di/dt is constant, and the current change is the linear ramp ∆iL/∆t. For the on-time interval ∆t = D1T, the peak-to-peak ripple ∆iL is

∆iL = (Vin − Vout)D1T/L = Vin(1 − D1)D1T/L        (5)

If only inductive filtering is used, the output voltage ripple is the load resistance times the current ripple. If an output capacitor is added across the load resistor, its effect can be found by treating the inductor as an equivalent triangular current source, then solving for the output voltage. Assuming that the capacitor handles the full ripple current, it is straightforward to integrate the triangular current to compute the ripple voltage. The process is illustrated in Fig. 13.3.7. Voltage vC(t) will increase whenever iC(t) > 0. The voltage increase is given by

∆vC = (1/C) ∫[0 to T/2] iC(t) dt        (6)

The integral is the triangular area (1/2)(T/2)(∆iL/2), so

∆vC = T∆iL/(8C)        (7)

FIGURE 13.3.6 Buck converter inductor voltage and output current.



FIGURE 13.3.7 Output ripple effect with the addition of a capacitive filter.

This expression is accurate if the capacitor is large enough to provide significant voltage ripple reduction.

An alternative direct dc-dc converter has a current-source input and voltage-source output. The relationships for this boost converter are dual to those of the buck circuit. The input current and output voltage act as fixed source values, while the input voltage and output current are determined by switch matrix action. For the common-ground version in Fig. 13.3.8b, the transistor and diode must operate in complement, so that q1 + q2 = 1 and D1 + D2 = 1. For ideal switches,

q1 + q2 = 1
vt(t) = q2Vout = (1 − q1)Vout
iout(t) = q2Iin = (1 − q1)Iin        (8)

and the averages are

〈vt〉 = D2Vout = (1 − D1)Vout,    〈iout〉 = (1 − D1)Iin

With these energy storage devices, notice that Vin = 〈vt〉 and Iout = 〈iout〉. The relationships can be written

Vout = Vin/(1 − D1)    and    Iin = Iout/(1 − D1)        (9)

The output voltage will be higher than the input. The boost converter uses an inductor at the input to create current-source behavior and a capacitor at the output to provide voltage-source characteristics.

FIGURE 13.3.8 Boost dc-dc converter: (a) general arrangement; (b) common-ground version.


TABLE 13.3.2 Relationships for the Boost Converter

Input–output relationships: Vout = Vin/(1 − D1), Iout = Iin(1 − D1)
Open-loop load regulation: Load does not alter output
Open-loop line regulation: Unregulated without control
Device ratings: Must handle Vout when off or Iin when on
Input–output relationships with switch drops: Vout = (Vin − Vs2 − D1Vs1 + D1Vs2)/(1 − D1), Iout = Iin(1 − D1)
Inductor ripple relationship: ∆iL = VinD1T/L (if ∆iL is small compared to the input current)
Capacitor ripple relationship: ∆vC = IoutD1T/C

The capacitor is exposed to a square-wave current signal, and produces a triangular ripple voltage in response. Table 13.3.2 provides a summary of relationships, based on ideal switches.
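The Table 13.3.2 entries translate directly into a design check. Here is a minimal sketch with assumed operating values (ideal switches; every number below is an assumption for illustration):

```python
V_in, V_out = 12.0, 48.0   # assumed operating point, V
I_out = 2.0                # assumed load current, A
f_sw = 100e3               # assumed switching frequency, Hz
L, C = 100e-6, 10e-6       # assumed inductor and capacitor values

T = 1.0 / f_sw
D1 = 1.0 - V_in / V_out        # from Vout = Vin/(1 - D1): 0.75
I_in = I_out / (1.0 - D1)      # 8 A of input current

ripple_i = V_in * D1 * T / L   # inductor ripple from Table 13.3.2: 0.9 A
ripple_v = I_out * D1 * T / C  # capacitor ripple from Table 13.3.2: 1.5 V
print(D1, I_in, ripple_i, ripple_v)
```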

INDIRECT DC-DC CONVERTERS

Cascade arrangements of buck and boost converters are used to avoid limitations on the magnitude of Vout. A buck-boost cascade is developed in Fig. 13.3.9. This is an example of an indirect converter because at no point in time does power flow directly from the input to the output. Some of the switches in the cascade are redundant and can be removed. In fact, only two switches are needed in the final result, shown in Fig. 13.3.10. The current source, called a transfer current source, has been replaced by an inductor; the transfer current source value is Is. The voltage across the inductor, vt, is Vin when switch 1 is on, and −Vout when switch 2 is on. The inductor cannot sustain a dc voltage drop, so 〈vt〉 = 0. The voltage relationships are

q1 + q2 = 1
vt = q1Vin − q2Vout
〈vt〉 = 0 = D1Vin − D2Vout        (10)

FIGURE 13.3.9 Cascaded buck and boost converters. (From Krein (1998), copyright © 1998 Oxford University Press, Inc., U.S.; used by permission.)



FIGURE 13.3.10 Buck-boost converter.

The last part of Eq. (10) requires D1Vin = D2Vout in steady state. The switches must act in complement, so D1 + D2 = 1, and

Vout = [D1/(1 − D1)]Vin        (11)

A summary of results is given in Table 13.3.3. The cascade process produces a negative voltage with respect to the input. This polarity reversal property is fundamental to the buck-boost converter.

A boost-buck cascade also allows full output range with a polarity reversal. As in the buck-boost case, many of the switches are redundant in the basic cascade, and only two switches are required. The final circuit, with energy storage elements in place, is shown in Fig. 13.3.11. The center capacitor serves as a transfer voltage source. The transfer source must exhibit 〈it〉 = 0, since a capacitor cannot sustain dc current. Some of the major relationships are summarized in Table 13.3.4. In the literature, this arrangement is called a Ćuk converter, after the developer who patented it in the mid-1970s (Middlebrook and Ćuk, 1977). The transfer capacitor must be able to sustain a current equal to the sum of the input and output currents. The RMS capacitor current causes losses in the capacitor's internal equivalent series resistance (ESR), so low-ESR components are usually required.

Figure 13.3.12 shows the single-ended primary inductor converter, or SEPIC circuit (Massey and Snyder, 1977). This is a boost-buck-boost cascade. As in the preceding cases, the cascade arrangement can be simplified to require only two switches. The transfer sources Ct and Lt carry zero average power, to be consistent with a capacitor and an inductor as the actual devices. The relationships are

vin = q2(Vout + Vt1),      〈vin〉 = D2(Vout + Vt1)
iout = q2(Iin + It2),      〈iout〉 = D2(Iin + It2)
it1 = −q1It2 + q2Iin,      〈it1〉 = 0 = −D1It2 + D2Iin
vt2 = −q1Vt1 + q2Vout,     〈vt2〉 = 0 = −D1Vt1 + D2Vout
q1 + q2 = 1,               D1 + D2 = 1        (12)

TABLE 13.3.3 Buck-Boost Converter Relationships

Input–output voltage relationship: |Vout| = VinD1/(1 − D1), Iin = |Iout|D1/(1 − D1); output is negative with respect to input
Device ratings: Must handle |Vin| + |Vout| when off, |Iin| + |Iout| when on
Inductor current: IL = |Iin| + |Iout|
Regulation: Perfect load regulation. No open-loop line regulation
Inductor ripple current: ∆iL = VinD1T/L
Capacitor ripple voltage: ∆vC = |Iout|D1T/C
Output range: Ideally, any negative voltage can be produced


FIGURE 13.3.11 Boost-buck converter.


Some algebra will bring out the transfer source values and input–output ratios:

It2 = (D2/D1)Iin = [(1 − D1)/D1]Iin
Vt1 = (D2/D1)Vout = [(1 − D1)/D1]Vout        (13)

〈vin〉 = D2{Vout + [(1 − D1)/D1]Vout} = [(1 − D1)/D1]Vout

This is the same input–output ratio as the buck-boost converter, except that there is no polarity reversal. These and related indirect converters provide opportunities for the use of more sophisticated magnetics such as coupled inductors. In the Ćuk converter, for example, the input and output filter inductors are often coupled on a single core to cancel out part of the ripple current (Middlebrook and Ćuk, 1981). In a buck-boost converter, the transfer source inductor can be split by providing a second winding. One winding can be used to inject energy into the inductor, while the other can be used to remove it. The two windings provide isolation. This arrangement is known as a flyback converter because diode turn-on occurs when the inductor output coil voltage "flies back" as the input switch turns off. An example is shown in Fig. 13.3.13.

The flyback converter is one of the most common low-power dc-dc converters. It is functionally equivalent to the buck-boost converter. This is easy to see if the turns ratio between the windings is unity. The possibility of a nonunity turns ratio is a helpful extra feature of the flyback converter. Extreme step-downs, such as the 170-V to 5-V conversion often used in a dc power supply, can be supported with reasonable duty ratios by selecting an appropriate turns ratio. In general, flyback converters are designed to keep the nominal duty ratio close to 50 percent. This tends to minimize the energy storage requirements, and keeps the sensitivity to variation as low as possible.

TABLE 13.3.4 Relationships for Boost-Buck Converter

Input–output relationships: |Vout| = D1Vin/(1 − D1), Iin = D1|Iout|/(1 − D1)
Device ratings: Must handle |Vin| + |Vout| when off. Must handle |Iin| + |Iout| when on
Regulation: Perfect load regulation, no line regulation


FIGURE 13.3.12 The SEPIC converter. Two transfer sources permit any input-to-output ratio without polarity reversal.

FIGURE 13.3.13 Flyback converter.

An additional advantage appears when several different dc supplies are needed: if it is possible to use two separate coils on the magnetic core of the inductor, it should be just as reasonable to use three, four, or even more coils. Each can have its own turns ratio with mutual isolation. This is the basis for many types of multi-output dc power supplies.

One challenge with a flyback converter, as shown in Fig. 13.3.14, is the primary leakage inductance. During transistor turn-off, the leakage inductance energy must be removed. A capacitor or other snubber circuit is used to avoid damage to the active switch during turn-off.
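As a rough check of the turns-ratio argument above, the sketch below applies the standard flyback relationship, which is the buck-boost ratio of Eq. (11) scaled by the turns ratio (consistent with the unity-turns equivalence the text notes); the 170-V to 5-V numbers come from the text, and the formula itself is the assumed standard form rather than something derived here.

```python
V_in, V_out = 170.0, 5.0   # the off-line step-down cited in the text, V
D = 0.5                    # nominal duty ratio near 50 percent

# Standard flyback relationship (buck-boost ratio of Eq. (11) scaled by
# the winding ratio): |Vout| = (N2/N1) * Vin * D / (1 - D).
turns_ratio = V_out * (1.0 - D) / (V_in * D)
print(turns_ratio)   # ~0.029, i.e., roughly a 34:1 step-down winding
```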

FIGURE 13.3.14 Leakage inductance issue in flyback converter.


FORWARD CONVERTERS

Coupled inductors in indirect converters, unlike transformers, must store energy and carry a net dc current. Basic buck and boost circuits lack a transfer source, so a coupled inductor will not give them isolation properties. Instead, a buck or boost converter can be augmented with a transformer, inserted at a location with only ac waveforms. Circuits based on this technique are called forward converters. A transformer can be added either by providing a catch-winding tertiary or other circuitry for flux resetting, or by using an ac link arrangement. With either alternative, the objective is to avoid saturation caused by dc current.

Figure 13.3.15 shows the catch-winding alternative in a buck converter. The tertiary allows the core flux to be reset while the transistor is off. Operation is as follows: the transistor carries the primary current i1 and also the magnetizing current im when it is on. Voltage Vin is imposed on the primary, and the flux increases. When the transistor turns off, the magnetizing inductance will maintain the current flow in coil 1, such that i1 = −im. The diode D3 permits current i3 = im(N1/N3) to flow. The tertiary voltage v3 flies back to −Vin, resetting the flux. If N1 = N3, the duty ratio of switch 1 must not exceed 50 percent so that there will be enough time to bring the flux down sufficiently. If it is desired to reach a higher duty ratio, the ratio N1/N3 must be at least D1/(1 − D1).

With a catch winding, the primary carries a voltage v1 = −Vin(N1/N3) after the transistor turns off. The transistor must be able to block Vin(1 + N1/N3) to support this voltage. For power supplies, this can lead to extreme ratings. For example, an off-line supply designed for 350-Vdc input with N1/N3 = 1.5 to support duty ratios up to 60 percent requires a transistor rating of about 1000 V. This extreme voltage rating is an important drawback of the catch-winding approach.

The secondary voltage v2 in this converter is positive whenever the transistor is on. The diode D1 will exhibit the same switching function as the transistor, and the voltage across D2 will be just like the diode voltage of a buck converter except for the turns ratio. The output and its average value will be

vout = q1Vin(N2/N1),    〈vout〉 = D1Vin(N2/N1)        (14)

so this forward converter is termed a buck-derived circuit.

The ac link configuration comprises an inverter-rectifier cascade, such as the buck-derived half-bridge converter in Fig. 13.3.16. With adjustment of duty ratio, a waveform such as that shown in Fig. 13.3.17a is typically used as the inverter output. The signal has no dc component, and therefore a transformer can be used. Once full-wave rectification is performed, the result will be the square wave of Fig. 13.3.17b. The average output, with a the transformer turns ratio, is

〈vout〉 = 2aDVin        (15)

FIGURE 13.3.15 Catch-winding forward converter. (From Krein (1998), copyright © 1998 Oxford University Press, Inc., U.S.; used by permission.)



FIGURE 13.3.16 Half-bridge forward converter.

reflecting the fact that there are two output pulses during each switching period. No switch on the inverter side will be on more than 50 percent of each cycle, and the transistors block only Vin when off.

Four other forward converter topologies are shown in Fig. 13.3.18. The full-bridge circuit in particular has been used successfully for power levels up to a few kilowatts. The others avoid the complexity of four active switches, although possibly with a penalty. For example, the push-pull converter in the figure has the important advantage that both switch gate drives share a common reference node with Vin. Its drawback is that the transistor must block 2Vin when off, because of an autotransformer effect of the center-tapped primary. The topologies are compared in Table 13.3.5.

The full-bridge converter perhaps offers the most straightforward operation. The switches always provide a path for magnetizing and leakage inductance currents, and circuit behavior is affected little by these extra inductances. In other circuits, these inductances are a significant complicating factor. For example, magnetizing inductance can turn on the primary-side diodes in a half-bridge converter, altering the operation of duty-ratio control.

FIGURE 13.3.17 Typical waveforms in inverter-rectifier cascade. (From Krein (1998), copyright © 1998 Oxford University Press, Inc., New York, U.S.; used by permission.)


FIGURE 13.3.18 Four alternative forward converter topologies.


TABLE 13.3.5 Characteristics of Common Buck-Derived Forward Converters

Full-bridge: transistor off-state voltage Vin; full flux variation from −φmax to +φmax. Preferred for high power levels by many designers.
Half-bridge: transistor off-state voltage Vin; full flux variation. Capacitive divider avoids any dc offset in flux. Preferred at moderate power levels by many designers.
Single-ended: transistor off-state voltage Vin; flux variation only between 0 and +φmax. Less effective use of core, but only two transistors.
Push-pull: transistor off-state voltage 2Vin; full flux variation. Common-ground gate drives. Timing errors can bias the flux and drive the core into saturation.
Clamp: transistor off-state voltage Vin + Vz; flux variation only between 0 and +φmax. Similar to catch-winding circuit, except that energy in magnetizing inductance is lost.

Leakage inductance is a problem in the push-pull circuit, particularly since both transistors must be off to establish the portion of time when no energy is transferred from input to output. Snubbers are necessary parts of this converter. These issues are discussed at length in at least one text (Kassakian, Schlecht, and Verghese, 1991).

The boost converter also supports forward converter designs. A boost-derived push-pull forward converter is shown in Fig. 13.3.19. Like the boost converter, this circuit has an output higher than the input, but now with a turns ratio. The operation differs from a buck-derived converter in an important way: both transistors must turn on to establish the time interval when no energy flows from input to output. In effect, each transistor has a minimum duty ratio of 50 percent, and control is provided by allowing the switching functions to overlap. The output duty ratio associated with each diode becomes 1 − D, where D is the duty ratio of one of the transistors over a full switching period. The output voltage is

Vout = (N2/N1) Vin/[2(1 − D)]        (16)

The other forward converter arrangements also have boost-derived counterparts.
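The two stress and ratio results above can be spot-checked numerically. A minimal sketch; the 350-V catch-winding numbers come from the text, while the push-pull turns ratio and duty ratio are assumed values plugged into Eq. (16):

```python
# Catch-winding stress for the text's 350-V example.
V_in = 350.0
N1_over_N3 = 1.5                      # supports duty ratios up to 60 percent
stress = V_in * (1.0 + N1_over_N3)    # 875 V; "about 1000 V" with margin
print(stress)

# Boost-derived push-pull output from Eq. (16), with an assumed turns ratio.
D = 0.6                               # duty ratio of one transistor (>= 0.5)
N2_over_N1 = 2.0                      # assumed winding ratio
print(N2_over_N1 / (2.0 * (1.0 - D))) # Vout/Vin = 2.5
```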

FIGURE 13.3.19 Boost-derived push-pull converter.



In each of the converters discussed so far, it has been assumed that energy storage components have been sufficiently large to be treated as approximate current or voltage sources. This is not always the case. If values of inductance or capacitance are chosen below a certain value, the current or voltage, respectively, will reach zero when energy is extracted. This creates discontinuous mode behavior in a dc-dc converter. The values of L and C sufficient to ensure that this does not occur are termed critical inductance and critical capacitance, respectively. The usual effect of subcritical inductance is that all switches on the converter turn off together for a time. Subcritical capacitance in a boost-buck converter creates times when all switches are on together. In discontinuous mode, converter load regulation degrades, and closed-loop output control becomes essential. However, discontinuous mode can be helpful in certain situations. It implies fast response times since there is no extra time required for energy buildup. It provides an additional degree of freedom—the extra configuration when all switches are off or on—for control purposes. Discontinuous mode behavior can be analyzed through the techniques of this subsection, with the additional constraint that all energy in the storage element is removed during each switching period. The literature (Mitchell, 1988; Mohan, Undeland, and Robbins, 1995) provides a detailed analysis, including computation of critical inductance and capacitance.
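For the buck converter, the critical inductance follows from Eq. (5): the inductor current just reaches zero when the peak-to-peak ripple equals twice the average inductor (load) current. This boundary condition is the standard discontinuous-mode argument rather than a formula stated in the text, and the numbers below simply reuse the earlier example with ideal switches:

```python
V_in, V_out = 15.0, 5.0    # ideal-switch version of the earlier buck example
f_sw = 200e3               # switching frequency, Hz
I_out = 5.0                # average inductor (load) current, A

T = 1.0 / f_sw
D1 = V_out / V_in          # ideal duty ratio, 1/3

# DCM boundary: the Eq. (5) ripple just reaches 2*I_out.
L_crit = V_in * (1.0 - D1) * D1 * T / (2.0 * I_out)
print(L_crit)              # ~1.7e-6 H; smaller inductors give discontinuous mode
```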

RESONANT DC-DC CONVERSION TECHNIQUES

The dc-dc converters examined thus far operate their switches in a square-wave or hard-switched mode. Switch action is strictly a function of time. Since switch action requires finite time, hard-switched operation produces significant switching loss. Resonant techniques for soft switching attempt to maintain voltages or currents at low values during switch commutation, thereby reducing losses. Zero-current switching (ZCS) or zero-voltage switching (ZVS) can be performed in dc converters by establishing resonant combinations. Resonant approaches for soft switching are discussed extensively in Kazimierczuk and Czarkowski (1995).

The SCR supports natural zero-current switching, since turn-off corresponds to a current zero crossing. A basic arrangement, given in Fig. 13.3.20, is often used as the basis for soft switching in transistor-based dc-dc converters as well as for inverters. Starting from rest, the top SCR is triggered. This applies a step dc voltage to the RLC set. If the quality factor of the RLC set is more than 1/2, the current is underdamped and will oscillate. When the current swings back to zero, the SCR will turn off with low loss. After that point, the lower SCR can be triggered for the negative half-cycle.

The SCR inverter represents a series resonant switch configuration and provides ZCS action. However, SCRs are not appropriate for high-frequency dc-dc conversion because of their long switching times and control limitations. The circuit of Fig. 13.3.21 shows an interesting arrangement for dc-dc conversion, similar to the SCR circuit, but based on a fast MOSFET. In this case, an inductor and a capacitor have been added to a standard buck converter to alter the transistor action. Circuit behavior will depend on the relative values.

FIGURE 13.3.20 SCR soft-switching inverter. (From Krein (1998), copyright © 1998 Oxford University Press, Inc., U.S.; used by permission.)


FIGURE 13.3.21 A soft-switching arrangement for a buck converter.

If the capacitor Ct is large, its behavior during the transistor's off interval will introduce opportunities for resonant switching. In this case, the basic circuit action is as follows:

• When the transistor turns off, the input voltage excites the pair Lin and Ct. The input inductor current begins to oscillate with the capacitor voltage. The capacitor can be used to keep the transistor voltage low as it turns off.

• The capacitor voltage swings well above Vin, and the main output diode turns on.

FIGURE 13.3.22 Input current and diode voltage in a resonant buck converter. (From Krein (1998), copyright © 1998 Oxford University Press, Inc., U.S.; used by permission.)


• The capacitor voltage swings back down. There might be an opportunity for zero-voltage turn-on of the transistor when the capacitor voltage swings back to zero. This represents ZVS action.

When the transistor is on and the main diode is off, the input inductor forms a resonant pair with Cd. This pair provides an opportunity for zero-current switching at transistor turn-off, very much like the zero-current switch action in the SCR inverter. The action is as follows in this case:

• When the transistor is turned on, Lin limits the rate of rise of current. The diode remains on initially, and the current builds up linearly because Vin appears across the inductor.

• When the current rises to the level Iout, the diode shuts off and the transistor carries the full current. The pair Lin and Cd form a resonant LC pair, and the current oscillates.

• The current rises above Iout because of resonant action, but then swings back down toward the origin.

• When the current swings negative, the transistor's reverse body diode begins to conduct, and the gate signal can be shut off. When the current tries to swing positive again, the switch will turn off.

• The transistor on-time is determined by the resonant frequency and the average output current.

Figure 13.3.22 shows the input current and main diode voltage for a choice of parameters that gives ZCS action in the circuit of Fig. 13.3.21. Resonant action changes the basic control characteristics substantially. The gate control in both ZVS and ZCS circuits must be properly synchronized to match the desired resonance characteristics. This means that pulse-width modulation is not a useful control option. Instead, resonant dc converters are adjusted by changing the switching frequency, in effect setting the portion of time during which resonant action is permitted. In a ZCS circuit, for example, the average output voltage can be reduced by dropping the gate pulse frequency.

In general, ZCS or ZVS action is very beneficial for loss reduction. Lower switching losses permit higher switching frequencies, which in turn allow smaller energy storage elements to be used for converter design. Without resonant action, it is difficult to operate a dc-dc converter above perhaps 1 MHz. Resonant designs have been tested to frequencies as high as 10 MHz (Tabisz, Gradzki, and Lee, 1989). Designs even up to 100 MHz have been considered for aerospace applications. In principle, resonance provides size reduction of more than an order of magnitude compared to the best nonresonant designs. However, there is one important drawback: the oscillatory behavior substantially increases the on-state currents and off-state voltages that a switch must handle. Under some circumstances, the switching loss improvements of resonance are offset by the extra on-state losses caused by current overshoot. There are magnetic techniques to help mitigate this issue (Erickson, Hernandez, and Witulski, 1989), but they add complexity to the overall conversion system. The switching loss trade-offs tend to favor resonant designs at relatively low voltage and current levels. More sophisticated resonant design approaches, based on Class E methods (Kazimierczuk and Czarkowski, 1995), can further reduce losses.

Example: Input–output relationships in a ZCS dc-dc converter.

Let us explore ZCS switching in a dc-dc converter and analyze the results. The approach in this example follows an analysis in Kassakian, Schlecht, and Verghese (1991). The circuit in Fig. 13.3.21 is the focus, with Ct selected to be small and Lout selected to be large. Parameters are Vin = 24 V, Ct = 200 pF, Lin = 2 µH, Cd = 0.5 µF, Lout = 50 µH, and a 10-Ω load in parallel with an 8-µF filter capacitor. The FET is supplied with a 5-µs pulse with a period of 12 µs.

In periodic steady-state operation, the inductor Lout will carry a substantial current. The output time constant is long enough to ensure that the current will not change very much. As a result, the output inductor can be modeled as a current source, with value Iout. The diode provides a current path while the transistor is off. Consider the moment at which the transistor turns on. Since the current in Lin cannot change instantly, the diode remains on for a time while the input current rises. We have

iin(t) = (Vin/Lin) t        (17)


until the current iin reaches the value Iout. At the moment ton = IoutLin/Vin, the diode current reaches zero and the diode turns off. The input circuit becomes an undamped resonant tank determined by Lin and Cd. For Cd, circuit laws require

Vin − vCd(t) − LinCd d²vCd/dt² = 0,    vCd(ton) = 0,    dvCd/dt (ton) = 0        (18)

This has the solution

vCd(t) = Vin{1 − cos[ωr(t − ton)]},    where ωr = 1/√(LinCd)        (19)

For the input current, the corresponding solution is

iin(t) = Iout + (Vin/Zc) sin[ωr(t − ton)],    where Zc = √(Lin/Cd)        (20)

With the selected parameters, Zc = 2 Ω and ωr = 10⁶ rad/s, corresponding to about 160 kHz. The inductor current will cross zero again a bit more than one half-period after ton. In this example, Iout might be on the order of 1 A, so ton corresponds to only about 83 ns. The half-period of the resonant ring signal will be about 3.2 µs. Therefore, a 5-µs gate pulse should ensure that the transistor remains on until the zero-crossing point. When the current crosses zero, the FET turns off, but its reverse body diode turns on and maintains negative flow for approximately another half-cycle, until approximately t = 6.4 µs. Since the gate signal is removed between the zero crossings, the FET and its diode will both turn off at the second zero crossing. Figure 13.3.22 is a SPICE simulation for these circuit parameters. The ZCS action should be clear: since the gate pulse is removed while the FET's reverse body diode is active, the complete FET will shut off at a rising current zero crossing. The shut-off point toff is determined by

0 = Iout + (Vin/Zc) sin[ωr(toff − ton)],    ωr(toff − ton) = sin⁻¹(−IoutZc/Vin)        (21)

In solving this expression, it is crucial to be careful about the quadrant for sin⁻¹(x). The rising zero crossing is sought. Once the FET is off, capacitor Cd carries the full current Iout. The voltage vCd(t) will fall quickly with slope −Iout/Cd until the diode becomes forward biased and turns on. The voltage vCd(t) is of special interest, since the output is Vout = 〈vCd(t)〉. The average is

〈vCd〉 = (1/T) { ∫[ton to toff] Vin{1 − cos[ωr(t − ton)]} dt + ∫[toff to t(diode on)] [vCd(toff) − (Iout/Cd)(t − toff)] dt }        (22)

The second integral is the triangular area (1/2)vCd(toff)²(Cd/Iout). For Iout ≈ 1 A, the time toff − ton can be found from Eq. (21) to be 6.20 µs. The value vCd(toff) is therefore 83 mV. The average value is Vout = 12.6 V. This corresponds to Iout = 1.26 A. In this example, the average value comes out very close to

Vout = (4.8π × 10⁻⁵ V·s)/T        (23)

for T > 6.4 µs. The solution comes out quite evenly because the current zero-crossing times nearly match those of the resonant sine wave. In the ZCS circuit, the on time of the transistor is determined by resonant action, provided the gate pulse turns off during a time window when reverse current is flowing through the device's diode. The gate pulses need to have fixed duration to tune the circuit, but the pulse period can be altered to adjust the output voltage.
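The example's numbers can be reproduced directly from Eqs. (19) through (22). A minimal sketch, taking Iout ≈ 1 A as the text assumes and using the 2π − sin⁻¹ branch for the rising zero crossing; the small triangular tail of Eq. (22) is neglected:

```python
import math

V_in = 24.0                      # V
L_in, C_d = 2e-6, 0.5e-6         # 2 uH and 0.5 uF
I_out = 1.0                      # assumed load current, A
T = 12e-6                        # gate pulse period, s

w_r = 1.0 / math.sqrt(L_in * C_d)   # 1e6 rad/s, about 160 kHz
Z_c = math.sqrt(L_in / C_d)         # 2 ohms

# Rising zero crossing of Eq. (20): the 2*pi - asin branch of Eq. (21).
theta = 2.0 * math.pi - math.asin(I_out * Z_c / V_in)
t_res = theta / w_r                        # toff - ton, ~6.20 us
v_cd_off = V_in * (1.0 - math.cos(theta))  # ~83 mV

# Average of Eq. (22), keeping the dominant resonant segment of vCd(t).
V_out = V_in * (t_res - math.sin(theta) / w_r) / T
print(t_res, v_cd_off, V_out)   # ~6.2e-6 s, ~0.083 V, ~12.6 V
```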


BIBLIOGRAPHY

Baliga, B. J., “Modern Power Devices,” Wiley, 1987.
Baliga, B. J., “Power Semiconductor Devices,” PWS, 1996.
Chryssis, G. C., “High-Frequency Switching Power Supplies,” McGraw-Hill, 1989.
Erickson, R. W., A. F. Hernandez, and A. F. Witulski, “A nonlinear resonant switch,” IEEE Trans. Power Electronics, Vol. 4, No. 2, pp. 242–252, 1989.
Erickson, R. W., and D. Maksimović, “Fundamentals of Power Electronics,” 2nd ed., Kluwer Academic Publishers, 2001.
Hower, P. L., “Power semiconductor devices: An overview,” IEEE Proc., Vol. 82, pp. 1194–1214, 1994.
Hudgins, J. L., “A review of modern power semiconductor devices,” Microelect. J., Vol. 24, pp. 41–54, 1993.
Kassakian, J. G., M. F. Schlecht, and G. C. Verghese, “Principles of Power Electronics,” Addison-Wesley, 1991.
Kazimierczuk, M. K., and D. Czarkowski, “Resonant Power Converters,” Wiley, 1995.
Krein, P. T., “Elements of Power Electronics,” Oxford University Press, 1998. Portions used by permission.
Massey, R. P., and E. C. Snyder, “High voltage single-ended dc-dc converter,” Record, IEEE Power Electronics Specialists Conf., pp. 156–159, 1977.
Middlebrook, R. D., “Modeling current-programmed buck and boost regulators,” IEEE Trans. Power Electronics, Vol. 4, No. 1, pp. 36–52, 1989.
Middlebrook, R. D., and S. Ćuk, “A new optimum topology switching dc-to-dc converter,” Record, IEEE Power Electronics Specialists Conf., pp. 160–179, 1977.
Mitchell, D. M., “Dc-Dc Switching Regulator Analysis,” McGraw-Hill, 1988.
Mohan, N., T. M. Undeland, and W. P. Robbins, “Power Electronics: Converters, Applications and Design,” 2nd ed., Wiley, 1995.
Severns, R. P., and E. J. Bloom, “Modern dc-to-dc Switchmode Power Converter Circuits,” Van Nostrand Reinhold, 1985.
Tabisz, W. A., P. M. Gradzki, and F. C. Y. Lee, “Zero-voltage-switched quasi-resonant buck and flyback converters—experimental results at 10 MHz,” IEEE Trans. Power Electronics, Vol. 4, pp. 194–204, 1989.
Vithayathil, J., “Power Electronics: Principles and Applications,” McGraw-Hill, 1995.


CHAPTER 13.4

INVERTERS

David A. Torrey

INTRODUCTION

Inverters are used to convert dc into ac. This is accomplished through alternating application of the source to the load, achieved through proper use of controllable switches. This section reviews the basic principles of inverter circuits and their control. Four major applications of inverter circuits are also reviewed.

Both voltage- and current-source inverters are used in practice. The trend, however, is to use voltage-source inverters for the vast majority of applications. Current-source inverters are still used at extremely high power levels, though voltage-source inverters are gradually filling even these applications. Because of the dominance of voltage-source inverters, this section focuses exclusively on this type of inverter.

There are many issues involved in the design of an inverter. The more prominent issues involve the interactions among the power circuit, the source, the load, and the control. Other, subtler issues involve the control of parasitics, the protection of controllable switches through the use of snubber and clamp circuits, and the balancing of controller speed against the desire for increased switching frequency while maintaining high efficiency.

The technical literature contains abundant information on inverters. A set of technical papers is found in Bose (1992). In addition to technical papers, most power electronics textbooks have a section on inverters (Mohan, Undeland, and Robbins, 1995; Kassakian, Schlecht, and Verghese, 1991; Krein, 1998).

AN INVERTER PHASE-LEG

An inverter phase-leg is shown in Fig. 13.4.1. It comprises two fully controllable switches and two diodes in antiparallel with the controllable switches. This phase-leg is placed in parallel with a voltage source. The center of the phase-leg is taken to the load. The basic circuit shown in Fig. 13.4.1 is usually augmented with a snubber circuit or clamp to shape the switching locus of the controllable switches. Insulated gate bipolar transistors (IGBTs) are shown as the controllable switches in Fig. 13.4.1. While the IGBT finds significant application in inverters, any fully controllable device, such as an FET or a GTO, may be used in its place; see Chapter 13.1 for a description of the fully controllable switches that can be used in an inverter.

Basic Principles. In the phase-leg of Fig. 13.4.1, there are two restrictions on the use of the controllable switches. First, at most one controllable switch may be conducting at any time. The dc supply is shorted if both switches are conducting. In practice, one switch is turned off before the other is turned on. This blanking time, also known as dead time, compensates for the tendency of power devices to turn on faster than they turn off (a gating sketch appears after this paragraph). Second, at least one controllable switch (or an associated diode) must be on at all times if the load current is to be nonzero. If the upper switch is conducting, the load is connected to the positive side of Vdc. If the lower switch is conducting, the load is connected to the negative side of Vdc. It follows that the voltage applied to the load will, on average, fall somewhere between 0 and Vdc.
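The blanking-time rule lends itself to a compact illustration. The sketch below is mine, with step-based timing and assumed parameters; it only demonstrates the rule that both switches stay off during the dead time, not a timing-accurate gate-drive model.

```python
def phase_leg_gates(reference, dead_steps):
    """Complementary gate commands with a blanking (dead-time) interval.

    reference: sequence of 0/1 values, one per control step; 1 requests
               the upper switch, 0 the lower switch.
    dead_steps: number of steps both switches are held off at each edge.
    """
    upper, lower = [], []
    hold = 0                     # remaining blanking steps
    prev = reference[0]
    for r in reference:
        if r != prev:            # an edge: start a blanking interval
            hold = dead_steps
            prev = r
        if hold > 0:
            upper.append(0)      # both off; load current flows in a diode
            lower.append(0)
            hold -= 1
        else:
            upper.append(1 if r else 0)
            lower.append(0 if r else 1)
    return upper, lower

# Square-wave reference with a two-step blanking time.
up, low = phase_leg_gates([1] * 6 + [0] * 6, dead_steps=2)
assert not any(u and l for u, l in zip(up, low))  # bus is never shorted
```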


When the phase-leg of Fig. 13.4.1 is used with one or more additional phase-legs, the load voltage can be made to alternate. The details of how it alternates are the responsibility of the controller.

The peak voltage seen by each switch is the total dc voltage across the phase-leg. This is determined by recognizing that one of the two switches is always conducting. The peak current that must be supported by each switch is the peak current of the load. Under balanced control of the two switches, each switch must support the same peak current; each diode must support the same peak current as the switches.

FIGURE 13.4.1 An inverter phase-leg.

In an effort to improve the efficiency and spectral performance of inverters, the use of inverters with a resonant dc link has been reported in the technical literature (Divan, 1989; Murai and Lipo, 1988). Through periodic resonance of the dc bus voltage to zero, or the dc bus current to zero, the inverter switches can change states in synchronism with these zero crossings in order to reduce the switching losses in the power devices. Figure 13.4.2 shows a schematic for the basic resonant dc link converter (Divan, 1989). The resonance of Lr and Cr forces the bus voltage (the voltage applied to the controllable switches) to swing between zero and 2Vdc. The inverter switches change states when the voltage across Cr is zero. It is often necessary to hold the bus voltage at zero for a brief time to ensure that sufficient energy has been put into Lr to force resonance back to zero voltage across Cr. The bus can be clamped at zero voltage by turning on both switches in an inverter phase-leg.

One drawback of the resonant dc link is the increased voltage or current imposed on the power devices. Auxiliary clamp circuits have been implemented to minimize this drawback (Divan and Skibinski, 1989; He and Mohan, 1991; Simonelli and Torrey, 1994). The control of a resonant link inverter is complicated by the simultaneous need to manage energy flow to the load while managing energy in the resonant link.

Snubber Circuits and Clamps. Snubber circuits are used to control the voltage across and the current through a controllable switch as that device is turning on or off. A complete snubber will typically limit the rate of rise in current as the device is turning on and limit the rate of rise in voltage as the device is turning off. Additional circuit components are used to accomplish this shaping of the switching locus. Figure 13.4.3 shows three snubber circuits that are used with inverter legs, one of which has auxiliary switches (McMurray, 1987; McMurray, 1989; Undeland, 1976).

FIGURE 13.4.2 A basic resonant dc link inverter system.


FIGURE 13.4.3 A number of snubber circuits for an inverter phase-leg: (a) a McMurray snubber (McMurray, 1987); (b) a resonant snubber with auxiliary switches (McMurray, 1989); and (c) the Undeland snubber (Undeland, 1976).

Snubber circuits become increasingly important as the power rating of the inverter increases, where the additional cost of small auxiliary devices is justified by improved efficiency and spectral performance.

Clamps differ from snubbers in that a clamp is used only to limit a switch variable, usually the voltage, to some maximum value. The clamp does not dictate how quickly this maximum value is attained. Figure 13.4.4 shows three clamp circuits that are commonly used with inverter legs. The clamp circuits generally become more complex with increasing power level.

Interfacing to Controllable Switches. The interface to the controllable switches within the inverter phase-leg of Fig. 13.4.1 requires careful attention. Power semiconductor devices are generally controlled by manipulating the control terminal, usually relative to one of the power terminals. For example, Fig. 13.4.1 shows insulated gate bipolar transistors (IGBTs) as the controllable switches. These devices are turned on and off through the voltage level applied to the gate relative to the emitter. Because the emitter of the upper IGBT moves around, the control of the upper IGBT must accommodate this movement. The emitter voltage of the upper IGBT moves from the positive side of the dc bus when the IGBT is conducting to the negative side of the dc bus when the lower IGBT is conducting.


FIGURE 13.4.4 Clamp circuits for an inverter phase-leg: (a) low power (~ 50 A); (b) medium power (~ 200 A); and (c) high power (~300 A).

The circuit responsible for turning the power semiconductor on and off as commanded by a controller is usually known as a gate-drive circuit. Some gate-drive circuits incorporate protection mechanisms for overcurrent or overtemperature.

There are a number of approaches that are used to control the semiconductor devices within the inverter. The choice among these approaches is often dictated by the power levels involved, the switching frequency supported by the switches, and the preference of the designer, among other factors. It is possible to purchase high-voltage integrated circuits (HVICs) that perform the level shifting necessary to take a logic signal referenced to the negative side of the dc bus and control the upper controllable switch in the phase-leg. This approach is generally limited to applications where the controllable switches do not require a negative bias to hold them in the blocking state. High-frequency applications may use transformer-coupled gate drives that are insensitive to the common-mode voltage between the primary and secondary. This approach runs into problems at lower frequencies because the size of the transformer core begins to get large. High-power applications may use optocouplers to optically couple the control information to the gate drive, where the gate drive is supported by an isolated power supply. Often the power supply is the dominant factor in the overall cost of the gate drive.

Figure 13.4.5a shows how an HVIC is interfaced to a phase-leg. The capacitors are used as local power supplies for the upper and lower gate drives. In this implementation, the upper capacitor is charged through the diode while the lower switch is conducting. Figure 13.4.5b shows a transformer-coupled gate drive for a high-frequency application (International Rectifier). It is important to design a mechanism for resetting the core in a transformer-coupled gate drive.


FIGURE 13.4.5 Three common approaches to interfacing to controllable switches: (a) the use of an HVIC; (b) the use of transformer coupling; and (c) the use of an optocoupler.

In Fig. 13.4.5b, the core is reset by driving the transformer primary with a bipolar voltage that provides sufficient volt-seconds to drive the transformer into saturation. This need for core reset may place unacceptable limitations on the duty ratio of the switches for some applications. Figure 13.4.5c shows the use of an optocoupler to provide isolation of the control signal going to the gate drive. The isolated power supply required to support the use of the optocoupler is not shown.

SINGLE-PHASE INVERTERS

There are two ways to form a single-phase inverter. The first way is shown in Fig. 13.4.6, where the phase-leg of Fig. 13.4.1 is used to control the voltage applied to one side of the load. The other side of the load is connected to the common node of two voltage sources. The half-bridge inverter of Fig. 13.4.6 applies positive voltage to the load when the upper switch is conducting and negative voltage to the load when the lower switch is conducting. It is not possible for the half-bridge inverter to apply zero voltage to the load.


FIGURE 13.4.6 A single-phase inverter using one phase-leg and two dc voltage sources.

A single-phase inverter is also formed by placing the load between two inverter phase-legs, as shown in Fig. 13.4.7. This circuit is often referred to as a full- or H-bridge inverter. Through appropriate control of the inverter switches, positive, negative, and zero voltage can be applied to the load. The zero voltage state is achieved by having both upper switches, or both lower switches, conducting at the same time. Note that this state requires one switch and one diode to be supporting the load current.
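The four useful H-bridge states just described reduce to a small lookup table. A minimal sketch; the switch naming is an assumption of this sketch, since the text does not label the devices of Fig. 13.4.7 at this point:

```python
V_DC = 1.0  # dc bus voltage, normalized

# Load voltage for each useful switch pair in the H-bridge of Fig. 13.4.7.
# Switch naming is assumed here: S1/S2 are the left leg's upper/lower
# devices and S3/S4 the right leg's upper/lower devices.
H_BRIDGE_STATES = {
    ("S1", "S4"): +V_DC,  # left terminal high, right terminal low
    ("S3", "S2"): -V_DC,  # left terminal low, right terminal high
    ("S1", "S3"): 0.0,    # both upper switches on: zero output
    ("S2", "S4"): 0.0,    # both lower switches on: zero output
}

for pair, voltage in H_BRIDGE_STATES.items():
    print(pair, voltage)
```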

THREE-PHASE INVERTERS

A three-phase inverter is used to support both ∆- and Y-connected three-phase loads. The three-phase inverter topology can be derived by using three single-phase full-bridge inverters, with each inverter supporting one phase of the load. Upon careful examination of the resulting connection of the 12 controllable switches with the load, it is seen that there are six redundant switches because phase-legs are connected in parallel. Elimination of the six redundant switches yields the topology shown in Fig. 13.4.8.

There are six switches in the three-phase inverter topology of Fig. 13.4.8. Considering all combinations of the switch states, there are seven possible voltages that may be applied to the load; the cases of all three upper switches being closed and all three lower switches being closed are functionally indistinguishable. The controller is responsible for applying the appropriate voltage to the load according to the method used for synthesizing the output voltage.

FIGURE 13.4.7 A single-phase inverter using two phase-legs and one dc voltage source.


FIGURE 13.4.8 The three-phase inverter topology.

MULTILEVEL INVERTERS

The three-phase inverter of Fig. 13.4.8 applies one of two voltages to each output load terminal. The output voltage is Vdc when the upper switch is conducting, and it is zero when the lower switch is conducting. Accordingly, this inverter could be called a two-level inverter. Multilevel inverters are based on extending this concept and are becoming the inverter of choice for higher-voltage and higher-power applications. The discussion here focuses on a three-level inverter; extensions to five or more levels can be found in the technical literature (Lai and Peng, 1996; Peng, 2001).

Among their advantages, multilevel inverters allow the synthesis of voltage waveforms that have a lower harmonic content than a two-level inverter for the same switching frequency. This is because each output terminal is switched among at least three voltages, not just two. In addition, the input dc bus voltage can be higher because multiple devices are connected in series to support the full bus voltage.

Figure 13.4.9 shows one phase-leg of a three-level inverter. In the three-level inverter the dc bus is partitioned into two equal levels of Vdc/2. The four controllable switches with antiparallel diodes, S1 through S4, are connected in series to form a phase-leg. In addition, two steering diodes, D5 and D6, are used to support current flow to and from the midpoint of the dc bus. Additional phase-legs would be connected in parallel across the full dc bus.

Operation of the four switches is used to connect the load output terminal to Vdc, Vdc/2, or zero; a compact mapping appears at the end of this subsection. Switches S1 and S2 are used to connect the load to Vdc, switches S2 and S3 are used to connect the load to Vdc/2, and switches S3 and S4 are used to connect the load to zero. While switches S1 and S2 are conducting, diode D6 ensures that the voltage across switch S4 does not exceed Vdc/2. Similarly, when switches S3 and S4 are conducting, diode D5 ensures that the voltage across switch S1 does not exceed Vdc/2. While switches S2 and S3 are conducting, the voltage across both S1 and S4 is clamped at Vdc/2; diode D5 and switch S2 support positive load current, while diode D6 and switch S3 support negative load current.

A three-phase, three-level inverter provides substantially increased flexibility for voltage synthesis over the conventional three-phase inverter of Fig. 13.4.8. The three-phase inverter of Fig. 13.4.8 offers eight switch combinations that support seven different voltage combinations among the three output terminals. A three-phase, three-level inverter offers 27 switch combinations, supporting 19 different voltage combinations among the three output terminals. Further, the increased redundancy of certain voltages provides additional degrees of freedom in designing the voltage synthesis algorithm. These additional degrees of freedom could, for example, be used to minimize the common-mode voltage between the three outputs.

Issues within the design and control of the multilevel inverter include the dynamic balancing of the voltages within each level of the dc bus and the switching patterns needed to best synthesize the desired output voltage. One would expect that symmetric operation of the phase-leg should be sufficient for maintaining balanced voltages across each level of the dc bus. The variation in capacitor values, however, will cause the midpoint of the dc bus to move to a voltage other than Vdc/2 for symmetric load currents.


FIGURE 13.4.9 A phase-leg for a three-level inverter.

This shift in voltage will have repercussions on the synthesis of the output voltage.
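The three-level phase-leg states described above reduce to a small table, and the 27/19 combination counts quoted for the three-phase version can be verified by enumeration. A minimal sketch with a normalized bus:

```python
from itertools import product

V_DC = 1.0  # full dc bus voltage, normalized

# Output level for each conducting pair in the phase-leg of Fig. 13.4.9.
THREE_LEVEL_STATES = {
    ("S1", "S2"): V_DC,      # load tied to the full bus
    ("S2", "S3"): V_DC / 2,  # load tied to the bus midpoint
    ("S3", "S4"): 0.0,       # load tied to the bus return
}

# Three such phase-legs give 3**3 = 27 switch combinations; counting the
# distinct line-voltage pairs (va - vb, vb - vc) reproduces the 19
# different voltage combinations cited above.
levels = tuple(THREE_LEVEL_STATES.values())
combos = list(product(levels, repeat=3))
line_voltages = {(a - b, b - c) for a, b, c in combos}
print(len(combos), len(line_voltages))  # 27 19
```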

VOLTAGE WAVEFORM SYNTHESIS TECHNIQUES

There are three principal ways to synthesize the output voltage waveform in an inverter: harmonic elimination, harmonic cancellation, and pulse-width modulation. The synthesis technique that is applied is generally driven by consideration of the required output quality, the inverter power rating (which is closely tied to the speed of the controllable switches), the computational power of the available controller, and the acceptable cost of the inverter. This subsection reviews some of the common techniques used to synthesize the inverter output voltage.


FIGURE 13.4.10 Elimination of the third harmonic.

Harmonic Elimination. Harmonic elimination implies that the output waveform shape is controlled to be free of specific harmonics through the selection of switch transitions (Patel and Hoft, 1973; Patel and Hoft, 1974). That is, the switches are controlled so that one or more harmonics are never generated. This is often accomplished by notching the output waveform. Examples of harmonic elimination are shown in Figs. 13.4.10 and 13.4.11, which respectively show the elimination of only the third harmonic and the simultaneous elimination of the third and fifth harmonics from the output of a single-phase inverter.

As suggested by Figs. 13.4.10 and 13.4.11, additional switch transitions must be inserted in the output waveform for each harmonic that is to be eliminated. As the number of notches gets very large, the output voltage waveform begins to resemble something that could be produced by pulse-width modulation.

FIGURE 13.4.11 Simultaneous elimination of the third and fifth harmonics: (a) shows the fifth harmonic superimposed on the waveform of Fig. 13.4.10; (b) shows the switch transitions introduced to eliminate the fifth harmonic without reintroducing the third harmonic.


FIGURE 13.4.12 The superposition of two waveforms to cancel the third harmonic.

Harmonic elimination is sometimes referred to as programmed PWM because the switching angles of the output voltage are programmed according to the intended harmonic content (Enjeti, Ziogas, and Lindsay, 1985).

Harmonic Cancellation. Harmonic cancellation uses the superposition of two or more waveforms to cancel undesired harmonics (Kassakian, Schlecht, and Verghese, 1991). Figure 13.4.12 shows how two waveforms that contain the third harmonic may be phase-shifted and superimposed in order to create a waveform that is free of the third harmonic; a brief numerical check appears below, after Fig. 13.4.13. The circuit of Fig. 13.4.13 can be used to synthesize the waveform of Fig. 13.4.12. By combining harmonic cancellation and harmonic elimination, it is possible to create relatively high-quality voltage waveforms. This quality comes at the expense of a more complicated circuit. This additional complexity may be warranted depending on the power level and the specified quality.

Pulse-Width Modulation. Pulse-width modulation (PWM) is a method of voltage synthesis through which high-frequency voltage pulses are applied to the inverter load (Holtz, 1994). The widths of the pulses are made to vary at the desired frequency of the output voltage. Successful application of PWM generally involves a wide frequency separation between the carrier frequency and the modulation frequency. This frequency separation moves the distortion in the output voltage to high frequencies, thereby simplifying the required filtering. This subsection reviews two of the more common techniques used for synthesizing voltage waveforms using modulation.


FIGURE 13.4.13 A circuit capable of synthesizing the waveforms of Fig. 13.4.12.

Sinusoidal PWM. Sinusoidal PWM implies that the pulse widths of the output voltage are distributed sinusoidally. The pulse widths are generally determined by comparing a sinusoidal reference waveform with a triangular waveform. The sinusoidal waveform sets the modulation (output) frequency, and the triangular waveform sets the switching frequency. Sinusoidal PWM is routinely applied to single- and three-phase inverters.

In a single-phase inverter, the implementation of sinusoidal PWM depends on whether or not both phase-legs are operated at high frequency. Referring to Fig. 13.4.7, we see that it is not necessary for both phase-legs to operate at high frequency. We could, for example, operate switches S1 and S2 at high frequency to control the shape of the voltage, while switches S3 and S4 are operated at the frequency of the reference sinusoid to dictate the polarity of the output voltage. One advantage of this approach is that the inverter is more efficient, because only two of the switches are operated at high frequency. Figure 13.4.14 shows how the sinusoidal pulse widths are created by a comparator and a unipolar triangular carrier (Kassakian, Schlecht, and Verghese, 1991).

An alternative approach is to use switches S2 and S4 to control the polarity of the output voltage. While switch S4 is conducting, switches S1 and S2 are used to control the shape of the voltage; similarly, switches S3 and S4 control the shape of the voltage while switch S2 is conducting. This approach tends to equalize the stress on the two phase-legs and can simplify the control logic necessary to implement the PWM.
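The comparator operation is easy to sketch in code. The following is a minimal illustration of the unipolar-carrier scheme, where one leg shapes the voltage and the other sets the polarity; the 50-Hz output, 2-kHz carrier, and 0.8 modulation depth are assumed example values, not figures from the text.

```python
import numpy as np

f_out, f_carrier, m = 50.0, 2000.0, 0.8    # assumed example values
t = np.linspace(0.0, 1.0 / f_out, 40000, endpoint=False)

reference = m * np.sin(2.0 * np.pi * f_out * t)
# Unipolar triangular carrier swinging between 0 and 1.
carrier = 0.5 + (1.0 / np.pi) * np.arcsin(np.sin(2.0 * np.pi * f_carrier * t))

# High-frequency leg shapes the voltage; low-frequency leg sets polarity.
shape = np.where(np.abs(reference) >= carrier, 1.0, 0.0)
polarity = np.sign(np.sin(2.0 * np.pi * f_out * t))
v_out = polarity * shape                   # normalized to the dc bus voltage

# The fundamental of v_out should track the commanded modulation depth m.
b1 = 2.0 * np.mean(v_out * np.sin(2.0 * np.pi * f_out * t))
print(f"fundamental amplitude ~ {b1:.3f} (commanded {m})")
```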


FIGURE 13.4.14 The generation of sinusoidally distributed pulse widths using a unipolar triangular carrier.

A second way of implementing sinusoidal PWM in a single-phase inverter is to operate both phase-legs at high frequency (Vithayathil, 1995). This method of control gives the same output voltage waveform as the high-frequency/low-frequency approach. The basic difference in control structure between the two is that the first method uses a unipolar triangular carrier, while the second uses a bipolar triangular carrier. Figure 13.4.15 shows how the bipolar triangular carrier is used to create the sinusoidally distributed pulse widths.

FIGURE 13.4.15 The generation of sinusoidally distributed pulse widths using a bipolar triangular carrier.

Space-Vector PWM. Space-vector modulation is becoming the standard method for controlling the output voltage of three-phase inverters. The technique bears great similarity to the field-oriented control techniques applied to ac electric machines (Van der Broeck, Skudelny, and Stanke, 1988; Holtz, Lammert, and Lotzkat, 1986; Trzynadlowski and Legowski, 1994). A balanced set of three-phase quantities can be transformed into direct and quadrature components through the transformation

$$\begin{bmatrix} x_d \\ x_q \end{bmatrix} = \sqrt{\frac{2}{3}}\begin{bmatrix} 1 & -1/2 & -1/2 \\ 0 & \sqrt{3}/2 & -\sqrt{3}/2 \end{bmatrix}\begin{bmatrix} x_a \\ x_b \\ x_c \end{bmatrix} \qquad (1)$$
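A direct implementation of Eq. (1) is straightforward. The sketch below is illustrative (the test waveform is an assumed balanced unit-amplitude set); it confirms that a balanced three-phase set maps to a rotating vector of constant magnitude under the power-invariant scaling shown above.

```python
import numpy as np

# Power-invariant abc -> dq transformation of Eq. (1).
K = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                   [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0]])

def abc_to_dq(xa, xb, xc):
    return K @ np.array([xa, xb, xc])

# A balanced three-phase set maps to a rotating vector of constant magnitude.
for wt in np.linspace(0.0, 2.0 * np.pi, 5):
    xa = np.cos(wt)
    xb = np.cos(wt - 2.0 * np.pi / 3.0)
    xc = np.cos(wt + 2.0 * np.pi / 3.0)
    xd, xq = abc_to_dq(xa, xb, xc)
    print(f"wt={wt:5.2f}  |x| = {np.hypot(xd, xq):.4f}")   # sqrt(3/2) for unit phases
```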


FIGURE 13.4.16 The eight space vectors that can be produced by the three-phase inverter of Fig. 13.4.8.

where x is a voltage or current. A similar transformation exists for taking direct and quadrature components back to phase quantities, though space-vector modulation does not need the inverse transformation. Applying Eq. (1) to the three-phase inverter of Fig. 13.4.8, we see that each distinct switch state of the inverter corresponds to a different space vector. The zero vector can be produced by two electrically equivalent topologies: all upper switches conducting or all lower switches conducting. It is important to note that the six nonzero space vectors created by the inverter states have the same magnitude, $\sqrt{2/3}\,V_{dc}$, and are symmetrically displaced. Figure 13.4.16 shows the connection between the three-phase inverter of Fig. 13.4.8 and the generation of the eight space vectors.

Any desired output voltage, up to the magnitude $V_{dc}/\sqrt{2}$, may be synthesized by applying the two adjacent nonzero space vectors and the zero vector in proper proportion. Figure 13.4.17 shows how the desired voltage V* is synthesized from the space vectors V1, V2, and V0. Over one sampling interval, the duty ratios of V1, V2, and V0 are, respectively,

$$d_1 = \frac{\sqrt{2/3}\,|V^*|\sin(60^\circ - \gamma)}{\sqrt{2/3}\,V_{dc}\,\sin 60^\circ} = \frac{\sqrt{2}\,|V^*|\sin(60^\circ - \gamma)}{V_{dc}} \qquad (2)$$

$$d_2 = \frac{\sqrt{2/3}\,|V^*|\sin\gamma}{\sqrt{2/3}\,V_{dc}\,\sin 60^\circ} = \frac{\sqrt{2}\,|V^*|\sin\gamma}{V_{dc}} \qquad (3)$$

$$d_0 = 1 - d_1 - d_2 \qquad (4)$$
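A minimal sketch of the duty-ratio computation of Eqs. (2) through (4) follows; the sector-selection logic and the numerical check at the hexagon boundary are illustrative assumptions, not the book's algorithm.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def svm_duty_ratios(v_mag, angle_rad, v_dc):
    """Duty ratios (d1, d2, d0) per Eqs. (2)-(4).

    `angle_rad` is the reference-vector angle; gamma is measured within
    the 60-degree sector spanned by the two adjacent nonzero vectors.
    """
    sector = int(angle_rad // (np.pi / 3.0)) % 6
    gamma = angle_rad - sector * np.pi / 3.0
    d1 = SQRT2 * v_mag * np.sin(np.pi / 3.0 - gamma) / v_dc
    d2 = SQRT2 * v_mag * np.sin(gamma) / v_dc
    d0 = 1.0 - d1 - d2
    return sector, d1, d2, d0

# At the maximum circular reference |V*| = Vdc/sqrt(2), d0 just reaches
# zero in the middle of the sector (gamma = 30 deg).
v_dc = 400.0
sector, d1, d2, d0 = svm_duty_ratios(v_dc / SQRT2, np.pi / 6.0, v_dc)
print(sector, round(d1, 3), round(d2, 3), round(d0, 3))   # 0 0.5 0.5 0.0
```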


The order in which the space vectors are applied is driven by the desire to minimize the number of switching operations. Careful examination of Figs. 13.4.16 and 13.4.17 reveals that, within any of the six segments delimited by the space vectors, the move from one vector to the next requires changing the state of only one switch. In practice, the switching in adjacent sampling intervals would apply the sequence … |V1|V2|V0|V0|V2|V1| …. Different approaches use different criteria for selecting the best implementation of V0 (Trzynadlowski and Legowski, 1994).

Extension of the space-vector concept to multilevel inverters is possible (Holmes and McGrath, 2001; Tolbert, Peng, and Habetler, 2000). Multilevel inverters offer a substantially greater number of space vectors from which to choose: with three-level inverters, for example, there are 19 distinct space vectors, and additional levels increase the number of space vectors still further. The number of redundant space vectors also increases in multilevel inverters, thereby offering additional degrees of freedom within the voltage-synthesis algorithm. Figure 13.4.18 shows the space vectors that can be created by a three-phase three-level inverter based on the phase-leg of Fig. 13.4.12. The numbers adjacent to each space vector represent the switch configuration of phases a, b, and c, respectively; space vectors with more than one set of numbers can be achieved with any of the switch combinations indicated. Referring to Fig. 13.4.12, a 0 indicates that switches S3 and S4 connect the output terminal to the negative side of the dc bus; a 1 indicates that switches S2 and S3 connect the output terminal to the midpoint of the dc bus; and a 2 indicates that switches S1 and S2 connect the load terminal to the positive side of the dc bus.

FIGURE 13.4.17 The synthesis of voltage V* using space-vector modulation.

FIGURE 13.4.18 The achievable space vectors associated with a three-phase three-level inverter.


FIGURE 13.4.19 The principles of hysteretic current control.

CURRENT WAVEFORM SYNTHESIS TECHNIQUES

While voltage-source inverters always output a voltage waveform, there are many applications in which the details of the voltage waveform are driven by the creation of a current with a specific shape. In this context, the controller determines the state of each inverter switch based on how well the inverter output currents are tracking the commanded output currents. While the PWM techniques of the previous subsection can often be applied to transform a voltage-source inverter into a controlled current source (Brod and Novotny, 1985; Habetler, 1993), there are some additional techniques that are useful in this type of operation. The control techniques described in this subsection are amenable to synthesizing current waveforms with inverters.

Hysteresis and Sliding-Mode Control. Hysteresis and sliding-mode control are very similar in nature. In both approaches, a reference current waveform is established, and the switching of the inverter is tied to the relative location of the actual current and the reference waveform (Brod and Novotny, 1985; Bose, 1990; Slotine and Li, 1991; Torrey and Al-Zamel, 1995). Under hysteretic control, a hysteresis band is introduced around the reference waveform in order to limit the switching frequency. Figure 13.4.19 illustrates the principles of hysteretic control. Sliding-mode control can be implemented in a manner indistinguishable from hysteretic control, or it can be implemented as shown in Fig. 13.4.20, where there is a known upper limit on the switching frequency.

A common problem with hysteretic control is that the switching frequency is not fixed and may vary widely over one cycle of the output. This can complicate the design of filters and may raise reliability concerns relative to the safe operation of the switches.

FIGURE 13.4.20 One method of implementing sliding-mode control.
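A minimal time-domain sketch of hysteretic control follows; the R-L load, dc bus voltage, band width, and reference current are assumed example values. The variable switching frequency discussed above is visible in the reported average.

```python
import numpy as np

# Hysteretic current control of an R-L load fed by a full bridge: the
# bridge applies +Vdc or -Vdc, toggling whenever the current error
# leaves the hysteresis band. All values are assumed examples.
R, L, Vdc = 1.0, 5e-3, 200.0
band = 0.5                          # hysteresis half-width, amperes
dt, t_end = 1e-6, 0.02
f_ref, I_ref = 50.0, 10.0           # 10-A, 50-Hz reference current

i, v = 0.0, +Vdc
switch_count = 0
for k in range(int(t_end / dt)):
    t = k * dt
    ref = I_ref * np.sin(2.0 * np.pi * f_ref * t)
    err = i - ref
    if err > band and v > 0:        # current too high: apply -Vdc
        v, switch_count = -Vdc, switch_count + 1
    elif err < -band and v < 0:     # current too low: apply +Vdc
        v, switch_count = +Vdc, switch_count + 1
    i += dt * (v - R * i) / L       # Euler step of L di/dt = v - R i

print(f"average switching frequency ~ {switch_count / 2 / t_end:.0f} Hz")
```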


Fixing the switching frequency is possible (Kazerani, Ziogas, and Joos, 1991), at the expense of a hysteresis band that changes throughout each cycle of the output.

Predictive Current Regulation. Predictive current regulation is similar to hysteresis and sliding-mode control in the establishment of a reference current and acceptable error bounds (Holtz, 1994; Wu, Dewan, and Slemon, 1991). The predictive controller uses a model of the system, in conjunction with measurements of the system state, to predict how long the next switching state should be maintained so that the actual current remains within the established error bounds. In contrast to hysteresis and sliding-mode control, the predictive controller is always looking ahead one sampling interval into the future.
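As a rough sketch of the prediction step (the first-order R-L load model, names, and values below are illustrative assumptions, not the cited authors' formulations), the controller can estimate the end-of-interval current for each candidate switch state and pick the state whose prediction stays closest to the reference:

```python
def predict_next_state(i_now, i_ref_next, states, R, L, e_back, Ts):
    """Choose the inverter voltage state minimizing the predicted error.

    One-step-ahead prediction based on L di/dt = v - R*i - e_back,
    discretized over one sampling interval Ts (forward Euler).
    """
    best_v, best_err = None, float("inf")
    for v in states:                      # e.g. (+Vdc, 0.0, -Vdc)
        i_pred = i_now + Ts * (v - R * i_now - e_back) / L
        err = abs(i_pred - i_ref_next)
        if err < best_err:
            best_v, best_err = v, err
    return best_v

# Example: pick among the three states of a phase-leg with a midpoint.
v_next = predict_next_state(i_now=9.2, i_ref_next=10.0,
                            states=(200.0, 0.0, -200.0),
                            R=1.0, L=5e-3, e_back=50.0, Ts=100e-6)
print(v_next)
```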

INVERTER APPLICATIONS

This subsection reviews four important applications of inverters: uninterruptible power supplies, motor drives, active power filters, and utility interfaces for distributed generation. Uninterruptible power supplies have become an extremely large market in support of the expanding use of personal computers and other critical loads; these systems are able to support computer operation in the face of unreliable utility power, thereby preventing the loss of data. Motor drives allow adjustable-speed operation of electric motors, thereby providing a better match between the motor output and the power required by the load; the proper application of adjustable-speed drives can result in significant energy savings. The increasing application of active power filters and active power line conditioners reflects increasing harmonic distortion on power systems and the regulatory responses to this distortion. Distributed generation sources (fuel cells, solar photovoltaics, wind turbines, and microturbines) often function as sources of dc power, thereby requiring an inverter to deliver this power to the ac utility grid.

Uninterruptible Power Supplies. An uninterruptible power supply (UPS) uses a battery to provide energy to a critical load in the event of a power system disturbance. There are two basic topologies used in UPS systems, as shown in the block diagrams of Figs. 13.4.21 and 13.4.22 (Mohan, Undeland, and Robbins, 1995). Both single- and three-phase UPS systems are available. In Fig. 13.4.21, the critical load is always supplied through an inverter. This inverter is fed from either the utility or the battery bank, depending on the availability of the utility. The battery is continually charged while the utility is present. This type of UPS provides effective protection of the critical load by isolating it from utility under- and over-voltages. In Fig. 13.4.22, the functions of battery charging and the inverter are combined. While the utility is present, the inverter is run as a controlled rectifier to support battery charging; when the utility fails, the load is supplied from the battery-fed inverter. Because an inverter cannot deliver a larger voltage than that of its dc source, it may be necessary to include a bidirectional dc/dc converter if a low-voltage battery is used in the UPS.

Motor Drives. Motor drives can be found over a very wide range of power levels, from fractional horsepower up to hundreds of horsepower (Bose, 1986; Murphy and Turnbull, 1988).

FIGURE 13.4.21 A block diagram for one configuration of a UPS system.


FIGURE 13.4.22 A block diagram for a second UPS system configuration.

These applications range from very precise motion control to adjustable-speed operation of pumps and compressors for saving energy. In a motor drive, the inverter provides an adjustable-frequency ac voltage to the motor, thereby enabling the motor to operate over a wide range of speeds without derating its torque production. In order to prevent the motor from being pushed into magnetic saturation, the amplitude of the synthesized voltage is usually tied to the output frequency. In the simplest adjustable-speed drives, the ratio of peak output voltage to output frequency is maintained nominally constant, with a small boost at low frequencies to compensate for the resistance of the motor windings. More sophisticated adjustable-speed drives implement sensorless flux-vector control (Bose, 1997).

Active Power Filters. Active power filters are able to provide compensation for the reactive power drawn by linear loads while simultaneously compensating for the harmonic currents drawn by other, nonlinear loads (Gyugyi and Strycula, 1976; Akagi, Kanazawa, and Nabae, 1984; Akagi, Nabae, and Atoh, 1986; Torrey and Al-Zamel, 1995). It is possible to compensate for multiple loads at one point. The basic idea of an active power filter is shown in Fig. 13.4.23. The active power filter is formed by placing an inverter in parallel with the loads for which compensation is needed. The inverter switches are then controlled to force the line current drawn from the utility to be of the desired quality: the inverter draws currents that precisely compensate for the undesired components, reactive or harmonic, in the currents drawn by the loads. In Fig. 13.4.23, iutility is forced to be of the desired quality and phase through the superposition of ifilter and inonlinear loads; with control over ifilter, the utility current can be forced to track its intended shape and amplitude.

FIGURE 13.4.23 A one-line diagram of an active filter system.
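The superposition idea of Fig. 13.4.23 is simple to express in code. In the minimal sketch below (the load waveform is an assumed example, not from the text), the filter current command is the difference between the desired in-phase sinusoidal utility current and the measured load current:

```python
import numpy as np

# Shunt active filter: i_utility = i_load + i_filter, so commanding
# i_filter = i_desired - i_load leaves the utility supplying only the
# in-phase fundamental. The load waveform below is an assumed example.
f = 50.0
t = np.linspace(0.0, 1.0 / f, 2000, endpoint=False)
w = 2.0 * np.pi * f

# Example distorted, lagging load current: fundamental + 5th + 7th.
i_load = (10.0 * np.sin(w * t - np.pi / 6.0)
          + 2.0 * np.sin(5.0 * w * t) + 1.0 * np.sin(7.0 * w * t))

# Real power must still come from the utility: keep the in-phase part
# of the fundamental (10 A * cos(30 deg)) and cancel everything else.
i_desired = 10.0 * np.cos(np.pi / 6.0) * np.sin(w * t)
i_filter_cmd = i_desired - i_load

i_utility = i_load + i_filter_cmd
print(np.allclose(i_utility, i_desired))   # True: purely in-phase current
```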


Distributed Generation. There is ever-increasing interest in the integration of distributed generation sources into the electric utility system. Distributed generation sources include fuel cells, solar photovoltaics, wind energy, hydroelectric, and microturbines, among others. Their application is sometimes driven by the utility, in an effort to use a local resource such as hydroelectric energy, to increase generating capacity without having to increase the capacity of transmission lines, to add generation capacity incrementally without the large capital investment required of a more traditional generating station, or to increase the reliability of the supply for critical customers. Distributed generation sources are also sometimes used by electricity customers to reduce their electricity costs or to increase the reliability of their electricity supply.

Some distributed generation sources naturally provide energy as dc, thereby requiring an inverter to deliver the energy to the utility grid; fuel cells and solar photovoltaics fall into this category. Other distributed generation sources provide energy as ac of variable frequency and amplitude; wind turbines, microturbines, and some hydroelectric systems fall into this category. This ac energy is usually delivered to the utility by first rectifying the variable ac into dc and then using an inverter to provide fixed-frequency, fixed-amplitude ac to the utility grid. In some cases the rectification is performed by an inverter structure in which the flow of energy is from the ac side to the dc side; in this way the phase currents can be controlled far more precisely than would be possible with an uncontrolled rectifier.

A significant issue with the deployment of distributed generation sources is the prevention of a situation known as islanding. An inverter that is unable to detect the presence or absence of the utility may continue to feed a section of the utility system even after the utility has taken action to de-energize that section, creating a serious safety issue for any utility workers who may be working on that part of the system. For this reason, any inverter designed to interact with the ac utility must include anti-islanding controls that actively and continuously verify the presence of the larger ac utility system. Techniques for accomplishing this are described in Stevens et al. (2000) for photovoltaic systems, but the techniques described are applicable to other energy sources.

Inverters for distributed generation systems are designed to be either utility-interactive or utility-independent; the difference between them is found in the control. Utility-interactive inverters are controlled to behave as current sources, delivering power to the utility at near-unity power factor. Utility-independent inverters behave as voltage sources, where the phase difference between the output voltage and the load current is dictated by the load on the inverter.

REFERENCES

1. Akagi, H., Y. Kanazawa, and A. Nabae, “Instantaneous reactive power compensators comprising switching devices without energy storage components,” IEEE Trans. Ind. Appl., Vol. IA-20, pp. 625–630, 1984.
2. Akagi, H., A. Nabae, and S. Atoh, “Control strategy of active power filters using multiple voltage-source PWM converters,” IEEE Trans. Ind. Appl., Vol. IA-22, pp. 460–465, 1986.
3. Bose, B. K., “An adaptive hysteresis-band current control technique of a voltage-fed PWM inverter for machine drive system,” IEEE Trans. Ind. Electron., Vol. 37, pp. 402–408, 1990.
4. Bose, B. K., ed., “Modern Power Electronics: Evolution, Technology, and Applications,” IEEE Press, 1992.
5. Bose, B. K., ed., “Power Electronics and Variable Frequency Drives,” IEEE Press, 1997.
6. Brod, D. M., and D. W. Novotny, “Current control of VSI-PWM inverters,” IEEE Trans. Ind. Appl., Vol. IA-21, pp. 562–570, 1985.
7. Divan, D. M., “The resonant dc link converter—a new concept in static power conversion,” IEEE Trans. Ind. Appl., Vol. 25, pp. 317–325, 1989.
8. Divan, D. M., and G. Skibinski, “Zero-switching-loss inverters for high-power applications,” IEEE Trans. Ind. Appl., Vol. 25, pp. 634–643, 1989.
9. Enjeti, P. N., P. D. Ziogas, and J. L. Lindsay, “Programmed PWM techniques to eliminate harmonics: A critical evaluation,” IEEE Trans. Ind. Appl., Vol. 26, pp. 302–316, 1985.
10. Gyugyi, L., and E. C. Strycula, “Active ac power filter,” IEEE/IAS Annual Meeting Conference Record, pp. 529–535, 1976.
11. He, J., and N. Mohan, “Parallel resonant dc link circuit—a novel zero switching loss topology with minimum voltage stresses,” IEEE Trans. Power Electron., Vol. 6, pp. 687–694, 1991.


12. Habetler, T. G., “A space vector-based rectifier regulator for ac/dc/ac converters,” IEEE Trans. Power Electron., Vol. 8, pp. 30–36, 1993.
13. Holmes, D. G., and B. P. McGrath, “Opportunities for harmonic cancellation with carrier-based PWM for two-level and multilevel cascaded inverters,” IEEE Trans. Ind. Appl., Vol. 37, pp. 574–582, 2001.
14. Holtz, J., P. Lammert, and W. Lotzkat, “High-speed drive system with ultrasonic MOSFET PWM inverter and single-chip microprocessor control,” IEEE/IAS Annual Meeting Conference Record, pp. 12–17, 1986.
15. Holtz, J., “Pulsewidth modulation for electronic power conversion,” IEEE Proc., Vol. 82, pp. 1194–1214, 1994.
16. International Rectifier, “Transformer-isolated gate driver provides very large duty cycle ratios,” Application Note AN-950. Available at http://www.irf.com/technical-info/appnotes.htm.
17. Kazerani, M., P. D. Ziogas, and G. Joos, “A novel active current waveshaping technique for solid-state input power factor conditioners,” IEEE Trans. Ind. Electron., Vol. 38, pp. 72–78, 1991.
18. Kassakian, J. G., M. F. Schlecht, and G. C. Verghese, “Principles of Power Electronics,” Addison-Wesley, 1991.
19. Krein, P. T., “Elements of Power Electronics,” Oxford University Press, 1998.
20. Lai, J.-S., and F. Z. Peng, “Multilevel converters—a new breed of power converters,” IEEE Trans. Ind. Appl., Vol. 32, pp. 509–517, 1996.
21. McMurray, W., “Efficient snubbers for voltage-source GTO inverters,” IEEE Trans. Power Electron., Vol. PE-2, pp. 264–272, 1987.
22. McMurray, W., “Resonant snubbers with auxiliary switches,” IEEE/IAS Annual Meeting Conference Record, pp. 829–834, 1989.
23. Mohan, N., T. M. Undeland, and W. P. Robbins, “Power Electronics: Converters, Applications and Design,” 2nd ed., John Wiley & Sons, 1995.
24. Murai, Y., and T. A. Lipo, “High frequency series resonant dc link power conversion,” IEEE/IAS Annual Meeting Conference Record, pp. 772–779, 1998.
25. Patel, H. S., and R. G. Hoft, “Generalized techniques of harmonic elimination and voltage control in thyristor inverters: Part I–Harmonic elimination techniques,” IEEE Trans. Ind. Appl., Vol. IA-9, pp. 310–317, 1973.
26. Patel, H. S., and R. G. Hoft, “Generalized techniques of harmonic elimination and voltage control in thyristor inverters: Part II–Voltage control techniques,” IEEE Trans. Ind. Appl., Vol. IA-10, pp. 666–673, 1974.
27. Peng, F. Z., “A generalized multilevel inverter topology with self voltage balancing,” IEEE Trans. Ind. Appl., Vol. 37, pp. 611–618, 2001.
28. Simonelli, J. M., and D. A. Torrey, “An alternative bus clamp for resonant dc link converters,” IEEE Trans. Power Electron., Vol. 9, pp. 56–63, 1994.
29. Slotine, J. J., and W. Li, “Applied Nonlinear Control,” Prentice Hall, 1991.
30. Stevens, J., R. Bonn, J. Ginn, S. Gonzalez, and G. Kern, “Development and testing of an approach to anti-islanding in utility-interconnected photovoltaic systems,” Report SAND 2000-1939, Sandia National Laboratories, August 2000.
31. Tolbert, L. M., F. Z. Peng, and T. G. Habetler, “Multilevel PWM methods at low modulation indices,” IEEE Trans. Power Electron., Vol. 15, pp. 719–725, 2000.
32. Torrey, D. A., and A. M. Al-Zamel, “Single-phase active power filters for multiple nonlinear loads,” IEEE Trans. Power Electron., Vol. 10, pp. 263–272, 1995.
33. Trzynadlowski, A. M., and S. Legowski, “Minimum-loss vector PWM strategy for three-phase inverters,” IEEE Trans. Power Electron., Vol. 9, pp. 26–34, 1994.
34. Undeland, T. M., “Switching stress reduction in power transistor converters,” IEEE/IAS Annual Meeting Conference Record, pp. 383–392, 1976.
35. Van der Broeck, H. W., H. C. Skudelny, and G. Stanke, “Analysis and realization of a pulse width modulator based on space vector theory,” IEEE Trans. Ind. Appl., Vol. IA-24, pp. 142–150, 1988.
36. Vithayathil, J., “Power Electronics: Principles and Applications,” McGraw-Hill, 1995.
37. Wu, R., S. Dewan, and G. Slemon, “Analysis of a PWM ac to dc voltage source converter under predicted current control with fixed switching frequency,” IEEE Trans. Ind. Appl., Vol. 27, pp. 756–764, 1991.


CHAPTER 13.5

AC REGULATORS
Peter Wood

CIRCUITS FOR CONTROLLING POWER FLOW IN AC LOADS

Switch configurations such as those in Fig. 13.5.1 can be used to control ac waveforms. The control may be merely transitory, as in soft starting an induction motor or limiting the inrush current to a transformer, or perpetual, as in the control of resistive heating elements, incandescent lamps, and the reactors of a static reactive volt-ampere (VAR) generator.

The basic single-phase ac regulator is depicted in Fig. 13.5.2, using a triac as the ac switch (although any of the ac switch combinations shown in Fig. 13.5.1 is applicable). The various three-phase arrangements possible are shown in Fig. 13.5.3. The first of these, the wye-connected regulator with a neutral connection (Fig. 13.5.3a), exhibits behavior identical to that of the single-phase regulator, since it is merely a threefold replica of the single-phase version. The delta-connected regulator of Fig. 13.5.3b is also essentially similar in behavior to the single-phase regulator insofar as load voltages and currents are concerned. Because of the delta connection, however, any symmetrical zero-sequence components of the load currents will not flow in the supply lines but will only circulate in the delta-connected loads. The three-phase three-wire wye-switched regulator of Fig. 13.5.3c behaves differently, because two switches must be closed for current to flow in any load. Shown delta-loaded, it may also have the loads wye-connected without a neutral return; in this connection, each ac switch may consist of the antiparallel combination of a thyristor and a diode. The normal wye-delta transformations apply to load voltages and currents. The “British delta” circuit of Fig. 13.5.3d behaves in the same way as a wye-switched regulator in which thyristors with inverse-parallel-connected diodes are used as the switches; it is unique in that only unidirectional current capability is required of its switches.

When the loads are essentially resistive, two methods of control are commonly employed. The technique known as integral-cycle control operates the regulator by keeping the switch(es) closed for some number m of complete cycles of the supply and then keeping the switch(es) open for some number n of cycles. The power delivered to the load(s) is then simply m/(m + n) times the power delivered if the switch(es) are kept permanently closed, for the single-phase, wye-with-neutral, and delta-connected regulators. For the wye-switched (without neutral) and British delta regulators, the power delivered is slightly greater than m/(m + n) times the power at full switch conduction, and dc and unbalanced ac components develop in the supply unless special control techniques are used. These phenomena arise because of the transient conditions inevitably attending the first cycle of operation of these circuits.

An undesirable consequence of integral-cycle control is that the load voltages and currents, and hence the supply currents, contain sideband components having frequencies fs[1 ± pm/(m + n)], where fs is the supply frequency and p is any integer from 1 to infinity. Many of these frequencies are obviously lower than the supply frequency and may create problems for the supply system and other connected loads. The existence of this type of unwanted component in the voltage and current spectra makes integral-cycle control unsuitable for inductive loads (including loads fed by transformers).
Since none of the sidebands is zero sequence, the line currents of an integral-cycle-controlled delta regulator are identical to the properly transposed load currents.
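The sideband locations and the delivered-power fraction are easy to tabulate. In the brief sketch below, the m, n burst pattern and the 60-Hz supply are arbitrary example values:

```python
# Integral-cycle control: m cycles on, n cycles off.
fs = 60.0          # supply frequency, Hz
m, n = 3, 2        # assumed example burst pattern

power_fraction = m / (m + n)
print(f"power delivered = {power_fraction:.2f} of full-conduction power")

# Sideband frequencies fs*(1 +/- p*m/(m+n)); list the low-order ones.
for p in range(1, 4):
    lo = fs * (1 - p * m / (m + n))
    hi = fs * (1 + p * m / (m + n))
    print(f"p={p}: {abs(lo):.0f} Hz and {hi:.0f} Hz")
# Components below fs (e.g., 24 Hz here) can disturb other loads.
```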


FIGURE 13.5.1 Single-phase ac switches.

FIGURE 13.5.2 Single-phase ac regulator.

Integral-cycle control results in unity displacement factor (the cosine of the angle between the fundamental component of supply current and the supply voltage). With purely resistive loads, the power factor of the burden imposed on the supply is $[m/(m+n)]^{1/2}$. This is true because any regulator that forces the load current to flow in the supply while reducing the rms voltage applied to a resistive load presents a power factor equal to the rms load voltage divided by the rms supply voltage.

The other method of control commonly used is termed phase-delay control. It is implemented by delaying the closing of the switch(es) by an angle α (called the firing angle) from each zero crossing of the supply voltage(s) and allowing the switch(es) to open again at each succeeding current zero. The load voltages and currents in this case contain only harmonics of the supply frequency as unwanted components, and, except for the regulator of Fig. 13.5.3c using thyristor-inverse-diode switches, only odd-order harmonics are present. Thus, this control technique can be used with inductive loads.

FIGURE 13.5.3 Three-phase ac regulators.


The general expressions for the load voltages and currents produced by the single-phase regulator are very cumbersome but simplify considerably for the two cases of greatest practical importance: pure resistive and pure inductive loads. For a pure resistive load with a supply voltage V cos ωst, the fundamental component of load voltage is given by

$$V_{DIR} = \left(1 - \frac{\alpha}{\pi} + \frac{\sin 2\alpha}{2\pi}\right) V\cos\omega_s t + \frac{\sin^2\alpha}{\pi}\, V\sin\omega_s t \qquad (1)$$

and the total rms load voltage by

$$V_{RMSR} = \frac{V}{\sqrt{2}}\left(1 - \frac{\alpha}{\pi} + \frac{\sin 2\alpha}{2\pi}\right)^{1/2} \qquad (2)$$

where α is the firing angle measured from the supply-voltage zero crossings. For a pure inductive load it is convenient to define the firing angle α′ = α − π/2, so that at full output α′ = 0. The fundamental voltage component is then given by

$$V_{DIL} = \left(1 - \frac{2\alpha'}{\pi} - \frac{\sin 2\alpha'}{\pi}\right) V\cos\omega_s t \qquad (3)$$

and the total rms voltage by

$$V_{RMSL} = \frac{V}{\sqrt{2}}\left(1 - \frac{2\alpha'}{\pi} - \frac{\sin 2\alpha'}{\pi}\right)^{1/2} \qquad (4)$$

The same relationships apply to the three-phase circuits that are in effect triplicates of the single-phase circuit (Figs. 13.5.3a and 13.5.3b); more complex relationships exist for the remaining three-phase circuits. The use of phase-delay control results in a lagging displacement factor that decreases with increasing firing angle. Maximum displacement factor is obtained at full output, where it equals the power factor of the given load. At a reduced power setting, the power factor is less than the displacement factor; the ratio of the two equals the ratio of the fundamental line current to the total rms line current. This ratio is less than unity because of the presence of harmonic currents. The load voltages and currents and, more importantly, the line currents of phase-delay-controlled regulators have lower total rms distortion than those of integral-cycle-controlled regulators.

Among the circuits shown, the delta regulator of Fig. 13.5.3b is the most beneficial; since the triplen harmonics (those of orders that are integer multiples of 3) in its load currents are zero sequence, they do not flow in the supply lines, and the circuit has both a better power factor and lower total line-current distortion than integral-cycle regulators or phase-delay-controlled wye regulators with neutral. For the wye regulator without neutral, the range of α is 0 to 7π/6 rad, provided fully bilateral switches are used; for the British delta regulator and the wye regulator without neutral using thyristor-inverse-diode switches, the range is 0 to 5π/6 rad. When phase-delay regulators are used with inductive loads, the range of α used for control is reduced, because current zero crossings lag voltage zero crossings and thus absorb part of the delay obtained with resistive loads. The regulators most commonly used with inductive loads are the single-phase, the wye with neutral, and the delta, for which the range of α becomes φ to π, where φ is the load phase angle.
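Equations (2) and (4) are straightforward to evaluate; the short sketch below (firing angles chosen arbitrarily for illustration) tabulates the rms output for both load types:

```python
import numpy as np

def v_rms_resistive(alpha, V=1.0):
    """Eq. (2): total rms load voltage for a resistive load, alpha in rad."""
    return V / np.sqrt(2.0) * np.sqrt(1.0 - alpha / np.pi
                                      + np.sin(2.0 * alpha) / (2.0 * np.pi))

def v_rms_inductive(alpha, V=1.0):
    """Eq. (4): total rms load voltage for an inductive load (pi/2 <= alpha <= pi)."""
    ap = alpha - np.pi / 2.0                # alpha' of the text
    return V / np.sqrt(2.0) * np.sqrt(1.0 - 2.0 * ap / np.pi
                                      - np.sin(2.0 * ap) / np.pi)

for deg in (0, 45, 90, 135, 180):
    print(f"alpha = {deg:3d} deg: resistive V_rms = {v_rms_resistive(np.radians(deg)):.3f} V")
for deg in (90, 135, 180):
    print(f"alpha = {deg:3d} deg: inductive V_rms = {v_rms_inductive(np.radians(deg)):.3f} V")
# Full output (V/sqrt(2) for V = 1) occurs at alpha = 0 (resistive) and
# alpha = pi/2 (inductive); the output falls to zero at alpha = pi in both cases.
```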

STATIC VAR GENERATORS

The delta regulator with purely inductive loading finds extensive use in the static VAR generator (SVG) (Gyugyi et al., 1978, 1980). A basic SVG consists of three delta-connected inductors with phase-controlled switches (π/2 ≤ α ≤ π) and three fixed capacitive branches, which may be delta- or wye-connected.


The capacitive branches draw a fixed current from the supply, leading the voltage by π/2 rad. The fundamental current in the inductors lags the voltage by π/2 rad; its amplitude can be varied, by phase controlling the switches, from the full inductor current to zero. Hence the net reactive volt-ampere burden on the supply can be continuously controlled from the full capacitive VAR, when α = π and the inductor currents are zero, to the difference between the capacitive- and inductive-branch VARs when α = π/2 and full inductor currents flow. This difference is zero if the inductive-branch VARs are made equal to the capacitive-branch VARs, and it becomes an inductive burden if the inductive VARs at full conduction exceed the capacitive VARs. Since the firing angle α can be varied on a half-cycle-to-half-cycle basis, extremely rapid changes in VAR supply (capacitive burden) or demand (inductive burden) can be accomplished.

SVGs can be used to supply shunt reactive compensation on ac transmission and distribution systems, helping system stability and voltage regulation. They can also be used to damp the subsynchronous resonances that often prove troublesome during transient disturbances on series-capacitor-compensated transmission systems, and to reduce the voltage fluctuations (flicker) produced by arc-furnace loads. In the latter application, their ability to accomplish dynamic load balancing is especially valuable.

An SVG that can control reactive power supply or demand can obviously compensate for an unbalanced reactive load. It can also act as a Steinmetz balancer, providing the reactive power exchange between phases necessary to transform an unbalanced resistive load into a perfectly balanced and totally active (real) power load on the supply system. This action can be explained as follows. Suppose a single-phase resistive load is connected between lines A and B of a three-phase system. The current it draws will be in phase with the AB voltage, and thus the A-line current created will lead the A-phase (line-to-neutral) voltage by π/6 rad, and the B-line current will lag the B-phase voltage by π/6 rad. If equal-impedance purely reactive loads are now connected to the BC and CA line pairs, capacitive on BC and inductive on CA, they create currents with the following phase relationships to the phase voltages: in the A line, lagging by 2π/3 rad; in the B line, leading by 2π/3 rad; in the C line, one leading by π/3 rad and the other lagging by π/3 rad. The result in the C line is clearly an in-phase, wholly real current. If the impedances are of appropriate magnitude, their lagging and leading quadrature contributions in the A and B lines, respectively, can be made to cancel the lagging and leading quadrature currents created therein by the single-phase resistive load. The impedance required is √3 times the resistance. Obviously an SVG capable of providing either leading or lagging line-to-line loading on any of the line pairs can be used to balance a single-phase resistive load on any one line pair; by extension, it can be used to balance any unbalanced load. It can respond rapidly to changes in the degree of imbalance and thus dynamically balance the load despite the fluctuating imbalance typically created by an arc furnace.
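The balancing argument above can be checked with complex phasors. In the minimal sketch below, the 1-V line-to-neutral phases and the 1-Ω single-phase load are assumed example values:

```python
import numpy as np

# Phasor check of the Steinmetz balancer: resistive load on A-B, capacitor
# on B-C, inductor on C-A, each reactance sqrt(3) times the resistance.
R = 1.0
Va, Vb, Vc = (np.exp(1j * np.deg2rad(a)) for a in (0.0, -120.0, 120.0))

I_load = (Va - Vb) / R                     # resistive load between A and B
X = np.sqrt(3.0) * R                       # balancing reactance magnitude
I_bc = (Vb - Vc) / (-1j * X)               # capacitor between B and C
I_ca = (Vc - Va) / (1j * X)                # inductor between C and A

# Line currents by superposition (current supplied into each line).
I_A = I_load - I_ca
I_B = -I_load + I_bc
I_C = -I_bc + I_ca

for name, I, V in (("A", I_A, Va), ("B", I_B, Vb), ("C", I_C, Vc)):
    phase = np.angle(I / V, deg=True)      # angle of I relative to its phase voltage
    print(f"line {name}: |I| = {abs(I):.3f} A, angle vs phase voltage = {phase:+.1f} deg")
# All three line currents are equal and in phase with their phase voltages.
```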
In addition to a varying reactive fundamental current, an SVG operating at other than full or zero conduction in its reactive branches generates harmonic currents. Thus at least part of the capacitive branch is usually realized in the form of tuned harmonic filters to limit harmonic injection into the ac supply system. The maximum harmonic amplitudes, relative to the maximum fundamental, are:

Harmonic order:               3rd     5th     7th     9th    11th    13th
Maximum amplitude, percent:   13.8    5.05    2.59    1.57    1.05    0.752
with diminishing amplitudes for the higher-order components. When the SVG is in balanced operation, the triplen harmonics (3rd and 9th in the table above) do not flow in the supply, being zero sequence. When operation is unbalanced in order to balance an unbalanced real load, positive- and negative-sequence components of the triplen harmonics develop and do flow in the supply unless filtering is provided for them.


REFERENCES

Gyugyi, L., R. A. Otto, and T. H. Putnam, “Principles and applications of static, thyristor-controlled shunt compensators,” IEEE Trans. Power Apparatus and Systems, Vol. PAS-97, No. 5, 1978.
Gyugyi, L., and E. C. Strycula, “Active ac power filter,” IEEE Ind. Appl. Soc. Annual Meeting Rec., pp. 529–535, 1976.
Gyugyi, L., and E. R. Taylor, “Characteristics of static, thyristor-controlled shunt compensators for power transmission system applications,” IEEE Trans. Power Eng. Soc., Vol. F 80, p. 236, 1980.


SECTION 14

PULSED CIRCUITS AND WAVEFORM GENERATION

Pulsed circuits and waveform generation are very important to testing and identification in a whole range of electrical and electronic circuits and systems. There are essentially two types of such networks: passive networks and active wave-shaping networks (including those implemented digitally). Passive circuits divide into linear and nonlinear. Linear passive circuits are most commonly single-pole RC and RL networks; nonlinear networks are usually designed around diodes, with or without capacitors and inductors.

A common element used in waveform generation is the switch. Mechanical switches are cleaner, giving better electrical characteristics, but they have serious limitations. Electronic switches can be compensated so that they approach mechanical switches without serious limitations such as contact bounce; in addition, electronic switches can be made smaller and are able to work at much higher frequencies.

Active networks are either analog or digital. Analog networks have been in use for a long time and still have many practical uses. Digital networks have advantages, especially in the areas of noise, speed, and accuracy, and have successfully replaced analog networks in most applications. C.A.

In This Section:

CHAPTER 14.1 PASSIVE WAVEFORM SHAPING 14.5
LINEAR PASSIVE NETWORKS 14.5
NONLINEAR-PASSIVE-NETWORK WAVESHAPING 14.12

CHAPTER 14.2 SWITCHES 14.15
THE IDEAL SWITCH 14.15
BIPOLAR-TRANSISTOR SWITCHES 14.15
MOS SWITCHES 14.19
TRANSISTOR SWITCHES OTHER THAN LOGIC GATES 14.22

CHAPTER 14.3 ACTIVE WAVEFORM SHAPING 14.25
ACTIVE CIRCUITS 14.25
RC OPERATIONAL AMPLIFIER-INTEGRATOR 14.25
SWEEP GENERATORS 14.25
SAMPLE-AND-HOLD CIRCUITS 14.29
NONLINEAR NEGATIVE FEEDBACK WAVEFORM SHAPING 14.29
POSITIVE FEEDBACK WAVEFORM SHAPING 14.30
INTEGRATED-CIRCUIT FLIP-FLOPS 14.33
SYNCHRONOUS BISTABLE CIRCUITS 14.34
INTEGRATED-CIRCUIT SCHMITT TRIGGERS 14.38
INTEGRATED MONOSTABLE AND ASTABLE CIRCUITS 14.40

CHAPTER 14.4 DIGITAL AND ANALOG SYSTEMS 14.43
INTEGRATED SYSTEMS 14.43
COUNTERS 14.43
SHIFT REGISTERS 14.45
MULTIPLEXERS, DEMULTIPLEXERS, DECODERS, ROMS, AND PLAS 14.46
MEMORIES 14.49
DIGITAL-TO-ANALOG CONVERTERS (D/A OR DAC) 14.53
ANALOG-TO-DIGITAL (A/D) CONVERTERS (ADC) 14.57
DELTA-SIGMA CONVERTERS 14.61
VIDEO A/D CONVERTERS 14.67
FUNCTION GENERATORS 14.71

On the CD-ROM:
Dynamic Behavior of Bipolar Switches

Section References:

1. Ebers, J. J., and J. L. Moll, “Large-signal behavior of junction transistors,” Proc. IRE, December 1954, Vol. 42, pp. 1761–1772.
2. Moll, J. L., “Large-signal transient response of junction transistors,” Proc. IRE, December 1954, Vol. 42, pp. 1773–1784.
3. Glaser, L. A., and D. W. Dobberpuhl, “The Design and Analysis of VLSI Circuits,” Addison-Wesley, 1985.
4. Horowitz, P., and W. Hill, “The Art of Electronics,” Cambridge University Press, 1990.
5. Eccles, W. H., and F. W. Jordan, “A trigger relay utilizing three electrode thermionic vacuum tubes,” Radio Rev., 1919, Vol. 1, No. 3, pp. 143–146.
6. Schmitt, O. H. A., “Thermionic trigger,” J. Sci. Instrum., 1938, Vol. 15, p. 24.
7. Tietze, U., and C. Schenk, “Advanced Electronic Circuits,” Springer, 1978.
8. Masakazu, S., “CMOS Digital Circuit Technology,” AT&T, Prentice Hall, 1988.
9. Stein, K. U., and H. Friedrich, “A 1-mil² single-transistor memory cell in n-silicon gate technology,” IEEE J. Solid-State Circuits, 1973, Vol. SC-8, pp. 319–323.
10. Jespers, P. G. A., “Integrated Converters, D to A and A to D Architectures, Analysis and Simulation,” Oxford University Press, 2001.
11. Van den Plassche, R. J., “Dynamic element matching for high-accuracy monolithic D/A converters,” IEEE J. Solid-State Circuits, December 1976, Vol. SC-11, No. 6, pp. 795–800; Van den Plassche, R. J., and D. Goedhart, “A monolithic 14 bit D/A converter,” IEEE J. Solid-State Circuits, June 1979, Vol. SC-14, No. 3, pp. 552–556.
12. Schoeff, J. A., “An inherently monotonic 14 bit DAC,” IEEE J. Solid-State Circuits, December 1979, Vol. SC-14, pp. 904–911.
13. Tuthill, M. A., “16 bit monolithic CMOS D/A converter,” ESSCIRC Digest of Papers, 1980, pp. 352–353.
14. Caves, J., C. H. Chen, S. D. Rosenbaum, L. P. Sellars, and J. B. Terry, “A PCM voice codec with on-chip filters,” IEEE J. Solid-State Circuits, February 1979, Vol. SC-14, pp. 65–73.
15. McCreary, J. L., and P. R. Gray, “All-MOS charge redistribution analog-to-digital conversion techniques—Part I,” IEEE J. Solid-State Circuits, December 1975, Vol. SC-10, No. 6, pp. 371–379.
16. Chao, K. C.-H., S. Nadeem, W. Lee, and C. Sodini, “A higher order topology for interpolative modulators for oversampling A/D converters,” IEEE Trans. Circuits Syst., March 1990, Vol. 37, No. 3, pp. 309–318.
17. Peterson, J. G., “A monolithic, fully parallel, 8 bit A/D converter,” ISSCC Digest, 1979, pp. 128–129.
18. Song, B. S., S. H. Lee, and M. F. Tompsett, “A 10 bit 15 MHz CMOS recycling two-step A-D converter,” IEEE JSSC, December 1990, Vol. 25, No. 6, pp. 1328–1338.


19. Wegmann, G., E. A. Vittoz, and F. Rahali, “Charge injection in MOS switches,” IEEE JSSC, December 1987, Vol. SC-22, No. 6, pp. 1091–1097.
20. Norsworthy, S. R., R. Schreier, and G. C. Temes, “Delta-Sigma Data Converters: Theory, Design and Simulation,” IEEE Press, 1997.


CHAPTER 14.1

PASSIVE WAVEFORM SHAPING
Paul G. A. Jespers

LINEAR PASSIVE NETWORKS

Waveform generation is customarily performed in active nonlinear circuits. Since passive networks, linear as well as nonlinear, enter into the design of pulse-forming circuits, this survey starts with the study of the transient behavior of passive circuits. Among linear passive networks, the single-pole RC and RL networks are the most widely used. Their transient behavior has a broad field of applications, since the responses of many complex higher-order networks are dominated by a single pole; i.e., their response to a step function is very similar to that of a first-order system.

Transient Analysis of the RC Integrator

The step-function response of the RC circuit shown in Fig. 14.1.1a, after closing of the switch S, is given by

$$V(t) = E\left[1 - \exp(-t/T)\right] \qquad (1)$$

where T = RC is the time constant. The inverse of T is called the cutoff pulsation ω0 of the circuit. The Taylor-series expansion of Eq. (1) yields

$$V(t) = E\,\frac{t}{T}\left(1 - \frac{t}{2!\,T} + \frac{t^2}{3!\,T^2} - \cdots\right) \qquad (2)$$

When the values of t are small compared with T, a first-order approximation of Eq. (2) is

$$V(t) \approx E\,t/T \qquad (3)$$

In other words, the RC circuit of Fig. 14.1.1 behaves like an imperfect integrator. The relative error ε with respect to the true integral response is given by

$$\varepsilon = -\frac{t}{2!\,T} + \frac{t^2}{3!\,T^2} - \frac{t^3}{4!\,T^3} + \cdots$$

The theoretical step-function response of Eq. (1) and the ideal-integrator output of Eq. (3) are represented in Fig. 14.1.1b. Small values of t with respect to T correspond in the frequency domain (Fig. 14.1.1c) to frequency components situated above ω0, that is, to a transient signal whose spectrum lies to the right of ω0 in the figure. In that case, the difference is small between the response curve of the RC filter and that of an ideal integrator (represented by the −6 dB/octave line in the figure).


FIGURE 14.1.1 (a) RC integrator circuit; (b) voltage vs. time across capacitor; (c) attenuation vs. angular frequency.

FIGURE 14.1.2 (a) RC differentiator circuit; (b) voltage across resistor vs. time; (c) attenuation vs. angular frequency.

The circuit shown in Fig. 14.1.1a thus approximates an integrator, provided either of the following conditions is satisfied: (1) the time under consideration is much smaller than T, or (2) the spectrum of the signal lies almost entirely above ω0.

Transient Analysis of the RC Differentiator

When the resistor and the capacitor of the integrator are interchanged, the resulting circuit (Fig. 14.1.2a) is able to differentiate signals. The step-function response (Fig. 14.1.2b) of the RC differentiator is given by

$$v(t) = E\exp(-t/T) \qquad (4)$$

The time constant T is again equal to the product RC, and its inverse ω0 represents the cutoff of the frequency response of the circuit. As the values of t become large compared with T, the step-function response becomes more like a sharp spike; i.e., it increasingly resembles the delta function. The response differs from the ideal delta function, however, because both its amplitude and its duration are always finite. The area under the exponential pulse, equal to ET, is the important quantity in applications where such a signal is generated to simulate a delta function, as in the measurement of the impulse response of a system. These considerations can be carried over to the frequency domain (Fig. 14.1.2c).
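A short numerical sketch (the 1-V step and 1-ms time constant are assumed component values) evaluates Eqs. (1), (3), and (4) and confirms that the integrator's relative error grows roughly as −t/(2T), as the series for ε predicts:

```python
import numpy as np

# Step responses of the RC integrator, Eq. (1), and differentiator, Eq. (4),
# together with the small-t integrator approximation of Eq. (3).
E, T = 1.0, 1.0e-3                      # assumed 1-V step, 1-ms time constant
t = np.linspace(0.0, 0.2 * T, 5)        # look well below t = T

v_int = E * (1.0 - np.exp(-t / T))      # Eq. (1)
v_ideal = E * t / T                     # Eq. (3): ideal integral of the step
v_diff = E * np.exp(-t / T)             # Eq. (4)

for tk, vi, va, vd in zip(t, v_int, v_ideal, v_diff):
    err = 0.0 if va == 0 else (vi - va) / va
    print(f"t/T = {tk/T:4.2f}: integrator {vi:.4f} V "
          f"(ideal {va:.4f}, rel. error {err:+.1%}), differentiator {vd:.4f} V")
```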


FIGURE 14.1.3 RL current-integrator circuit, the dual of the circuit in Fig. 14.1.1a.

FIGURE 14.1.5 RL voltage integrator.


FIGURE 14.1.4 RL current-differentiator circuit, the dual of the circuit in Fig. 14.1.2a.

FIGURE 14.1.6 RL voltage differentiator.

Transient Analysis of RL Networks

Circuits involving a resistor and an inductor are also often used in pulse formation. Since integration and differentiation are related to the functional properties of first-order systems rather than to the topology of particular circuits, RL networks may perform the same functions as RC networks. The duals of the circuits represented in Figs. 14.1.1 and 14.1.2 are shown in Figs. 14.1.3 and 14.1.4, respectively, and exhibit identical functional properties. In the first case, the current in the inductor increases exponentially from zero to I with a time constant equal to L/R, while in the second case it drops exponentially from the initial value I to zero, with the same time constant. Similar behavior can be obtained for voltage instead of current by changing the circuit of Fig. 14.1.3 to that of Fig. 14.1.5 and that of Fig. 14.1.4 to Fig. 14.1.6, respectively. This duality applies also to the RC case.

Compensated Attenuator

The compensated attenuator is a widely used network, e.g., as an attenuator probe used in conjunction with oscilloscopes. The compensated attenuator (Fig. 14.1.7) is designed to perform the following functions:

1. To provide remote sensing with a very high input impedance, thus producing minimum perturbation of the circuit under test.
2. To deliver a signal to the receiving end (usually the input of a wide-band oscilloscope) that is an accurate replica of the signal at the input of the attenuator probe.

These conditions can be met only by introducing substantial attenuation of the signal being measured, but this is a minor drawback, since adequate gain to compensate for the loss is usually available. Diagrams of two types of oscilloscope attenuator probes, similar to the circuit of Fig. 14.1.7, are given in Fig. 14.1.9. In both cases, the capacitance of the coaxial cable parallels the input capacitance of the receiving end; Cp represents the sum of both capacitances. The shunt resistor Rp has a high value, usually 1 MΩ, while the series resistor Rs is typically 9 MΩ. The dc attenuation ratio of the attenuator probe therefore is 1:10, while the input impedance of the probe is 10 times that of the receiver.

At high frequencies the parallel and series capacitors Cp and Cs play the same role as the resistive attenuator. Ideally these capacitors should be kept as small as possible to preserve a high input impedance even at high frequencies. Since it is impossible to reduce Cp below the capacitance of the coaxial cable, there is no alternative but to insert the appropriate value of Cs to achieve a constant attenuation ratio over the required frequency band.

FIGURE 14.1.7 Compensated attenuator circuit.


FIGURE 14.1.8 Voltage vs. time responses of attenuator, showing correctly compensated condition at K = 1.

In consequence, as the frequency increases, the nature of the attenuator changes from resistive to capacitive. The attenuation ratio remains unaffected, however, and no signal distortion is produced. The condition that ensures a constant attenuation ratio is

$$R_p C_p = R_s C_s \qquad (5)$$

The step-function response of the compensated attenuator, illustrated in Fig. 14.1.8, clearly shows how distortion occurs when the above condition is not met. The output voltage V(t) of the attenuator is given by

$$V(t) = \frac{C_s}{C_s + C_p}\left\{1 - (1 - K)\left[1 - \exp\left(-\frac{t}{T}\right)\right]\right\} E \qquad (6)$$

where K represents the ratio of the resistive attenuation factor to the capacitive attenuation factor,

$$K = \frac{R_p/(R_p + R_s)}{C_s/(C_p + C_s)}$$

and

$$T = (R_p \,\|\, R_s)(C_s + C_p) \qquad (7)$$

The || sign stands for the parallel combination of two elements; in the present case Rp || Rs = RpRs/(Rp + Rs). Only when K is equal to 1, in other words when Eq. (5) is satisfied, will no distortion occur, as shown in Fig. 14.1.8. In all other cases there is a difference between the initial amplitude of the step-function response (which is controlled by the attenuation ratio of the capacitive divider) and the steady-state response (which depends on the resistive divider only). A simple adjustment to compensate the attenuator consists of trimming one capacitor, either Cp or Cs, to obtain the proper step-function response. Adjustments of this kind are provided in attenuators like those shown in Fig. 14.1.9.

Compensated attenuators may be placed in cascade to achieve variable levels of attenuation. The conditions imposed on each cell are those enumerated above, but an additional requirement is introduced, namely constant input impedance. This leads to a structure differing from the simple compensated attenuator, as shown in Fig. 14.1.10. The resistances Rp and Rs must be chosen so that the input resistance is kept constant and equal to R. The capacitor Cs is adjusted to compensate the attenuator, while Cp provides the required additional capacitance to make the input susceptance equal to that of the load.
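Returning to Eq. (6), the compensation condition can be exercised directly. The sketch below uses assumed component values consistent with the 1-MΩ/9-MΩ probe mentioned above and compares a correctly compensated probe with over- and undercompensated ones:

```python
import numpy as np

# Step response of the compensated attenuator, Eq. (6), for a 10:1 probe.
# Component values are assumed examples: Rp = 1 Mohm, Rs = 9 Mohm.
E = 1.0
Rp, Rs = 1e6, 9e6
Cp = 100e-12                              # cable plus scope input capacitance

def response(Cs, t):
    K = (Rp / (Rp + Rs)) / (Cs / (Cp + Cs))
    T = (Rp * Rs / (Rp + Rs)) * (Cs + Cp)
    return Cs / (Cs + Cp) * (1.0 - (1.0 - K) * (1.0 - np.exp(-t / T))) * E

t = np.array([0.0, np.inf])               # initial and final values
for Cs in (Cp / 9.0, 0.8 * Cp / 9.0, 1.2 * Cp / 9.0):
    v0, vinf = response(Cs, t)
    print(f"Cs = {Cs*1e12:5.2f} pF: V(0) = {v0:.4f}, V(inf) = {vinf:.4f}")
# Correct compensation (Cs = Cp*Rp/Rs, per Eq. (5)) gives V(0) = V(inf) = E/10;
# over- or undercompensation produces the distorted responses of Fig. 14.1.8.
```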


FIGURE 14.1.9 Coaxial-cable type of attenuator circuit: (a) series adjustment; (b) shunt adjustment.

Periodic Input Signals

Repetitive transients are typical input signals to the majority of pulsed circuits. In linear networks there is no difficulty in predicting the response of circuits to a succession of periodic step functions, alternately positive and negative, since the principle of superposition holds. We restrict our attention here to two simple cases, the square-wave response of an RC integrator and of an RC differentiator.

Figure 14.1.11 represents, at the left, the buildup of the response of the RC integrator, assuming that the period t of the input square wave is smaller than the time constant T of the circuit. On the right in the figure the steady-state response is shown. The triangular waveshape represents a fair approximation to the integral of the input square wave. The triangular wave is superimposed on a dc pedestal of amplitude E/2. Higher repetition rates of the input reduce the amplitude of the triangular wave without affecting the dc pedestal. When the frequency of the input square wave is high enough, the dc component is the only remaining signal; i.e., the RC integrator then acts like an ideal low-pass filter.

FIGURE 14.1.10 Compensated attenuator suitable for use in cascaded circuits.

A similar presentation of the behavior of the RC differentiator is shown in Fig. 14.1.12a and b. The steady-state output in this case is symmetrical with respect to the zero axis because no dc component can flow through the series capacitor. When, as shown in Fig. 14.1.12b, no overlapping of the pulses occurs, the steady-state solution is obtained from the first step.
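The following time-stepped Python sketch illustrates the integrator behavior just described: the output builds up toward a pedestal of E/2 with a small triangular ripple riding on it. All component values are illustrative assumptions.

```python
# A minimal forward-Euler sketch of the RC integrator driven by a 0-to-E
# square wave whose period is ten times shorter than RC. Values are
# illustrative, not from the handbook.
E = 1.0            # input square wave toggles between 0 and E
R, C = 10e3, 1e-6  # time constant RC = 10 ms
period = 1e-3      # input period, much shorter than RC
dt = period / 1000
n_periods = 100
v = 0.0
vmin = vmax = None

for step in range(int(n_periods * period / dt)):
    t = step * dt
    vin = E if (t % period) < period / 2 else 0.0
    v += (vin - v) * dt / (R * C)        # dv/dt = (vin - v)/(RC)
    if t > (n_periods - 1) * period:     # sample the last period only
        vmin = v if vmin is None else min(vmin, v)
        vmax = v if vmax is None else max(vmax, v)

print(f"steady-state output: min = {vmin:.4f} V, max = {vmax:.4f} V")
print(f"midpoint ~ {0.5*(vmin+vmax):.4f} V (pedestal E/2 = {E/2} V)")
```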


FIGURE 14.1.11 RC integrator with square-wave input of period smaller than RC: (a) initial buildup; (b) steady state.

FIGURE 14.1.12 RC differentiator with square-wave input: (a) period of input signal smaller than RC; (b) input period longer than RC.

FIGURE 14.1.13 RC pulse-generator circuit with large series resistance R1.

FIGURE 14.1.14 Coaxial-cable version of RC pulse generator.


FIGURE 14.1.15 Use of mercury-wetted switch contacts in coaxial pulse generator.

Pulse Generators

The step function and the delta function (Dirac function) are widely used to determine the dynamic behavior of physical systems. Theoretically the delta function is a pulse of infinite amplitude and infinitesimal duration but having a finite area (the product of amplitude and time). In practice the question of the equivalent physical impulse arises. The answer involves the system under consideration as well as the impulse itself.

The spectrum of the delta function has a constant amplitude over the whole frequency range from zero to infinity. Other signals of finite area (amplitude × time) have different spectral distributions. On a logarithmic scale of frequency, the spectrum of any finite-area transient signal tends to be constant between zero and a cutoff frequency that depends on the shape of the signal. The shorter the duration of the signal, the wider the constant-amplitude portion of the spectrum. If such a signal is used in a system whose useful frequency band is located below the cutoff frequency of the signal spectrum, the system response is indistinguishable from its delta impulse response. Any transient signal with a finite area, whatever its shape, can thus be considered as a delta function relative to the given system, provided that the flat portion of its spectrum embraces the whole of the system's useful frequency range. A measure of the effectiveness of a pulse serving as a delta function is given by the approximation of useful spectrum bandwidth B = 1/t, where t represents the midheight duration of the pulse.

Very short pulses are used in various applications in order to measure delta-function responses. In the field of radio interference, for instance, the basic response curve of the CISPR receiver* is defined in terms of its response to regularly repeated pulses. In this case, the amplitude of the uniform portion of the pulse spectrum must be calibrated; i.e., the area under the pulse must be a known constant which is a function of a limited number of circuit parameters. The step-function response of an RC differentiator provides such a convenient signal. Its area is given by the amplitude of the input step multiplied by the time constant RC of the circuit. Moreover, since the signal is exponential in shape, its −3-dB spectrum bandwidth is equal to 1/RC. In the circuit of Fig. 14.1.13, R1 is much larger than R; when the switch S is open, the capacitor charges to the voltage E of the dc source. When the switch is closed, the capacitor discharges through R, producing an exponential signal of known amplitude and duration (known area).

A circuit based on the same principle is shown in Fig. 14.1.14. Here the coaxial line plays the role of the energy-storage source. If the line is lossless, its characteristic impedance is given by R0, the propagation delay is equal to t, and the Laplace transform of the voltage drop across R is

V(p) = \frac{E}{p}\left[1 + \frac{R_0}{R}\coth(pt)\right]^{-1}        (8)

When the line is matched to the load (R = R0), Eq. (8) reduces to

V(p) = \frac{E}{2p}\left(1 - e^{-2pt}\right)        (9)

which indicates that V(t) is a square wave of amplitude E/2 and duration 2t. The area of the pulse is equal to the product of E and the time constant t. Both quantities can be kept reasonably constant. The bandwidth is larger than that of an exponential pulse of the same area (Fig. 14.1.13) by a factor of about π. Very-wide-bandwidth pulse generators based on this principle use a coaxial mercury-wetted switch built into the line (Fig. 14.1.15) to achieve low standing-wave ratios. A bandwidth of several GHz can be obtained in this manner. In coaxial circuits, any impedance mismatch causes reflections to occur at both ends of the line, replacing the desired square-wave signal by a succession of steps of decreasing amplitude. The cutoff frequency of the


spectrum is thereby lowered, and the shape of the spectrum above its uniform part can be drastically changed. Below the cutoff frequency, however, the spectrum amplitude is given by EtR/R0. When the finite closing time of the switch is taken into account, it can be shown that only the width of the spectrum is reduced, without affecting its value below the cutoff frequency. Stable calibrated pulse generators can also be built using electronic instead of mechanical switches.

*International Electrotechnical Commission (IEC), "Spécification de l'appareillage de mesure CISPR pour les fréquences comprises entre 25 et 300 MHz," 1961.
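As a numerical illustration of the spectra discussed above, the following Python sketch compares the magnitude spectrum of an exponential pulse with that of an E/2, 2t square pulse of the same area. The values are illustrative assumptions.

```python
# Sketch: magnitude spectra of two equal-area pulses. Both are flat and equal
# to the pulse area E*tau at low frequency; the square pulse stays flat over a
# wider band before its first null.
import math

E, tau = 1.0, 1e-9          # amplitude and time constant / half-duration, s
area = E * tau              # common pulse area

def exp_mag(f):             # |spectrum| of E*exp(-t/tau), t >= 0
    return area / math.sqrt(1 + (2 * math.pi * f * tau) ** 2)

def sq_mag(f):              # |spectrum| of an E/2 pulse lasting 2*tau
    x = 2 * math.pi * f * tau
    return area if x == 0 else area * abs(math.sin(x) / x)

for f in (0.0, 1e6, 1e8, 1 / (2 * math.pi * tau)):
    print(f"f = {f:10.3e} Hz   exp: {exp_mag(f):.3e}   square: {sq_mag(f):.3e}")
# At f = 1/(2*pi*tau) the exponential pulse is already 3 dB down while the
# square pulse has lost far less: its useful band is roughly pi times wider.
```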

NONLINEAR-PASSIVE-NETWORK WAVESHAPING

Nonlinear passive networks offer wider possibilities for waveshaping than linear networks, especially when energy-storage elements such as capacitors or inductors are used with nonlinear devices. Since the analysis of the behavior of such circuits is difficult, we first consider purely resistive nonlinear circuits.

Diode Networks without Storage Elements

Diodes provide a simple means for clamping a voltage to a constant value. Both forward conduction and avalanche (zener) breakdown are well suited for this purpose. Avalanche breakdown usually offers sharper nonlinearity than forward biasing but consumes more power. Clamping action can be obtained in many different ways. The distinction between series and parallel clamping is shown in Fig. 14.1.16. Clamping occurs in the first case when the diode conducts, in the second when it is blocked. Since the diode is not an ideal device, it is useful to introduce an equivalent network that takes into account some of its imperfections. The complexity of the equivalent network is a trade-off between accuracy and ease of manipulation. The physical diode is characterized by

I = I_S[\exp(V/V_T) - 1]        (10)

where IS is the leakage current and VT = kT/q, typically 26 mV at room temperature.

FIGURE 14.1.16 Diode clamping circuit and voltage vs. time responses: (a) shunt diode; (b) series diode.

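A quick numerical look at Eq. (10) shows why several hundred millivolts of forward bias are needed before useful currents flow. The saturation-current value below is an assumption consistent with the text ("100 pA or less").

```python
# Hypothetical evaluation of the exponential diode law, Eq. (10).
import math

Is = 1e-12     # saturation (leakage) current, 1 pA (assumed)
VT = 0.026     # thermal voltage kT/q at room temperature, 26 mV

for V in (0.1, 0.3, 0.5, 0.6, 0.7):
    I = Is * (math.exp(V / VT) - 1)
    print(f"V = {V:.1f} V  ->  I = {I:.3e} A")
# Currents reach the milliampere range only beyond roughly 0.5 V, and each
# extra 60 mV multiplies the current tenfold: hence the piecewise-linear
# "small emf" approximation of Fig. 14.1.17.
```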


FIGURE 14.1.17 Actual and approximate current-voltage characteristics of ideal and real diodes.


FIGURE 14.1.18 (a) DC restorer circuit; (b) input signal; (c) output signal.

The leakage current is usually quite small, typically 100 pA or less. Therefore, V must reach at least several hundred millivolts, typically 600 mV or more, for the forward current I to attain the milliampere range. A first approximation of the forward-biased real diode consists therefore of a series combination of an ideal diode and a small emf (Fig. 14.1.17). To take into account the finite slope of the forward characteristic, a better approximation is obtained by inserting a small resistance in series as well.

Diode Networks with Storage Elements

There is no simple theory to represent the behavior of nonlinear circuits with storage elements, such as capacitors or inductors. Acceptable solutions can be found, however, by breaking the analysis of the circuit under investigation into a series of linear problems. A typical example is the dc restorer circuit considered hereafter.

The circuit shown in Fig. 14.1.18 resembles the RC differentiator but exhibits properties that differ substantially from those examined previously. The diode D is first assumed to be ideal to simplify the analysis of the circuit, which is carried out in two steps, i.e., with the diode forward- and reverse-biased. In the first step, the output of the circuit is short-circuited; in the second, the diode has no effect, and the circuit is identical to the linear RC differentiator. When a series of alternately positive and negative steps is applied at the input, no output voltage is obtained after the first positive step. The first positive step causes a large transient current to flow through the diode and charges the capacitor. Since D is assumed to be an ideal short circuit, this current is close to a delta function as long as the internal impedance of the generator connected at the input is zero. In practice, the finite series resistance of the diode must be added to the generator internal impedance, but this does not affect the charging time constant significantly, since it is assumed to be much smaller than the time between the first positive step and the following negative step. This allows the circuit to attain steady-state conditions between steps. When the input voltage suddenly returns to zero, the output voltage undergoes a large negative swing whose magnitude is equal to that of the input step. The diode is then blocked, and the capacitor discharges slowly through the resistor. If the time constant is assumed to be much larger than the period of the


input wave, the output voltage swings back to zero when the second positive voltage step is applied, and only a small current flows through the forward-biased diode to restore the charge lost while the diode was reverse-biased. If the finite resistance of the diode is taken into consideration, a series of short positive exponential pulses must be added to the output signal, as shown in the lower part of Fig. 14.1.18. The first pulse, which corresponds to the initial full charge on the capacitor, is substantially higher than the following pulses, but this is of little importance in the operation of the circuit.

An interesting feature of the dc restorer circuit lies in the fact that although no dc component can flow from input to output, the output signal has a well-defined nonzero dc level, determined only by the amplitude of the negative steps (assuming, of course, that the charge lost between two steps is negligible). This circuit is used extensively in video systems to prevent the average brightness level of the image from being affected by its varying video content. In this case, the reference steps are the line-synchronizing pulses.
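The piecewise-linear analysis described above lends itself to a short simulation. The Python sketch below models the dc restorer with an ideal diode plus a small series resistance rd when conducting and the RC discharge when blocked; all values are illustrative assumptions chosen so that rd·C << period << R·C.

```python
# A piecewise-linear sketch of the dc restorer of Fig. 14.1.18.
E = 1.0               # input square wave: 0 -> E -> 0 ...
R, C, rd = 1e6, 1e-8, 100.0
period = 1e-3
dt = period / 10000
vc = 0.0              # capacitor voltage

for step in range(int(5 * period / dt)):
    t = step * dt
    vin = E if (t % period) < period / 2 else 0.0
    vout = vin - vc
    if vout > 0:                      # diode conducts: fast charge via rd
        vc += (vout / (rd * C)) * dt
    else:                             # diode blocked: slow discharge via R
        vc += (vout / (R * C)) * dt

print(f"capacitor holds ~{vc:.3f} V: the output sits near 0 V on input highs")
print(f"and swings to ~-{vc:.2f} V on input lows, i.e., the dc level is restored")
```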


CHAPTER 14.2

SWITCHES

Paul G. A. Jespers

THE IDEAL SWITCH

An ideal switch is a two-pole device that satisfies the following conditions:

Closed-switch condition. The voltage drop across the switch is zero, whatever the current flowing through the switch may be.

Open-switch condition. The current through the switch is zero, whatever the voltage across the switch may be.

Mechanical switches are usually electrically ideal, but they suffer from other drawbacks; e.g., their switching rate is low, and they exhibit jitter. Moreover, bouncing of the contacts may be experienced after closing, unless mercury-wetted contacts are used. Electronic switches do not exhibit these effects, but they are less ideal in their electrical characteristics.

BIPOLAR-TRANSISTOR SWITCHES

The bipolar transistor approximates an open switch between emitter and collector when its base terminal is open or when both junctions are reverse-biased or even only slightly forward-biased. Inversely, under saturated conditions, the transistor resembles a closed switch with a small voltage drop in series, typically 50 to 200 mV. This drop may be considered negligible in many applications.

Static Characteristics

A more rigorous approach to the transistor static characteristics is based on the Ebers and Moll transport equations

\begin{bmatrix} I_E \\ I_C \end{bmatrix} = I_S \begin{bmatrix} -\left(1 + \dfrac{1}{\beta_F}\right) & 1 \\ 1 & -\left(1 + \dfrac{1}{\beta_R}\right) \end{bmatrix} \begin{bmatrix} \exp(V_E/V_T) - 1 \\ \exp(V_C/V_T) - 1 \end{bmatrix}        (1)

where VE, VC = voltage drops across the emitter and collector junctions, respectively (positive voltages stand for forward bias, negative for reverse bias); VT = kT/q (typically 26 mV at room temperature); IS = saturation current; and βF, βR represent the forward (IC/IB) and reverse (IE/IB) current gains, respectively, with VE > 0, VC < 0 in the first case and VE < 0, VC > 0 in the second.


The saturation current IS governs the leakage current flowing through the transistor under blocked conditions. It is always exceedingly small, and since it usually amounts to 10⁻¹⁴ or 10⁻¹⁵ A, it is difficult to measure. A standard procedure is to plot the collector current on a log scale versus the emitter forward bias VE on a linear scale. To find the saturation current, one extrapolates the part of the curve that approximates a straight line with a slope of 60 mV per decade to its intercept with the vertical axis, for which VE = 0. The saturation current can also be obtained with the emitter and collector terminals permuted. The current gains βF and βR can be evaluated by means of the same experimental setup; an additional ammeter is required to measure IB.

It is common practice to rewrite Eq. (1) so that the emitter and collector currents are expressed as functions of

I_F = I_S[\exp(V_E/V_T) - 1]        (2)

and

I_R = I_S[\exp(V_C/V_T) - 1]        (3)

With these definitions Eq. (1) can be expressed as

I_C = I_F - I_R - I_R/\beta_R        (4)

I_E = -I_F - I_F/\beta_F + I_R        (5)

Hence the Ebers and Moll transport model is obtained. This is illustrated by the equivalent circuit of Fig. 14.2.1. The leakage currents of the two diodes D1 and D2 are given respectively by IS/βF and IS/βR. With this model, it is possible to compute the currents flowing through the transistor under any circumstance. For instance, if the collector junction is reverse-biased and a small positive bias of, for example, +100 mV is established across the emitter junction, the reverse current IR is almost equal to −IS and IF is equal to IS exp (100/26), or 46.8 IS. Hence, from Eqs. (4) and (5), the collector current is found to be approximately 48 IS, and the emitter current is approximately the same with opposite sign. With the assumption that IS is equal to 10 pA, both IC and IE are essentially negligible. A fortiori, IB as derived from Eqs. (4) and (5) is also small:

I_B = I_F/\beta_F + I_R/\beta_R        (6)

FIGURE 14.2.1 Equivalent circuit of the Ebers and Moll transport model of the bipolar transistor.

To drive current through the transistor, the voltage across one of the two junctions must reach at least about 0.5 V, according to

V_E = V_T \ln(I_F/I_S)        (7)

or

V_C = V_T \ln(I_R/I_S)        (8)

derived from Eqs. (2) and (3).

The transistor operates in the saturation region when it approximates a closed switch. The voltage drop between the emitter and collector terminals is then given by

V_{CE,\mathrm{sat}} = V_T \ln \frac{n + (n + \beta_F)/\beta_R}{n - 1}        (9)

where n represents the ratio βF IB/IC, assumed larger than 1. For most transistors, this voltage drop lies between 50 and 200 mV. The inevitable resistance in series with the collector increases this voltage by a few tens of millivolts.


An interesting situation arises when IC is almost equal to zero, e.g., when the bipolar transistor is used to set the potential across a capacitor. In this case, Eq. (9) becomes

V_{CE,\mathrm{sat}} = V_T \ln(1 + 1/\beta_R)        (10)

Similarly, with the emitter and collector terminals interchanged, the voltage drop is given by

V_{EC,\mathrm{sat}} = V_T \ln(1 + 1/\beta_F)        (11)

Since βF is normally much larger than βR, VEC,sat may be as small as about 1 mV provided βF is at least equal to 25. Consequently, inverted bipolar transistors are switches with a very small series voltage drop, provided that the current flowing through the transistor is kept small.

The two situations examined so far (open or closed switch) correspond in Fig. 14.2.2, respectively, to IB = 0 and to the part of the curves closely parallel to the collector-current axis. The fact that all the curves have nearly the same vertical shape means that the series resistance of the saturated transistor is quite small. Since the characteristics do not coincide with the vertical coordinate axis, a small series emf must also be considered, as previously stated.

FIGURE 14.2.2 Typical common-emitter characteristics of the bipolar transistor. VA is called the Early voltage.

A third region exists where the transistor plays the role of a current switch instead of a voltage switch. It concerns the switching from blocked conditions to any point within the active region, or vice versa. Conceptually, the transistor may be compared to a controlled current source that is switched on or off. However, because of the Early effect, the current is a function of the collector-to-emitter voltage. The Ebers and Moll model is inappropriate to describe this effect. A better expression for IC is

I_C = I_S \exp(V_E/V_T)\left(1 + \frac{V_{CE}}{V_A}\right)        (12)

where VA is the Early voltage. Equation (12) is illustrated in Fig. 14.2.2. The finite output conductance of the transistor is given by IC/VA.
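A minimal Python sketch of Eqs. (2) through (11) follows. The gain values βF and βR are assumptions for illustration; the bias example reproduces the +100 mV case worked in the text.

```python
# Sketch of the Ebers-Moll transport model with illustrative parameters.
import math

Is, VT = 10e-12, 0.026       # 10 pA saturation current, 26 mV, as in the text
bF, bR = 100.0, 2.0          # forward and reverse current gains (assumed)

def currents(VE, VC):
    IF = Is * (math.exp(VE / VT) - 1)        # Eq. (2)
    IR = Is * (math.exp(VC / VT) - 1)        # Eq. (3)
    IC = IF - IR - IR / bR                   # Eq. (4)
    IE = -IF - IF / bF + IR                  # Eq. (5)
    IB = IF / bF + IR / bR                   # Eq. (6)
    return IC, IE, IB

IC, IE, IB = currents(0.100, -5.0)           # example from the text
print(f"IC = {IC/Is:.1f} Is, IE = {IE/Is:.1f} Is  (text: approximately 48 Is)")

n = 5.0                                      # overdrive ratio bF*IB/IC > 1
vce_sat = VT * math.log((n + (n + bF) / bR) / (n - 1))     # Eq. (9)
print(f"VCE,sat = {vce_sat*1e3:.0f} mV (50-200 mV range)")
print(f"inverted-mode drop, Eq. (11): {VT*math.log(1 + 1/bF)*1e3:.2f} mV")
```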

Dynamic Characteristics

The dynamic behavior of bipolar transistors suffers from a drawback called "charge storage in the base," which takes place every time transistors are driven into or out of saturation. The phenomenon is related to the majority carriers that supplement the minority carriers in the base to guarantee neutrality. The question is how to remove these extra carriers when the transistor is supposed to desaturate. Zeroing the base current is not a satisfactory solution, for the majority carriers can then only recombine with minority carriers. This requires a long time, since lifetimes in the base are generally made large in order to maximize the current gain. A better technique is to reverse the polarity of the base current. The larger this current, the more rapidly the majority carriers disappear and the faster the transistor gets out of saturation. Current continues to flow, however, until one of the two junctions becomes reverse-biased (usually the collector junction). Only then does the transistor enter the active region, and the collector current may start to decrease. When all majority carriers are swept away, both junctions are reverse-biased and the impedance of the base contact changes from a low impedance to an open circuit. A quantitative analysis of the desaturation mechanism can be found on the CD-ROM, together with an example.

Charge storage is generally associated with the base of bipolar transistors, although the same phenomenon takes place near the depleted region in the emitter neutral region as well. The reason this is not considered is related to the doping profile in the emitter compared with the base region. The emitter doping is much larger


FIGURE 14.2.3 A Schottky diode D prevents T from going into saturation.

FIGURE 14.2.4 Planar npn transistor and Schottky diode in integrated-circuit form.

than the base doping because this increases the emitter efficiency, which controls the current gain (the emitter efficiency is the ratio of emitter-base current to base-emitter current).

Junction diodes suffer from the same drawback. Since the diffusion length in the less-doped region is much longer than the base width of a bipolar transistor, charge-storage problems should be worse. This is not the case, however, as integrated diodes are generally bipolar transistors with their base and collector terminals shorted, making their charge-storage effects similar to those of bipolar transistors.

In order to reduce the delay between the control signal applied to the base and the moment the collector current begins to decay, several techniques have been developed. Two of these are reviewed hereafter. The first takes advantage of Schottky diodes, which are majority-carrier devices and, therefore, free of charge-storage phenomena. Schottky diodes exhibit a slightly smaller voltage drop under forward bias than junction diodes (of the order of 0.4 V instead of 0.7 V), which is commonly exploited to prevent bipolar transistors from becoming saturated. The idea is illustrated in Fig. 14.2.3, which represents a bipolar transistor whose collector junction is paralleled by a Schottky diode. When the transistor is nearing saturation, the Schottky diode starts conducting before the collector junction does. This prevents the transistor from entering saturation. The base current in excess of what is needed to sustain the actual collector current flows directly to ground through the series combination of the forward-biased Schottky diode and the emitter junction. Figure 14.2.4 shows how the combination of a Schottky diode and a bipolar transistor can be implemented. The Schottky diode consists of the metal contact that overlaps the lightly doped collector, whereas in the base region the metal on the P-type semiconductor reduces to an ohmic contact. This combination is widely used in Schottky logic, a family of fast bipolar logic circuits.

The second way to avoid charge storage is to devise circuits that never operate in saturation. The switch shown in Fig. 14.2.5, which involves two transistors controlled by a pair of square waves with opposite polarities, is a good example. When Q1 is on, Q2 is off, and vice versa. Current is switched either left or right. Although the circuit looks like a differential pair, it operates in a quite different manner: the conducting transistor is in the common-base configuration, while the other is blocked. Since the output current is taken from the collector of the common-base transistor, the output impedance is very large. This means that the switch is a current-mode

FIGURE 14.2.5 A bipolar current switch.


switch instead of a voltage-mode switch. Very short switching times are feasible this way, since neither of the two transistors ever saturates. Emitter-coupled logic (ECL) takes advantage of this circuit configuration.

MOS SWITCHES

Insulated-gate field-effect transistors (IGFETs, also called MOS transistors) and junction field-effect transistors (JFETs) can be used to mimic switches. They resemble an ideal closed switch in series with a linear resistor when "on" and an open switch when "off." The leakage current, however, is larger than with bipolar transistors.

Static Characteristics

When a field-effect transistor is turned on, its characteristics differ substantially from those of a bipolar switch. Since no residual emf in series with the switch is experienced, the transistor is comparable to a resistor whose conductance G is given by

G = \mu C_{ox}\,\frac{W}{L}\,(V_G - V_{T0} - \lambda V)        (13)

where μ is the mobility in the inversion layer, Cox the gate-oxide capacitance per unit area, VG the gate-to-substrate voltage, VT0 the gate threshold voltage under zero bias, V the source or drain voltage, and λ a dimensionless factor that takes into account the so-called substrate effect (the value of λ lies somewhere between 1.3 and 1.5).

The dependence of the conductance G on the source and drain voltages represents a problem that severely impairs the performance of MOS switches. Consider, for instance, a MOS switch in series with a grounded capacitor to implement a simple sample-and-hold circuit. Since the MOS transistor is similar to a resistor when its gate voltage is high, the circuit may be treated as an RC network with a time constant that varies as the reciprocal of the parenthesized difference in Eq. (13). Hence, when the input voltage V equals (VG − VT0)/λ, the conductance becomes equal to zero and the switch reduces to an open circuit. In practice, V must be kept well below this limit to sample the input within a small enough time window. Single MOS switches are therefore not favored. For instance, in logic circuits, where logic 1s and 0s are set by the power supply and ground, respectively, the logic high signal may be substantially corrupted and the speed reduced. CMOS switches are therefore preferred to single MOS switches. A typical CMOS transmission switch is shown in Fig. 14.2.6. It consists of the parallel combination of an

FIGURE 14.2.6 The complementary MOS switch.


N-MOS and a P-MOS transistor controlled by complementary logic signals. The idea is simply to counterbalance the decreasing conductance of the N-MOS transistor, as the input goes from low to high, by the increasing conductance of the P-MOS transistor. Thanks to the parallel combination, the on-conductance is kept large and almost constant for any input voltage. The same holds true for the time constant.

Dynamic Characteristics

MOS transistors are free of charge-storage phenomena, for they are unipolar devices. Their transient behavior is controlled by the parasitic capacitances associated with their gate, source, and drain. Source and drain are reverse-biased junctions, which exhibit parasitic capacitances with respect to the substrate. The gate capacitance relates to the inversion layer and to the regions overlapping the source and drain. These capacitances control the dynamic behavior of the switch in conjunction with the parasitics associated with the elements connected to the MOS transistor terminals.

What happens to the inversion-layer charge when the transistor is switched off is considered hereafter. Since charge cannot simply vanish, it must go somewhere: to the source, to the drain, or to both. In memoryless circuits this generally introduces a short spike that does not affect the performance significantly except at high frequency. In circuits that exhibit memory, like the MOS sampling network discussed earlier, the impact is more serious. The part of the inversion-layer charge left on the capacitive terminal is "integrated," which leads to a dc offset.

The charge-partition problem in memory circuits is illustrated by the simple circuit shown in Fig. 14.2.7, which consists of a MOS switch between two capacitors C1 and C2. We start from the situation where the voltages V1 and V2 across the capacitors are equal and no current is flowing through the transistor, supposed to be "on." As soon as the gate voltage starts to decrease, the charge in the inversion layer tends to divide equally between the MOS terminals, for these are at the same potential. If the capacitors are not identical, a voltage difference starts to build up as soon as charge is being transferred. This causes current to flow in the MOS transistor,
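The Python sketch below applies Eq. (13) to both devices of a transmission gate; all parameter values are illustrative assumptions.

```python
# Sketch of the CMOS transmission-gate conductance: the N-channel term falls
# with input voltage while the P-channel term rises.
beta_n = beta_p = 1e-4    # uCox*W/L for each device, A/V^2 (assumed)
VDD, VT0, lam = 5.0, 0.7, 1.4

def g_nmos(v):            # gate at VDD; conducts while v < (VDD - VT0)/lam
    return max(0.0, beta_n * (VDD - VT0 - lam * v))

def g_pmos(v):            # complementary device, gate at 0 V
    return max(0.0, beta_p * (VDD - VT0 - lam * (VDD - v)))

for v in (0.0, 1.0, 2.5, 4.0, 5.0):
    gn, gp = g_nmos(v), g_pmos(v)
    print(f"Vin = {v:.1f} V  gN = {gn*1e3:.3f} mS  gP = {gp*1e3:.3f} mS  "
          f"sum = {(gn+gp)*1e3:.3f} mS")
# A single N-MOS switch would be an open circuit above (VDD-VT0)/lam = 3.07 V;
# the parallel P-MOS keeps the total conductance nonzero over the whole range.
```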

FIGURE 14.2.7 The inversion-layer charge divides between C1 and C2 after cutoff.


FIGURE 14.2.8 Fraction of the inversion-layer charge left in C2 after cutoff vs. the parameter B defined in Eq. (14).

which tends to reduce the voltage difference between the capacitors. This re-equilibration mechanism holds as long as the gate voltage exceeds the effective threshold voltage, although it grows weaker and weaker as the transistor nears cutoff. When the MOS transistor finally cuts off, a nonzero voltage difference is left over, which may be regarded as an offset. The size of this offset is a function of several factors, including the gate slewing rate. It is obvious that an abrupt gate-voltage step, which does not leave time for re-equilibration, will split the inversion-layer charge equally between the two capacitors, whereas slow cutoff will tend to keep the voltages across the capacitors more alike. The problem is addressed in Ref. 19. The fraction of the total inversion-layer charge that is stored in capacitor C2 versus the parameter B defined below is illustrated in Fig. 14.2.8:

B = (V_{Gon} - V_T)\sqrt{\frac{\beta}{a\,C_2}}        (14)

VGon is the gate voltage prior to switching, VT the effective threshold voltage of the MOS transistor, equal to (VT0 + λVin), β the well-known factor μCoxW/L, and a the gate-voltage slewing rate, defined as (VGon − VT0) divided by the fall time. Notice that fast switching yields small values of B, whereas long switching times lead to large values of B. When B is small, the inversion-layer charge divides equally. Voltage equalization tends to prevail when B is large, as can be seen from the large differences experienced once the ratio C2/C1 departs from one.

Let us consider, for instance, a MOS transistor with β equal to 10⁻⁴ A/V², a gate capacitance CG of 0.1 pF, VGon and VT0 respectively equal to 5 and 0.7 V, a large capacitor C1 to mimic a voltage generator, and a load capacitance C2 equal to 1 pF. For fall times between 1 ps and 1 ns, the factor B varies from 0.021 to 0.626. The offset voltage is large and varies little, from 215 mV down to only 200 mV, since re-equilibration cannot take place in such a short time. A 10-ns fall time reduces the final offset to 125 mV, and 100-ns, 1-μs, and 10-μs fall times yield, respectively, 41, 13, and 4 mV of offset. In any case, these are still large offsets in terms of analog signals. To get smaller offsets the switching times must be very long. Hence, switching noise cannot be avoided as such.

A second important problem is nonlinear distortion. Less time is needed to block the MOS transistor when the input voltage is close to VGon, for the effective threshold voltage then becomes quite large. The amount of charge stored in C2 thus varies with the magnitude of the input signal, and the offset is a nonlinear replica of the input.
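The numerical example above can be reproduced directly from Eq. (14); the short sketch below takes the input near 0 V so that the effective threshold reduces to VT0.

```python
# Recomputing the text's example of Eq. (14) with beta = 1e-4 A/V^2, C2 = 1 pF,
# VGon = 5 V, VT0 = 0.7 V; the slewing rate is a = (VGon - VT0)/t_fall.
import math

beta, C2 = 1e-4, 1e-12
VGon, VT = 5.0, 0.7

for t_fall in (1e-12, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5):
    a = (VGon - VT) / t_fall                      # gate slewing rate, V/s
    B = (VGon - VT) * math.sqrt(beta / (a * C2))  # Eq. (14)
    print(f"t_fall = {t_fall:.0e} s  ->  B = {B:.3f}")
# B ~ 0.021 at 1 ps and ~0.66 at 1 ns, close to the 0.021 and 0.626 quoted in
# the text (which uses the effective threshold VT0 + lambda*Vin): fast edges
# (small B) split the channel charge equally between C1 and C2.
```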


FIGURE 14.2.9 Switching noise nonlinearity can be lessened by means of the transistor S2 in series with the storage capacitor.

This makes nonlinear distortion figures below −70 dB hard to achieve unless a technique such as the one described hereafter is put to use.

In the circuit illustrated in Fig. 14.2.9, the lower end of capacitor C2 is tied to ground by means of a second switch S2. During the acquisition time, both S2 and S1 are conducting. Sampling occurs when the switch S2 opens, shortly before S1 opens. Suppose the switching times of both transistors are short enough to avoid charge re-equilibration. When S2 opens, the charge in its inversion layer divides equally between C2 and ground. When S1 opens, since C2 is already open-ended, the signal-dependent charge of S1 has no other way out than to flow back to the generator. Thus, C2 stores only charge from S2, which is constant since the lower end of C2 is always tied to ground. In effect, one exchanges a signal-dependent offset for a constant offset, which does not impair linearity. This offset, moreover, can easily be compensated by taking advantage of differential architectures, which turn a constant offset into a common-mode signal that is rejected further on.

TRANSISTOR SWITCHES OTHER THAN LOGIC GATES

Transistor switches are extensively used in applications other than logic gates, covering a wide variety of both digital and analog applications. A typical illustration is the circuit converting the frequency of a signal into a proportional current, the so-called diode pump. This circuit (Fig. 14.2.10) consists of a capacitor C, two diodes D1 and D2, and a switch formed by a transistor T1 and a resistor R. The transistor is assumed to be driven periodically by a square-wave source, alternately on and off. When T1 is blocked, the capacitor C charges through the diode D1, while D2 has no effect. As soon as the voltage across C has reached its steady-state value Ecc, T1 may be turned on abruptly. The voltage with respect to ground at point A becomes negative, and D1 is blocked, while D2 is forward-biased. The capacitor thus discharges itself into the load (in Fig. 14.2.10 an ammeter, but it could be any other circuit element that does not exhibit storage), allowing VA to reach 0 V before T1 is again turned off.

FIGURE 14.2.10 Diode-pump circuit.


The charge fed to the load thus amounts to CEcc coulombs. If this process is repeated periodically, the average current in the load is given by

I = fCE_{cc}        (15)

where f represents the switching repetition rate. The diode-pump circuit thus provides a pulsed current whose average value is proportional to the frequency of the square-wave generator controlling the switching transistor T1. The proportionality would of course be lost if the load exhibited storage, e.g., if the load were a parallel combination of a resistor and a capacitor intended to deliver the average current. Using an operational amplifier, as shown at the right side of Fig. 14.2.10, circumvents the problem.

The requirements on the switching transistor in this application are different from, and in many respects more stringent than, those for logic gates. The transistor in a logic circuit provides a way of defining two well-distinguished states, logic 1 and 0. Nothing further is required, whether or not these states approach an actual short circuit or an open circuit. In the diode-pump circuit, however, the actual switching characteristics are important, since the residual voltage drop across the saturated transistor of Fig. 14.2.10 influences the charge transfer from C to the load, thereby also introducing unwanted temperature sensitivity. The main difference lies in the fact that while T1 is operated as a logical element, the purpose of the circuit actually is to deliver an analog signal. There are many other examples where the characteristics of switching transistors influence the accuracy of given circuits or instruments.

An even more critical problem pertains to amplitude gating, since this class of applications requires switches that correctly transfer analog signals without introducing too much distortion. Furthermore, positive and negative signals must be transmitted equally well, and noise introduced by the gating signals must be minimized.

Analog gating. A typical high-frequency gating network for analog signals is shown in Fig. 14.2.11. Gating is performed by means of the diode bridge in the center; all remaining circuitry controls the on-off switching. In order to transmit the analog signal, the current sources Qbar and Q must be on and off, respectively. The current 2I from transistor T1 is split into two equal components, one that flows through T3 and another that flows vertically through T4 and the bridge. The second forward-biases all the diodes. Current, moreover, is injected horizontally from the signal source, at the left, to the output terminal, at the right. These input and output currents representing the analog signal are equal, since the sum of all currents injected into the bridge must necessarily be zero and the vertical current components through the bridge are balanced by the network. The voltage drops across the diodes are supposed to compensate one another.

When the path between source and load must be interrupted, the current sources Q and Qbar take the opposite states. No current then flows through the bridge, and the extra currents supplied by T2 and T3 are diverted through T6 and T5, respectively. The two vertical nodes of the bridge are now connected to low-impedance nodes, so that the equivalent high-frequency network between the input and output terminals actually consists of two branches, each with two small parasitic capacitances representing series reverse-biased diodes short-circuited in their middle to ground. This ensures excellent separation between the input and output terminals, making this type of gating network well suited to the sampling of high-frequency signals, like those used in sampling oscilloscopes. Field-effect transistors are also extensively used to perform analog gating.
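Equation (15) is easily put into numbers; the component values in the short sketch below are illustrative assumptions.

```python
# Eq. (15) in numbers: the diode pump delivers an average current proportional
# to the switching frequency.
C, Ecc = 1e-9, 5.0          # transfer capacitor and supply voltage (assumed)

for f in (1e3, 1e4, 1e5):   # switching repetition rate, Hz
    print(f"f = {f:8.0f} Hz  ->  I = f*C*Ecc = {f*C*Ecc*1e6:8.2f} uA")
# 5 nC is transferred per cycle, so the frequency-to-current scale factor
# here is 5 uA per kHz.
```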
A typical application is switched-capacitor filters. Figure 14.2.12a illustrates an elementary switched-capacitor network. In this circuit, the capacitor C is connected alternately between the two terminals, so that a charge C(V1 − V2) is transferred at each cycle. Hence, if the repetition rate is f, the switched-capacitor network allows an average direct current C(V1 − V2)f to flow from one terminal to the other. It is thus equivalent to a resistor whose value is 1/Cf. If another capacitor Co is connected at the output port, an elementary sampled-data RC circuit is built, with a time constant equal to Co/Cf. This time constant depends only on the clock frequency f and on the ratio of two capacitors. Hence, relatively large time constants can be achieved with good accuracy using very small capacitors and MOS transistor switches. In practice, MOS capacitors of a few picofarads match to better than 0.1 percent. In addition, a slight modification of the circuit (see Fig. 14.2.12b) avoids the stray capacitance that would otherwise adversely affect the accuracy. Hence, fully integrated switched-capacitor RC active filters can be designed to tight specifications, e.g., for telephone applications.
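The switched-capacitor equivalence is summarized by the sketch below; the capacitor and clock values are illustrative assumptions typical of MOS filter practice.

```python
# A capacitor C toggled at rate f moves charge C*(V1 - V2) per cycle and thus
# mimics a resistor of value 1/(C*f).
C = 1e-12        # 1 pF switched capacitor (assumed)
Co = 10e-12      # 10 pF integrating capacitor (assumed)
f = 100e3        # clock frequency, Hz

R_eq = 1.0 / (C * f)
tau = Co / (C * f)          # = R_eq * Co, set by a capacitor RATIO and f
print(f"equivalent resistance 1/(C*f) = {R_eq/1e6:.1f} Mohm")
print(f"time constant Co/(C*f) = {tau*1e3:.2f} ms from only 11 pF on chip")
```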


FIGURE 14.2.11 High-frequency gating network.

FIGURE 14.2.12 (a) Switched capacitor resistor; (b) the circuit is not affected by stray capacitances.


CHAPTER 14.3

ACTIVE WAVEFORM SHAPING

Paul G. A. Jespers

ACTIVE CIRCUITS

Linear active networks commonly used for waveshaping take advantage of negative or positive feedback to improve performance. Among the linear negative-feedback active waveshaping circuits, the operational amplifier-integrator is widely used.

RC OPERATIONAL AMPLIFIER-INTEGRATOR

In Fig. 14.3.1 it is assumed that the operational amplifier has infinite input impedance, zero output impedance, and a high negative gain A. The overall transfer function is

\frac{A}{1 + p(1 - A)T} \qquad \text{where } T = RC        (1)

This function represents a first-order system with gain A and a cutoff frequency approximately |A| times lower than the inverse of the time constant T of the RC circuit. In Fig. 14.3.1b the frequency response of the active circuit is compared with that of the passive RC integrator. The widening of the spectrum useful for integration is clearly visible. For instance, an integrator using an operational amplifier with a gain of 10⁴ and an RC network having a 0.1-s time constant has a cutoff frequency as low as about 1.6 × 10⁻⁴ Hz. In the time domain, the Taylor expansion of the amplifier-integrator response to the step function is

V(t) = E\,\frac{t}{T}\left[1 - \frac{t}{2!\,|A|T} + \frac{t^2}{3!\,(|A|T)^2} - \cdots\right]        (2)

This shows that almost any desired degree of linearity of V(t) can be achieved by providing sufficient gain.
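The point is easy to verify numerically. The sketch below uses the exact first-order response implied by Eq. (1) (magnitudes only, ignoring the inversion and any output clipping) with the gain and time constant quoted above.

```python
# Checking the active integrator against the ideal ramp E*t/T for |A| = 1e4
# and T = 0.1 s; a mathematical sketch, not a full amplifier model.
import math

E, A, T = 1.0, 1e4, 0.1

def v_exact(t):
    # first-order response of Eq. (1): dc gain |A|, pole at 1/((1+|A|)T)
    return E * A * (1 - math.exp(-t / ((1 + A) * T)))

for t in (0.1, 1.0, 10.0):
    ramp = E * t / T
    err = 1 - v_exact(t) / ramp
    print(f"t = {t:5.1f} s  ideal ramp {ramp:8.1f} V  deviation {err*100:.3f} %")
# The deviation grows roughly as t/(2|A|T), as Eq. (2) predicts: raising the
# gain widens the time span over which the circuit integrates linearly.
```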

SWEEP GENERATORS⁴

Sweep generators (also called time-base circuits) produce linear voltage or current ramps versus time. They are widely used in applications such as oscilloscopes, digital voltmeters, and television. In almost all circuits the linearity of the ramp results from charging or discharging a capacitor through a constant-current source. The difference between circuits used in practice rests in the manner of realizing the constant-current source. Sweep generators may also be looked upon as integrators with a constant-amplitude input signal. The latter point of view shows that RC operational amplifier-integrators provide the basic structure for sweep generation.


FIGURE 14.3.1 (a) Operational amplifier-integrator; (b) gain vs. angular frequency.

Circuits delivering a linear voltage sweep fall into two categories, the Miller time base and the bootstrap time base. A simple Miller circuit (Fig. 14.3.2) comprises a capacitor C in a feedback loop around the amplifier formed by T1. Transistor T2 acts as a switch. When it is on, all the current flowing through the base resistor RB is driven to ground, keeping T1 blocked, since the voltage drop across T2 is lower than the normal base-to-emitter voltage of T1. The output signal VCE of T1 is thereby clamped at the level of the power-supply voltage Ecc, and the voltage drop across the capacitor C is approximately the same. When T2 is turned off, it drives T1 into the active region and

FIGURE 14.3.2 Miller sweep generator: (a) circuit; (b) input and output vs. time.


FIGURE 14.3.3 Bootstrap sweep generator: (a) circuit; (b) input and output vs. time.

causes collector current to flow through RL. The resulting voltage drop across RL is coupled capacitively to the base of T1, tending to minimize the base current; i.e., the negative-feedback loop is closed. The collector-to-emitter voltage VCE of T1 subsequently undergoes a linear voltage sweep downward, as illustrated in Fig. 14.3.2b. The circuit behaves in the same manner as the RC operational amplifier-integrator above. Almost all the current flowing through RB is diverted through the feedback capacitor, and only a very small part is used for controlling the base of T1. The feedback loop opens when T1 enters saturation and the voltage gain of the amplifier becomes small. When T2 is subsequently turned on again, blocking T1 and recharging C through RL and the saturated switch, the output voltage VCE rises again along an exponential with time constant RLC.

Figure 14.3.3 shows a typical bootstrap time-base circuit. It differs from the Miller circuit in that the capacitor C is not part of the feedback loop. Instead the amplifier is replaced by an emitter follower delivering an output signal Vout which reproduces the voltage drop across the capacitor. C is charged through resistor RB from a floating voltage source formed by the capacitor C0 (C0 is large compared with C). First, consider the switch T2 to be on. Current then flows through the series combination formed by the diode D, the resistor RB, and the saturated transistor T2. The emitter follower T1 is blocked since T2 is saturated. Moreover, the capacitor C0 can charge through the path formed by the diode D and the emitter resistor RE, and the voltage drop across its terminals becomes equal to ECC. When T2 is cut off, the current through RB flows into the capacitor C, causing the voltage drop across its terminals to rise gradually, driving T1 into the active region. Because T1 is a unity-gain amplifier, Vout is a replica of the voltage drop across C. Since C0 acts as a floating dc voltage source, diode D is immediately reverse-biased. The current flowing through RB is supplied exclusively by C0. Since C0 >> C, the voltage across RB remains practically constant and equal to the voltage drop across C0 minus the base-to-emitter voltage of T1. Considering that the base current of T1 represents only a small fraction of the total current flowing through RB, it is evident that the charging of capacitor C occurs under constant current and that therefore a linear voltage ramp is obtained as long as the output voltage of T1 is not clamped at the level of the power-supply voltage ECC. The corresponding output waveforms are shown in Fig. 14.3.3b. After T2 is switched on again, C discharges rapidly, causing Vout to drop, while the diode D is again forward-biased and the small charge lost by C0 is restored. In practice, C0 should be at least 100 times larger than C to ensure a quasi-constant voltage source.

More detailed analysis of the Miller and bootstrap sweep generators reveals that they are in fact equivalent. We redraw the Miller circuit as shown at the left of Fig. 14.3.4. Remembering that the operation of the sweep generator is independent of which output terminal is grounded, we ground the collector of T1 and redraw the corresponding circuit. As shown at the right in the figure, this is a bootstrap circuit, so the two circuits are equivalent.

Any sweep generator can be regarded as a simple loop (Fig. 14.3.5) comprising a capacitor C delivering a voltage ramp, a loading resistor RB, and the series combination of two sources: a constant voltage source Ecc and a variable source whose emf E reproduces the voltage drop V across the capacitor. The voltage drop across


FIGURE 14.3.4 Equivalency of the Miller and bootstrap sweep generators.

RB consequently remains constant and equal to Ecc, making the loop current constant as well. The voltage ramp consequently is given by

E = V = \frac{E_{cc}}{R_B C}\,t        (3)

Grounding terminal 1 yields the Miller network, while grounding terminal 2 leads to the bootstrap circuit.

FIGURE 14.3.5 Basic loop of sweep-generator circuits.

Since linearity is one of the essential features of sweep generators, we consider the equivalent networks represented in Fig. 14.3.6. Starting with the Miller circuit, we determine the impedance in parallel with C:

|A|(R_B || h_{11})        (4)

where |A| is the absolute value of the voltage gain of the amplifier, |A| = (h21/h11)RL. Next, considering the bootstrap circuit, we calculate the input impedance of the unity-gain amplifier to determine the loading impedance acting on C. This impedance is

\frac{R_L h_{21} R_B}{R_B + h_{11}}        (5)

which turns out to be the same as that given in Eq. (4); i.e., the two circuits are equivalent. To determine the degree of linearity it is sufficient to consider the common equivalent circuit of Fig. 14.3.7 and to calculate the Taylor expansion of the voltage V:

V = \frac{E_{CC}}{R_B C}\,t\left[1 - \frac{t}{2!\,|A|(R_B || h_{11})C} + \frac{t^2}{3!\,[|A|(R_B || h_{11})C]^2} - \cdots\right]        (6)

FIGURE 14.3.6 Equivalent forms of (a) Miller and (b) bootstrap sweep circuits.


FIGURE 14.3.7 Common equivalent circuit of sweep generators.


FIGURE 14.3.8 Typical sample-and-hold circuit.

The higher the voltage gain |A|, the better the linearity. Thus, an integrated operational amplifier in place of T1 leads to excellent performance in both the Miller and the bootstrap circuit. Voltage gains as high as 10,000 are easily obtained for this purpose.
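The leading terms of Eq. (6) give the relative droop of the sweep below a perfect ramp; the Python sketch below evaluates them for several gains, with illustrative component values.

```python
# Sketch of Eq. (6): end-of-sweep nonlinearity vs. loop gain. Values assumed:
# RB = 10 kOhm, h11 = 1 kOhm, C = 100 nF, 1 ms sweep.
RB, h11, C = 10e3, 1e3, 100e-9
Rpar = RB * h11 / (RB + h11)            # RB || h11

def droop(t, A):
    """Relative deviation from the ideal ramp, from the series of Eq. (6)."""
    tau = A * Rpar * C                  # |A|(RB || h11)C
    return t / (2 * tau) - t**2 / (6 * tau**2)

t_sweep = 1e-3
for A in (100, 1000, 10000):
    print(f"|A| = {A:6d}: end-of-sweep nonlinearity ~ {droop(t_sweep, A)*100:.3f} %")
# With |A| = 10,000 the ramp departs from a straight line by well under 0.1%.
```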

SAMPLE-AND-HOLD CIRCUITS⁴

Sample-and-hold circuits are widely used to store analog voltages accurately over times ranging from microseconds to minutes. They are basically switched-capacitor networks, but since the analog voltage across the storage capacitor must be sensed under low impedance in the hold mode, a buffer amplifier is needed. Op-amps with FET inputs are commonly used for this purpose to minimize the hold-mode droop. The schematic of a widely used integrated circuit is shown in Fig. 14.3.8. Storage and readout are achieved by the FET-input op-amp in the hold mode. During the acquisition time, transistor S2 is conducting, while S1 is blocked. Current is supplied by the voltage-dependent current source to minimize the voltage difference between the input and output terminals. As soon as S1 and S2 change states, Vout ceases to follow Vin and remains unchanged.

The main requirements for sample-and-hold circuits are low hold-mode droop, short settling time in the acquisition mode, low offset voltage, and small hold-mode feedthrough. The hold-mode droop depends on the leakage current at the op-amp inverting node. Short settling times require high-slew-rate op-amps and large current-handling capabilities for both the current source and the op-amp. The offset voltage is determined by the differential amplifier that controls the current source. Finally, feedthrough results from imperfect isolation between the current source and the op-amp. For this reason, a double switch is preferred to a single series switch.

Another important feedthrough problem is related to the unavoidable gate-to-source or gate-to-drain overlap capacitance of the MOS switch S2. When the gate-control signal is switched off, some small charge is always transferred capacitively to the storage capacitor, and a small voltage step is superimposed on the output terminal when the circuit enters the hold state. This effect can be minimized by increasing the ratio of the storage capacitance to the switch-overlap capacitance. Since the latter cannot be made equal to zero, the storage capacitance must be chosen sufficiently large, but this inevitably lengthens the settling time. One means of alleviating the problem is to compensate the switching charge due to the control signal by injecting an equal and opposite charge into the inverting input node of the op-amp. This can be achieved by means of a dummy transistor controlled by the inverted signal.
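The hold-mode droop mentioned above is simply the leakage current divided by the storage capacitance; the sketch below puts assumed but plausible numbers on the trade-off.

```python
# Back-of-envelope hold-mode droop: droop rate = I_leak / C_hold. All values
# are illustrative assumptions.
for I_leak in (1e-12, 10e-12, 100e-12):      # inverting-node leakage, A
    for C_hold in (100e-12, 1e-9):           # storage capacitor, F
        droop = I_leak / C_hold              # V/s
        print(f"I_leak = {I_leak*1e12:5.0f} pA, C = {C_hold*1e12:5.0f} pF "
              f"-> droop = {droop*1e3:8.3f} mV/s")
# A larger capacitor reduces both droop and switch feedthrough, but it
# lengthens the settling time, as noted in the text.
```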

NONLINEAR NEGATIVE-FEEDBACK WAVEFORM SHAPING

The use of nonlinear devices within negative-feedback loops accentuates the nonlinear character of waveshaping networks. In many circumstances this leads to an idealization of the nonlinear characteristics of the devices considered. A good example is the ideal rectifier circuit. The negative-feedback loop in the circuit shown in Fig. 14.3.9 is formed by two parallel branches, each comprising a diode connected in such a manner that if V1 is positive, the current injected by resistor R1 flows


through D1, and if V1 is negative, through D2. A resistor R2 is placed in series with D1, and the output voltage V2 is taken at the node between R2 and D1. Hence, V2 is given by −(R2/R1)V1 when V1 is positive, independently of the forward voltage drop across D1. When D1 is forward-biased, the voltage V at the output of the op-amp adjusts itself to force the current flowing through D1 and R2 to be exactly the same as that through R1. This means that V may be much larger than V2, especially when V2 (and thus also V1) is of the order of millivolts. In fact, V exhibits approximately the same shape as V2 plus an additional pedestal of approximately 0.6 to 0.7 V. Typical waveforms obtained with a sinusoidal voltage of a few tens of millivolts are shown in Fig. 14.3.10.

FIGURE 14.3.9 The precision rectifier using negative feedback is almost an ideal rectifier.

The quasi-ideal rectification characteristic of this circuit is readily understood by considering the Norton equivalent network seen from R2 and D1 in series. It consists of a current source delivering the current V1/R1 in parallel with an almost infinite resistance |A|R, where A represents the voltage gain of the op-amp. Hence, the current flowing through the branch formed by R2 and D1 is delivered by a quasi-ideal current source, and the voltage drop across R2 is unaffected by the series diode D1. As for D2, it is required to prevent the feedback loop from opening when V1 is negative. If this happened, the artificial ground at the input of the op-amp would be lost and V2 would not be zero.

Other negative-feedback configurations leading to very high output impedances are equally effective in achieving ideal rectification characteristics. For instance, the unity-gain amplifier used in instrumentation has wide linear ac measurement capabilities (Fig. 14.3.11).
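The ideal-rectifier transfer characteristic described above reduces to a simple piecewise model, sketched below for a millivolt-level sine wave; the resistor values are illustrative assumptions.

```python
# Piecewise model of the precision rectifier of Fig. 14.3.9: the output follows
# -(R2/R1)*V1 for positive inputs and is zero otherwise, even for inputs far
# below one diode drop, because the op-amp absorbs the 0.6-0.7 V pedestal.
import math

R1 = R2 = 10e3   # assumed equal resistors: unity-magnitude rectification

def v2(v1):
    return -(R2 / R1) * v1 if v1 > 0 else 0.0

for deg in range(0, 361, 45):
    v1 = 0.05 * math.sin(math.radians(deg))   # 50 mV sine, << one diode drop
    print(f"V1 = {v1*1000:7.2f} mV  ->  V2 = {v2(v1)*1000:7.2f} mV")
```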

POSITIVE FEEDBACK WAVEFORM SHAPING Positive feedback is used extensively in bistable, monostable, and astable (free-running) circuits. Astable networks include free-running relaxation circuits whether self-excited or synchronized by external trigger pulses. Monostable and bistable circuits also exist, with one and two distinct stable states, respectively. The degree to which positive feedback is used in harmonic oscillators differs substantially from that of astable, monostable, or bistable circuits. In an oscillator the total loop gain must be kept close to 1. It needs to compensate only for small losses in the resonating tank circuit. In pulsed circuits, positive feedback permits

FIGURE 14.3.10 Waveforms of circuit in Fig. 14.3.9.

FIGURE 14.3.11 Feedback rectification circuit used in precision measurements.


fast switching from one state to another, e.g., from cutoff to saturation and vice versa. Before and after these transitions occur, the circuit is passive. Switching occurs in extremely short times, typically a few nanoseconds. After switching, the circuit evolves more slowly, approaching steady-state conditions. It is common practice to call the switching time the regeneration time and the time needed to reach final steady-state conditions the resolution time. The resolution time may range from tens of nanoseconds to several seconds or more, depending on the circuit. An important feature of triggered regenerative circuits is that their switching times are essentially independent of the steepness of the trigger-signal waveshape. Once instability is reached, the transition occurs at a rate fixed by the total loop gain and the reactive parasitics of the circuit itself, independent of the rate of change of the trigger signal. Regenerative circuits, therefore, provide a means of restoring short rise times. Positive-feedback pulse circuits are necessarily nonlinear. The most conventional way to study their behavior is to take advantage of piecewise-linear analysis techniques.

Bistable Circuits5 (Collector Coupled)

Two cascaded common-emitter transistor stages implement an amplifier with a high positive gain. Connecting the output to the input (Fig. 14.3.12) produces an unstable network known as the Eccles-Jordan bistable circuit or flip-flop. Under steady-state conditions one transistor is saturated and the other is blocked. Suppose the circuit of Fig. 14.3.12 has the values RL = 1 kΩ, R = 2.2 kΩ, and Ecc = 5 V. Suppose T1 is at cutoff, and consider the equivalent network connected to the base of T2. It can be viewed as an emf of 5 V in series with a resistance of 3.2 kΩ. The base current of T2 is given by

IB2 = (5 – 0.7)/3.2 = 1.34 mA        (7)

T2 being saturated, the collector current is equal to Ecc/RL, or 5 mA. A current gain of 4 would thus be sufficient to ensure saturation of T2. Hence the collector-to-emitter voltage across T2 will be very small (VCE,sat), and consequently T1 will be blocked, as stated initially. The reverse situation, with T1 saturated and T2 cut off, is governed by identical considerations for reasons of symmetry. Two distinct stable states thus are possible. When one of the transistors is suddenly switched from one state to the opposite one, the other transistor automatically undergoes the opposite transition. At a given time both transistors conduct simultaneously, which increases the loop gain from zero to a high positive value. This corresponds to the regenerative phase, during which the circuit becomes active. It is difficult to compute the regeneration time, since the operating points of both transistors move through the entire active region, causing large variations of the small-signal parameters. Although determination of the regeneration time on the basis of a linear model is unrealistic and leads only to a rough approximation, we briefly examine this problem since it illustrates how the regeneration phase of unstable networks may be analyzed.

FIGURE 14.3.12 The Eccles-Jordan bistable circuit (flip-flop): (a) in the form of two cascaded amplifiers with output connected to input; (b) as customarily drawn, showing symmetry of connections.


FIGURE 14.3.13 Flip-flop circuit showing capacitances that determine time constants.

First, we introduce two capacitors in parallel with the two resistors R. These capacitors provide a direct connection from collector to base under transient conditions and hence increase the high-frequency loop gain. The circuit can now be described by the network of Fig. 14.3.13, which consists of a parallel combination of two reversed transistors without extrinsic base resistances (for calculation convenience) and with two load admittances Y which combine the load and the resistive input of each transistor. Starting from the admittance matrix of one of the transistors with its load, we equate the determinant of the parallel combination

| p(Cp + CTC)      –pCTC     |
| I/VT – pCTC      pCTC + Y  |        (8)

to zero to find the natural frequencies of the circuit. This leads to

pCp + I/VT + Y = 0        (9)

and

p(Cp + 4CTC) + Y – I/VT = 0        (10)

where Cp stands for the parallel combination of CTE and the diffusion capacitance tF·I/VT. Only Eq. (10) has a real positive root, producing an increasing exponential function with time constant approximately equal to

t = (Cp + 4CTC)VT/I        (11)

Since the diffusion capacitance overrules the transition capacitances at high current, Eq. (11) reduces finally to tF. This yields extremely short switching times. For instance, a transistor with a maximum transition frequency fT of 300 MHz and a tF equal to 0.53 ns exhibits a regeneration time (defined as the time elapsing between 10 and 90 percent of the total voltage excursion from cutoff to saturation or vice versa) equal to 2.2tF, or 1.2 ns. A more accurate but much more elaborate analysis, taking into account the influence of the extrinsic base resistance and the nonlinear transition capacitances in the region of small collector current, requires a computer simulation based on the dynamic large-signal model of the bipolar transistor. Nevertheless, Eq. (11) clearly pinpoints the factors controlling the regeneration time: the transconductance and the unavoidable parasitic capacitances. This is verified in many other positive-feedback switching circuits.

We consider next the factors that control the resolution time, still with the same numerical data. We suppose T1 initially nonconducting and T2 saturated. The sudden turnoff of T2 is simulated by opening the short-circuit switch S2 in Fig. 14.3.14a. Immediately, VCE2 starts increasing toward Ecc. The base voltage VBE1 of T1 consequently rises with a time constant fixed only by the total parasitic capacitance CT at the collector of T2 and base of T1 times the resistor RL. Hence this time constant is

t1 = RLCT        (12)

This time constant is normally extremely short; e.g., a parasitic capacitance of 1 pF yields a time constant of 1 ns. The charge accumulated across C evidently cannot change, for C is much larger than CT. So VBE1 and VCE2 increase at the same rate. This situation is illustrated in Fig. 14.3.14b. When VBE1 reaches approximately 0.7 V, T1 starts conducting and a new situation arises, illustrated in Fig. 14.3.14 by the passage from (b) to (c). This is when regeneration actually takes place, forcing T1 to go into saturation very rapidly. With the regeneration period neglected, case (c) is characterized by the time constant

t2 = (RL || R)C        (13)

For instance, if C is equal to 10 pF, t2 yields 7 ns. Although this time constant is much longer than t1, it is still not the longest, for we have not yet considered the evolution of VBE2.


FIGURE 14.3.14 Piecewise analysis of flip-flop switching behavior. Transistor T2 is assumed to be on before (a). The opening of S2 simulates cutoff. The new steady-state conditions are reached in (c).

The evolution of VBE2 is considered in Fig. 14.3.15, where the saturated transistor T1 is replaced by the closing of S1. The problem is the same as for the compensated attenuator. Since overcompensation is achieved, VBE2 undergoes a large negative voltage swing almost equal to Ecc before climbing toward its steady-state value of 0 V. The time constant of this third phase is given by

t3 = RC        (14)

In the present case, t3 equals 22 ns. The voltage variations versus time of the flip-flop circuit thus far analyzed are reviewed in Fig. 14.3.16 with the assumption that the regeneration time is negligible. Clearly C plays a double role. The first is favorable, since it ensures fast regeneration and efficiently removes excess charges from the base of the saturated transistor; the second is unfavorable, since it increases the resolution time and sets an upper limit to the maximum repetition rate at which the flip-flop can be switched. The proper choice of C as well as of RL and R must take this fact into consideration. Small values of the resistances make high repetition rates possible at the price of increased dc power consumption.

FIGURE 14.3.15 The longest time constant is experienced when T1 is turned on. This is simulated by the closure of switch S1.
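With the numerical values used throughout this example (RL = 1 kΩ, R = 2.2 kΩ, CT = 1 pF, C = 10 pF), Eqs. (12) to (14) can be checked directly. A minimal sketch:

    # The three resolution-time constants of the flip-flop example.
    R_L = 1e3      # load resistor RL, ohms
    R = 2.2e3      # base resistor R, ohms
    C_T = 1e-12    # total parasitic capacitance CT, F
    C = 10e-12     # speed-up capacitor C, F

    def par(a, b):
        return a * b / (a + b)         # parallel combination

    t1 = R_L * C_T         # Eq. (12): collector of T2 rises      -> 1 ns
    t2 = par(R_L, R) * C   # Eq. (13): after T1 starts conducting -> ~7 ns
    t3 = R * C             # Eq. (14): recovery of VBE2           -> 22 ns
    print(f"t1 = {t1 * 1e9:.1f} ns, t2 = {t2 * 1e9:.1f} ns, "
          f"t3 = {t3 * 1e9:.1f} ns")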

INTEGRATED-CIRCUIT FLIP-FLOPS

The Eccles-Jordan circuit (Fig. 14.3.12) is the basic structure of integrated bistable circuits. The capacitor C is not present. The integrated flip-flop can be viewed as two cross-coupled single-input NOR or NAND circuits. In fact, integrated flip-flops vary only in the way external signals act upon them for control purposes. A typical example is given in Fig. 14.3.17 with the corresponding logic-symbol representation. The triggering inputs are called the set S and reset R terminals. Transistors T3 and T4 are used for triggering. The truth tables for NOR and NAND bistable circuits are

              NOR bistable               NAND bistable
R    S     Q1    Q2    Line           Q1    Q2    Line
0    0     Q     Q̄      1              1     1      5
0    1     1     0      2              1     0      6
1    0     0     1      3              0     1      7
1    1     0     0      4              Q     Q̄      8


FIGURE 14.3.16 Voltage variations vs. time of flip-flop circuit.

Lines 1 and 8 correspond to situations where the S and R inputs are both inactive, leaving the bistable circuit in one of its two possible states, indicated in the tables above by the letters Q and Q̄ (Q may be either 1 or 0). If a specified output state is required, a pair of adequate complementary dc trigger signals is applied to the S and R inputs simultaneously. For instance, if the output pair is to be characterized by Q1 = 1 and Q2 = 0, the necessary input combination, for NOR and NAND bistable circuits alike, is S = 1 and R = 0. Changing S back from 1 to 0 does not change anything in the output state of the NOR bistable. The same is true if R is made equal to 1 in the NAND bistable. In both cases, the flip-flop exhibits infinite memory of the stored state. The name sequential circuit is given to this class of networks, as opposed to the previous circuits, which are called combinational circuits. Lines 4 and 5 must be avoided, for the passage from line 4 to line 1 or from line 5 to line 8 leads to uncertainty regarding the final state of the bistable circuit. In fact, the final transition is entirely out of the control of the input, since in both cases it results solely from small imbalances between transistor parasitics that allow faster switching of one inverter or the other.
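The behavior summarized in the NOR truth table follows from iterating the two gate equations Q1 = NOR(R, Q2) and Q2 = NOR(S, Q1). A minimal behavioral sketch (Python) reproduces lines 1 to 3; the forbidden line 4 is exactly the case in which the final state after releasing both inputs depends on which gate switches first, which no such deterministic model can decide.

    # Cross-coupled NOR model of the set-reset flip-flop.
    def nor(a, b):
        return 0 if (a or b) else 1

    def settle(r, s, q1, q2):
        """Iterate both gate equations until the latch settles."""
        for _ in range(10):
            q1, q2 = nor(r, q2), nor(s, q1)
        return q1, q2

    q1, q2 = settle(0, 1, 0, 1)     # set   (line 2): Q1 = 1, Q2 = 0
    q1, q2 = settle(0, 0, q1, q2)   # hold  (line 1): state is kept
    print("after set and hold:", q1, q2)
    q1, q2 = settle(1, 0, q1, q2)   # reset (line 3): Q1 = 0, Q2 = 1
    print("after reset:", q1, q2)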

SYNCHRONOUS BISTABLE CIRCUITS 4 Sequential networks may be either synchronous or asynchronous. The asynchronous class describes circuits in which the application of an input control signal triggers the bistable circuit immediately. This is true of the circuits thus far considered. In the synchronous class, changes of state occur only at selected times, after a clock signal has occurred. Synchronous circuits are less sensitive to hazard conditions. Asynchronous circuits may be severely troubled by this effect, which results from differential propagation delays. These delays, although individually very small (typically of the order of a few nanoseconds), are responsible for introducing skew between signals that travel through different logic layers. Unwanted signal combinations may therefore appear for short periods and be interpreted erroneously.

FIGURE 14.3.17 DC-coupled version of flip-flop, customarily used in integrated-circuit versions of this circuit.


Synchronous circuits do not suffer from this limitation because they conform to the control signals only when the clock pulse is present, usually after the transient spurious combinations are over. A simple synchronous circuit is shown in Fig. 14.3.18. The inhibiting action in the absence of the clock signal is provided by a pair of input AND circuits. Otherwise nothing is changed with respect to the bistable network.

A difficulty occurs in cascading bistable circuits, e.g., to achieve time division. Instead of each circuit controlling its closest neighbor when a clock signal is applied, the set and reset signals of the first bistable jump from one circuit to the next, traveling throughout the entire chain in a time which may be shorter than the duration of the clock transition. To prevent this, a time delay must be introduced between the gating NAND circuits and the actual bistable network, so that changes of state can occur only after the clock signal has disappeared. One approach is to take advantage of storage effects in bipolar transistors, but the so-called master-slave association, shown in Fig. 14.3.19, is preferred. In this circuit, intermediate storage is realized by an auxiliary clocked bistable network controlled by the complement of the clock signal. The additional circuit complexity is appreciable, but the approach is practical in integrated-circuit technology.

FIGURE 14.3.18 Synchronous flip-flop.

The master-slave bistable truth table can be found from that of the synchronous circuit in Fig. 14.3.18, which in turn can be deduced from the truth table given in the previous section. One problem remains, however: the forbidden 1,1 input pair, which is responsible for ambiguous states each time the clock goes to zero. To solve this problem, the JK bistable was introduced (see Fig. 14.3.20). The main difference is the introduction of a double feedback loop. Hence the S and R inputs become, respectively, JQ̄ and KQ. As long as the J and K inputs are not simultaneously equal to 1, nothing in fact is changed with respect to the behavior of the SR synchronous circuit. When J and K are both high, the cross-coupled output signals fed back to the input gates cause the flip-flop to toggle under control of the clock signal. The truth table then becomes

J    K    Qn+1    Line
0    0    Qn       1
1    0    1        2
0    1    0        3
1    1    Q̄n       4

Qn+1 stands for Q at clock time n + 1 and Qn for Q at clock time n. Lines 1 to 3 match the corresponding lines of the NOR bistable truth table. Line 4 indicates that state transitions occur each time the clock signal goes from high to low. The corresponding logic equation of the JK flip-flop, therefore, is

Qn+1 = JQ̄n + K̄Qn        (15)
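Equation (15) can be exercised directly. The sketch below tabulates the next state for the four J, K combinations and shows the toggling (divide-by-2) behavior obtained with J = K = 1.

    # Next-state function of the JK flip-flop, Eq. (15).
    def jk_next(j, k, q):
        return (j & (1 - q)) | ((1 - k) & q)

    for j in (0, 1):
        for k in (0, 1):
            print(f"J={j} K={k}: Qn=0 -> {jk_next(j, k, 0)}, "
                  f"Qn=1 -> {jk_next(j, k, 1)}")

    q, seq = 0, []
    for _ in range(6):               # J = K = 1: the flip-flop toggles
        q = jk_next(1, 1, q)
        seq.append(q)
    print("toggle sequence:", seq)   # [1, 0, 1, 0, 1, 0]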

FIGURE 14.3.19 Master-slave synchronous flip-flop.


FIGURE 14.3.20 JK flip-flop.

When only one control signal is used, J, for instance, and K is obtained by negation of the J signal, a new type of bistable is found, which is called the D flip-flop. The name given to the J input is D. Since K is equal to J̄, Eq. (15) reduces to

Qn+1 = D        (16)

Hence, in this circuit the output is set by the input D after a clock cycle has elapsed. Notice that the flip-flop is insensitive to changes of D occurring while the clock is high. D flip-flops without the master-slave configuration also exist, but their output state follows the D signal if changes occur while the clock is high. These bistables can be used to latch data. Several D flip-flops controlled by the same clock form a register for data storage. The clock signal then is called an enable signal.

Emitter-Coupled Bistable Circuits (Schmitt Circuits)6

In the basic Schmitt circuit represented in Fig. 14.3.21, bistable operation is obtained by a positive-feedback loop formed by the common-base and common-collector transistor pair (respectively T1 and T2). The Schmitt circuit can be considered as a differential amplifier with a positive-feedback loop, which is a series-parallel association. Emitter-coupled bistables are fundamentally different from Eccles-Jordan circuits, since no transistor saturates in either of the two stable states. Storage effects therefore need not be considered. The two permanent states are examined in Fig. 14.3.22. In each state, (a) as well as (b), one transistor operates in the common-collector configuration while the other is blocked. In Fig. 14.3.22a, the collector voltage VC1 of T1 and the base voltage VB2 of T2 are given by

VC1 = Ecc(R1 + R2)/(R1 + R2 + Rc)
VB2 = EccR1/(R1 + R2 + Rc) = Vh        (17)

FIGURE 14.3.21 Emitter-coupled Schmitt circuit, showing positive-feedback loop.


FIGURE 14.3.22 Execution of transfer in Schmitt circuit: (a) with T1 blocked; (b) with T1 conducting.

When the other stable state (b) is considered,

VC1 = (Ecc – RcI)(R1 + R2)/(R1 + R2 + Rc)
VB2 = (Ecc – RcI)R1/(R1 + R2 + Rc) = Vl        (18)

The situation depicted in Fig. 14.3.22 remains unchanged as long as the input voltage VB1 applied to T1 is kept below the actual value Vh. In the other state (b), T2 will stay off as long as VB1 is larger than Vl. A range of input voltages between Vh and Vl thus exists where either of the two states is possible. To resolve the ambiguity, let us consider an input voltage below the smaller of the two possible values of VB2, so that transistor T1 necessarily is blocked. This corresponds to the situation of Fig. 14.3.22a. Now let the input voltage be gradually increased. Nothing will happen until VB1 approaches Vh. When the difference between the two base voltages is reduced to 100 mV or less, T1 will start conducting and the voltage drop across Rc will lower VB2. The emitter current of T2 will consequently be reduced, and more current will be fed back to T1. Hence, an unstable situation is created, which ends when T1 takes over all the current delivered by the current source and T2 is blocked. Now the situation depicted in Fig. 14.3.22b is reached. The base voltage of T2 becomes Vl, and the input voltage may either continue to increase or decrease without anything else happening, as long as VB1 has not come back down to Vl. When VB1 approaches Vl, another unstable situation is created, causing the switching from (b) to (a). Hence, the input-output characteristic of the Schmitt trigger is as shown in Fig. 14.3.23, with a hysteresis loop.

Schmitt triggers are suitable for detecting the moment when an analog signal crosses a given dc level. They are widely used in oscilloscopes to achieve time-base synchronization. This is illustrated in Fig. 14.3.24, which shows a periodic signal triggering a Schmitt circuit and the corresponding output waves. It is possible to modify the switching levels by changing the operating points of the transistors electrically, e.g., by modifying the current delivered by the current source. In many applications, the width of the hysteresis does not play a significant role. The width can be decreased by increasing the attenuation of the resistive divider formed by R1 and R2, but one should not go below 1 V, because sensitivity to variations in circuit components or supply voltage may result.

FIGURE 14.3.23 Input-output characteristic of Schmitt circuit, showing rectangular hysteresis.
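Equations (17) and (18) give the two switching levels directly. The sketch below evaluates Vh, Vl, and the hysteresis width; the component values are assumptions chosen only to illustrate the calculation.

    # Schmitt-trigger switching levels, Eqs. (17) and (18).
    E_CC = 10.0           # supply voltage, V (assumed)
    R_C = 2e3             # collector resistor Rc, ohms (assumed)
    R1, R2 = 10e3, 10e3   # feedback divider, ohms (assumed)
    I = 2e-3              # current of the emitter source, A (assumed)

    v_h = E_CC * R1 / (R1 + R2 + R_C)               # Eq. (17)
    v_l = (E_CC - R_C * I) * R1 / (R1 + R2 + R_C)   # Eq. (18)
    print(f"Vh = {v_h:.2f} V, Vl = {v_l:.2f} V, "
          f"hysteresis = {v_h - v_l:.2f} V")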


FIGURE 14.3.24 Trigger input and output voltage of Schmitt circuit; solid line: emitter current delivered by a current source; broken line: delivered by a resistor.

FIGURE 14.3.25 Bipolar integrated version of a comparator.

Furthermore, the increased attenuation in the feedback loop must be compensated for by a corresponding increase in the differential-amplifier gain. Otherwise the loop gain may fall below 1, preventing the Schmitt circuit from functioning. A hysteresis of a few millivolts is therefore difficult to achieve. A much better solution is to use comparators instead of Schmitt triggers when hysteresis must be avoided. A typical integrated comparator is shown in Fig. 14.3.25. It is a medium-gain amplifier (10³ to 10⁴) with a very fast response (a few nanoseconds) and an excellent slew rate. Comparators are not designed to be used as linear amplifiers like op-amps. Their large gain-bandwidth product makes them inappropriate for feedback configurations; they inevitably oscillate in any type of closed loop. In the open-loop configuration they behave like clipping circuits with an exceedingly small input range, which is equal to the output-voltage swing, usually 5 V, divided by the gain. The main difference compared with Schmitt triggers is the fact that comparators do not exhibit hysteresis. This makes a significant difference when considering very slowly varying input signals.

In the circuit of Fig. 14.3.21 the common-emitter current source can be replaced by a resistor. This solution introduces some common-mode sensitivity: the output signal does not look square, as shown in Fig. 14.3.24 by the dashed lines. If unwanted, this effect can be avoided by taking the output signal at the collector of T2 through an additional resistor, since the current flowing through T2 is constant. An additional advantage of the latter circuit is that the output load does not interfere with the feedback loop.

INTEGRATED-CIRCUIT SCHMITT TRIGGERS7

Basically, a Schmitt trigger can always be implemented by means of an integrated differential amplifier and an external positive-feedback loop. If the amplifier is an op-amp, poor switching characteristics are obtained unless an amplifier with a very high slew rate is chosen. If a comparator is used instead of an op-amp, switching will be fast, but generally the output signal will exhibit spurious oscillations during the transition period. The oscillatory character of the output signal is due to the trade-off between speed and stability that is typical of comparators compared with op-amps. Any attempt to create a dominant pole would inevitably ruin their speed performance.


The integrated-circuit counterpart of the Schmitt trigger is shown in Fig. 14.3.26. It consists of two comparators connected to a resistive divider formed by three equal resistors R. Input terminals 2 and 6 are normally tied together. The output signals of the two comparators control a flip-flop. When the input voltage is below Ecc/3, the flip-flop is set. Similarly, when the input voltage exceeds 2Ecc/3, the circuit is reset. The actual state of the flip-flop in the range between Ecc/3 and 2Ecc/3 depends on how the input voltage enters the critical zone. For instance, if the input voltage starts below Ecc/3 and is increased so that it changes the state of comparator C2 but not that of comparator C1, both S and R are equal to 1 and the flip-flop remains set. The state changes only if the input voltage exceeds the limit 2Ecc/3. Similarly, if the input voltage is lowered, setting the flip-flop will occur only when Ecc/3 is reached. Hence the circuit of Fig. 14.3.26 behaves like a Schmitt trigger with a hysteresis width of Ecc/3, depending only on the resistive divider 3R and the offset voltages of the two comparators C1 and C2. This circuit can therefore be considered a precision Schmitt trigger and is widely used as such.

FIGURE 14.3.26 Precision integrated Schmitt trigger.

Monostable and Astable Circuits (Discrete Components)4

Monostable and Astable Circuits (Discrete Components)4 Figures 14.3.27 and 14.3.28 show monostable and astable collector-coupled pairs, respectively. The fundamental difference between these circuits and bistable networks lies in the way DC biasing is achieved. In Fig. 14.3.27a, T2 is normally conducting except when a negative trigger pulse drives this transistor into the cutoff region. T1 necessarily undergoes the inverse transition, suddenly producing a large negative voltage step at the base of T2 shown in Fig. 14.3.27b. VBE2, however, cannot remain negative since its base is connected to the positive-voltage supply through the resistor R1. The base voltage rises toward Ecc, with a time constant R1C. As soon as the emitter junction of T2 becomes forward-biased, the monostable circuit changes its state again and the circuit remains in that state until another trigger signal is applied. The time T between the application of a trigger pulse and the instant T2 saturates again is given approximately by T = t ln 2 = 0.693t

FIGURE 14.3.27 Monostable collector-coupled pair: (a) circuit; (b) output vs. time characteristics.


FIGURE 14.3.28 Astable (free-running) flip-flop: (a) circuit; (b) output vs. time characteristics.

where τ = R1C. The supply voltage Ecc is supposed to be large compared with the forward voltage drop of the emitter junction of T2 for this expression to apply. The astable collector-coupled pair, or free-running multivibrator (Fig. 14.3.28), operates according to the same scheme except that steady-state conditions are never reached. The base-bias networks of both transistors are connected to the positive power supply. The period of the multivibrator thus equals 2T if the circuit is symmetrical, and the repetition rate Fr is given by

Fr = 1/(2τ ln 2) ≈ 0.7/RC        (20)

INTEGRATED MONOSTABLE AND ASTABLE CIRCUITS7

The discrete-component circuits discussed above are interesting mainly for their inherent simplicity and illustrative value. Improved, integrable monostable and astable circuits are shown below. The circuit shown in Fig. 14.3.29 is derived from the Schmitt trigger. It is widely used to implement high-frequency (100 MHz) relaxation oscillators. The capacitor C provides a short circuit between the emitters of the two transistors, closing the positive-feedback loop during the regeneration time. As long as one or the other of the two transistors is cut off, C offers a current sink to the current source connected to the emitter of the blocked transistor. The capacitor thus is periodically charged and discharged by the two current sources, and the voltage across its terminals exhibits a triangular waveform. The collector current of T1 is either zero or I1 + I2, so that the resulting voltage step across Rc is (RB || RC)(I1 + I2). Since the base of T2 is directly connected to the collector of T1, the same voltage step controls T2 and determines the width of the input hysteresis, i.e., the maximum amplitude of the voltage sweep across C. The period of oscillation is computed from

FIGURE 14.3.29 Discrete-component emitter-coupled astable circuit.

T = C(1/I1 + 1/I2)(RB || RC)(I1 + I2)        (21)


FIGURE 14.3.30 Waveforms of circuit in Fig. 14.3.29.

When, as is usual, both current sources deliver equal currents, the expression for T reduces to

T = 4(RB || RC)C        (22)

T does not depend, in this case, on the amplitude of the current, because changes in current modify the amplitude and the slope of the voltage sweep across C in the same proportion. A review of the waveforms obtained at various points of the circuit is given in Fig. 14.3.30.

Integrated monostable and astable circuits can be derived from the precision Schmitt trigger circuit shown in Fig. 14.3.26. The monostable configuration is illustrated in Fig. 14.3.31. Under steady-state conditions, the flip-flop is set and the transistor T saturated. The input voltage Vin is kept somewhere between the two triggering levels Ecc/3 and 2Ecc/3. To initiate the monostable condition, it is sufficient that Vin drop below Ecc/3, even for a very short time, in order to reset the flip-flop and prevent T from conducting. The current flowing through R1 then charges C until the voltage VC reaches the triggering level

FIGURE 14.3.31 Monostable precision Schmitt trigger: (a) circuit; (b) waveforms.


FIGURE 14.3.32 Astable precision Schmitt trigger: (a) circuit; (b) waveforms.

2Ecc/3. Immediately thereafter, the circuit switches to the opposite state and transistor T discharges C. The monostable circuit remains in that state until a new triggering pulse Vin is fed to comparator C2. The waveforms Vin, VC, and Vout are shown in Fig. 14.3.31b. This circuit is also called a timer, because it provides constant-duration pulses triggered by a short input pulse. A slight modification of the external control circuitry may change this monostable into a retriggerable timer. The astable version of the precision Schmitt trigger is shown in Fig. 14.3.32. Its operation is easily understood from the preceding discussion. The capacitor C is repetitively discharged through R2 in series with the saturated transistor T and recharged through R1 + R2. The voltage VC therefore consists of two distinct exponentials clamped between the triggering levels Ecc/3 and 2Ecc/3. The frequency is equal to 1.44/(R1 + 2R2)C. Because of the precision of the triggering levels (10⁻³ to 10⁻⁴), short-term frequency stability can be achieved.
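Both timing relations follow from the exponential charging between the Ecc/3 and 2Ecc/3 levels: the monostable pulse width is R1C ln 3 ≈ 1.1R1C, and the astable frequency is the 1.44/(R1 + 2R2)C just quoted. A minimal sketch with assumed component values:

    import math

    # Timing of the precision-Schmitt monostable (Fig. 14.3.31) and
    # astable (Fig. 14.3.32). Component values are assumptions.
    R1, R2, C = 10e3, 4.7e3, 100e-9    # ohms, ohms, farads

    # Monostable: C charges from 0 toward Ecc and trips at 2Ecc/3.
    t_pulse = R1 * C * math.log(3)            # = 1.1 * R1 * C
    # Astable: C shuttles between Ecc/3 and 2Ecc/3.
    t_high = (R1 + R2) * C * math.log(2)      # charge through R1 + R2
    t_low = R2 * C * math.log(2)              # discharge through R2
    freq = 1 / (t_high + t_low)               # = 1.44 / ((R1 + 2 R2) C)

    print(f"pulse width = {t_pulse * 1e3:.2f} ms")
    print(f"f = {freq:.0f} Hz, duty cycle = "
          f"{100 * t_high / (t_high + t_low):.0f} %")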


CHAPTER 14.4

DIGITAL AND ANALOG SYSTEMS
Paul G. A. Jespers

INTEGRATED SYSTEMS

With the trend toward ever higher integration levels, an increasing number of ICs combine several of the circuits described above to build large systems, digital as well as analog or mixed, such as wave generators and A/D and D/A converters. Some of these are reviewed below.

COUNTERS4,7

To count any number N of events, at least k flip-flops are required, such that

2^k ≥ N        (1)

Ripple Counters

JK flip-flops with J and K inputs equal to 1 are divide-by-2 circuits. Hence, a cascade of k flip-flops, with each output Q driving the clock of the next circuit, forms a divide-by-2^k chain, or a binary counter (see Fig. 14.4.1). The main drawback of this circuit is its propagation delay, which increases with k. When all the flip-flops switch, the clock signal must ripple through the entire counter. Hence, enough time must be allowed to obtain the correct count. Furthermore, the delays between the various stages of the counter may produce glitches, e.g., when the outputs are decoded in parallel.

Synchronous Binary Counters

Minimization of delay and glitches can be achieved by designing synchronous instead of asynchronous counters. In a synchronous counter all the clock inputs of the JK flip-flops are driven in parallel by a single clock signal. The control of the counter is achieved by driving the J and K inputs by means of the AND combination of all the preceding Q outputs, as shown in Fig. 14.4.2. In this manner, all state changes occur on the same trailing edge of the clock signal. The only remaining requirement is to allow enough time between clock pulses for the propagation through a single flip-flop and an AND gate. The drawback, of course, is complexity that increases with the order k.

Divide-by-N Synchronous Counters

When the number N of counts is not a power of 2, auxiliary decoding circuitry is required. This is true also for counters that provide truncated and irregular count sequences. Their synthesis is based


FIGURE 14.4.1 Ripple counter formed by cascading flip-flops.

FIGURE 14.4.2 Synchronous counter.

on the so-called transition tables. The basic transition table of the JK flip-flop is derived easily from its truth table:

Qn    Qn+1    J    K    Line
0     0       0    ×     1
0     1       1    ×     2
1     0       ×    1     3
1     1       ×    0     4

Line 1, for instance, means that in order to maintain Q equal to 0 after a clock signal has occurred, the J input must be made equal to 0 whatever K may be (× stands for "don't care"). This can be easily verified with the truth table (lines 1 and 3). Hence, the synthesis of a synchronous counter consists simply of determining the J and K inputs of all flip-flops that are needed to obtain a given sequence of states. Once the J and K truth tables have been obtained, classical minimization procedures can be used to synthesize the counter. For instance, consider the design of a divide-by-5 synchronous counter, for which a minimum of three flip-flops is required. First, the present and next states of the flip-flops are listed. Then the required J and K inputs are found by means of the JK transition table:

Present state    Next state              JK inputs
Q3 Q2 Q1         Q3 Q2 Q1      J3  K3    J2  K2    J1  K1    Line
0  0  0          0  0  1       0   ×     0   ×     1   ×      0
0  0  1          0  1  0       0   ×     1   ×     ×   1      1
0  1  0          0  1  1       0   ×     ×   0     1   ×      2
0  1  1          1  0  0       1   ×     ×   1     ×   1      3
1  0  0          0  0  0       ×   1     0   ×     0   ×      4

Using Karnaugh minimization techniques, one finds

J3 = Q1Q2    K3 = 1    J2 = Q1    K2 = Q1    J1 = Q̄3    K1 = 1

The corresponding counter is shown in Fig. 14.4.3. With this procedure it is quite simple to synthesize a decimal counter with four flip-flops. The same method also applies to synthesis with D flip-flops.
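The minimized excitation equations are easily verified by stepping three simulated JK flip-flops through several clock periods; the sketch below recovers the five-state sequence 0, 1, 2, 3, 4, 0, …

    # Verify the divide-by-5 synchronous counter of Fig. 14.4.3.
    def jk_next(j, k, q):
        return (j & (1 - q)) | ((1 - k) & q)

    q3 = q2 = q1 = 0
    states = []
    for _ in range(10):
        states.append(4 * q3 + 2 * q2 + q1)
        # Excitation equations from the Karnaugh minimization above.
        j3, k3 = q1 & q2, 1
        j2, k2 = q1, q1
        j1, k1 = 1 - q3, 1
        # All three flip-flops are clocked simultaneously.
        q3, q2, q1 = (jk_next(j3, k3, q3),
                      jk_next(j2, k2, q2),
                      jk_next(j1, k1, q1))

    print(states)    # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]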

Up-Down Counters

Upward and downward counters differ from the preceding ones only in that an additional control bit is provided to select the proper J and K controls for an up or down count. The same synthesis methods are applicable.


Presettable Counters

Since a counter is basically a chain of flip-flops, parallel loading of any number within the count sequence is readily possible. This can be achieved by means of the set terminals. Hence, the actual count sequence can be initiated from any arbitrary number.

SHIFT REGISTERS3

Shift registers are chains of flip-flops connected so that the state of each can be transferred to its left or right neighbor under control of the clock signal. Shift registers can be built with JK as well as D flip-flops. An example of a typical bidirectional shift register is shown in Fig. 14.4.4. Shift registers, like counters, may be loaded in parallel or serial mode, and the same holds for reading out.

In MOS technology, dynamic shift registers can be implemented in a simple manner. Memorization of states occurs electrostatically rather than by means of flip-flops: the information is stored in the form of charge on the gate of an MOS transistor. The main advantage of MOS dynamic shift registers is the area saving resulting from the replacement of flip-flops by single MOS transistors. A typical dynamic 2-phase shift register (Fig. 14.4.5) consists of two cascaded MOS inverters connected by means of series switches. These switches, T3 and T6, are divided into two classes: odd switches controlled by the clock φ1, and even switches controlled by φ2. The control signals determine two nonoverlapping phases. When φ1 turns on the odd switches, the output signal of the first inverter controls the next inverter, but T6 prevents the information from going further. When φ2 turns on, data are shifted one half cycle further. The signal jumps from one inverter to the next until it reaches the last stage.

FIGURE 14.4.3 Synchronous divide-by-5 circuit.

FIGURE 14.4.4 Bidirectional shift register.


FIGURE 14.4.5 A 2-phase MOS dynamic shift register.

MULTIPLEXERS, DEMULTIPLEXERS, DECODERS, ROMS, AND PLAS3–8

A multiplexer is a combinatorial circuit that selects binary data from multiple input lines and directs them to a single output line. The selected input line is chosen by means of an address word. A representation of a 4-input multiplexer (MUX) is shown in Fig. 14.4.6, as well as a possible implementation. MOS technology lends itself to the implementation of multiplexers based on pass transistors. In the example illustrated by Fig. 14.4.7, only a single input is connected to the output, through two conducting series transistors, according to the S1S0 address code. Multiplexers may be used to implement canonical logical equations. For instance, the 4-input MUX considered above corresponds to the equation

y = x0S̄0S̄1 + x1S0S̄1 + x2S̄0S1 + x3S0S1        (3)

FIGURE 14.4.6 A 4-input multiplexer (MUX) with a 2-bit (S1S0) address.


FIGURE 14.4.7 (a) NMOS and (b) CMOS implementations of multiplexers.

Hence, logical functions of the variables S0 and S1 can be synthesized by means of multiplexers. For instance, an EXOR circuit corresponds to x0 = x3 = 0 and x1 = x2 = 1. Demultiplexers (DEMUX) perform the inverse passage, from a single input line to several output lines, under the control of an address word. They implement logic functions that are less general than those performed by multiplexers, because only minterms are involved. The symbolic representation of a 4-output DEMUX is shown in Fig. 14.4.8 with a possible implementation. The MUX represented in Fig. 14.4.7 may seem attractive for demultiplexing, but one should not forget that when a given channel is disconnected, the data across the corresponding output remain unchanged. In a DEMUX, unselected channels should take a well-defined state, whether 0 or 1.
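A behavioral model shows both roles of the multiplexer, data selection and logic synthesis. With x0 = x3 = 0 and x1 = x2 = 1, Eq. (3) reduces to the EXOR of the two address bits:

    # 4-input multiplexer of Eq. (3): the address S1S0 selects one input.
    def mux4(x, s1, s0):
        return x[2 * s1 + s0]

    # Logic synthesis: an EXOR of S1 and S0 from fixed data inputs.
    EXOR_INPUTS = (0, 1, 1, 0)      # x0, x1, x2, x3
    for s1 in (0, 1):
        for s0 in (0, 1):
            print(f"S1={s1} S0={s0}: y = {mux4(EXOR_INPUTS, s1, s0)}")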


FIGURE 14.4.8 A 4-output demultiplexer (DEMUX).

A decoder is a DEMUX with a constant input. Decoders are commonly used to select data stored in memories. Figure 14.4.9 shows a simple NMOS decoder. In this circuit, all unselected outputs are grounded while the selected row is at logical 1. Any row may be viewed as a circuit implementing a minterm. For instance, the first row corresponds to

y0 = ¬(S1 + S0) = S̄1S̄0        (4)

FIGURE 14.4.9 An NMOS decoder circuit.


FIGURE 14.4.10 A ROM memory.

When a decoder drives a NOR circuit, a canonical equation is obtained again. Figure 14.4.10 shows a decoder driving several NOR circuits, which are implemented along vertical columns. This circuit is called a read-only memory (ROM). One can easily recognize an EXOR function (y1) and its complement (y0) in the example. The actual silicon implementation strictly follows the pattern illustrated by Fig. 14.4.10. Notice that the decoder block and the ROM column block have the same structure after one of them is turned by 90°. When a ROM is used to implement combinatorial logic, the number of outputs usually is restricted to only those minterms which are necessary to achieve the desired purpose. The ROM is then called a programmable logic array (PLA). ROMs as well as PLAs are extensively used in integrated circuits because they provide the means to implement logic in a very regular manner. They contribute substantially to area minimization and lend themselves to automatic layout, reducing design time (silicon compilers).

A large number of functions can be implemented by means of the circuits described above: circuits converting the format of digital data, circuits coding pure binary into binary-coded decimal (BCD) or decimal data, and so forth. The conversion from a 4-bit binary-coded number into a 7-segment display format by means of a PLA is illustrated by Fig. 14.4.11. All the required OR functions are obtained by means of the right-hand decoder plane, while the left one determines the AND functions (they are called, respectively, the OR and AND planes). In applications where multiple-digit displays are required, rather than repeating the circuit of Fig. 14.4.11 as many times as there are digits, a combination of MUX and DEMUX circuits and a single PLA can be used. An example is shown in Fig. 14.4.12. The 4-bit codes representing four digits are multiplexed in a quadruple MUX in order to drive a PLA 7-segment generator. Data appear sequentially on the seven common output lines connected to the four displays. Selection of a given display is achieved by a decoder driven by the same address as the MUX circuit. If the cyclic switching is done at a rate of 60 Hz or more, the human eye cannot detect the flicker.
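In software terms, the PLA of Fig. 14.4.11 is a lookup table addressed by the 4-bit code. The sketch below uses an assumed segment encoding (bits a to g, with a as the least significant bit); the actual plane programming of Fig. 14.4.11 may differ.

    # 7-segment generator as a ROM lookup, one 7-bit word per digit.
    # Bit order (LSB first): a, b, c, d, e, f, g -- an assumed convention.
    SEGMENT_ROM = (
        0b0111111,  # 0: a b c d e f
        0b0000110,  # 1: b c
        0b1011011,  # 2: a b d e g
        0b1001111,  # 3: a b c d g
        0b1100110,  # 4: b c f g
        0b1101101,  # 5: a c d f g
        0b1111101,  # 6: a c d e f g
        0b0000111,  # 7: a b c
        0b1111111,  # 8: a b c d e f g
        0b1101111,  # 9: a b c d f g
    )

    def segments(code):
        """Return the set of lit segments for a 4-bit BCD digit."""
        word = SEGMENT_ROM[code]
        return {s for i, s in enumerate("abcdefg") if (word >> i) & 1}

    print(sorted(segments(2)))    # ['a', 'b', 'd', 'e', 'g']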

MEMORIES3

Memories provide storage for large quantities of binary data. Individual bits are stored in minimum-size memory cells that can be accessed within two- or three-dimensional arrays. Read-only memories (ROMs) provide access only to permanently stored data. Programmable ROMs (PROMs) allow data modifications only during special write cycles, which occur much less frequently than readout cycles. Random access memories (RAMs) provide equal opportunities for read and write operations. RAMs store data either in bistable circuits (flip-flops: static RAMs, or SRAMs) or in the form of charge across a capacitor (dynamic RAMs, or DRAMs).


FIGURE 14.4.11 A 7-segment PLA driver.

Static Memories

Static memory cells are arrays of integrated flip-flops. The state of each flip-flop represents a stored bit. Readout is achieved by selecting a row (word-line, WL) and a column (bus or bit-lines, BL and its complement B̄L) crossing each other over the chosen cell. Writing occurs in the same manner. A typical six-transistor MOS memory cell is shown in Fig. 14.4.13. The word-line controls the two transistors that connect the flip-flop to the read-write bus (BL and B̄L). In order to read out the stored data nondestructively, the read-write bus must be precharged. The inverter that is in the low state discharges the corresponding bit-line. Writing data either confirms or changes the state of the flip-flop. When a change must happen, the high bit-line must overrule the corresponding low-state inverter, while the low bit-line simply discharges the output node of the high inverter. The second event takes less time than the first because of the large difference between bus and inverter node capacitances: bus capacitances are at least 10 times larger than cell node capacitances. The rapid discharge of the high inverter output node blocks the other half of the flip-flop before it has a chance to discharge the high bus. In order to prevent a read cycle from


FIGURE 14.4.12 A 4-digit, 7-segment multiplexed display.

becoming a write cycle, it is important to equalize the voltages of the bit-lines prior to any attempt to read out. This is achieved by means of a single equalizing transistor on top of the read-write bus. This transistor shorts the two bit-lines just long enough to neutralize any bit-line voltage difference, whatever the mean voltage may be. The pull-up circuitry precharging the read-write bus accommodates conflicting requirements: it must load the bit-lines as fast as possible but not counteract the discharge of the bit-line that is tied to the low data during readout. Usually, the pull-up circuitry is clocked and driven by the same clock that controls the equalizing transistor.

Cell size and power consumption are the two key items that determine the performance and size of present memory chips, which may contain several million transistors. Cell sizes have shrunk continuously until they are a few tens of microns square. Static power is minimized by using CMOS instead of resistively loaded inverters. Most of the power is needed to provide fast charge and discharge cycles of the line capacitances. Therefore a distinction is generally made between standby and active conditions. The charge-discharge processes imply short but very large currents. The design of memories (static as well as dynamic) has always been at the forefront of the most advanced technologies.

FIGURE 14.4.13 The basic six-transistor circuit of a static MOS memory cell.


FIGURE 14.4.14 The one-transistor MOS memory cell; (a) IC implementation; (b) equivalent circuit.

Dynamic Memories8,9

The trend toward ever larger memory chips has led to the introduction of single-transistor memory cells. Here the word-line controls the gate of the MOS switch that connects the single bit-line to a storage capacitor (Fig. 14.4.14). Charge is pumped from (or fed back to) the bit-line by properly choosing the voltage of the bit-line. The actual data consist of charge (or absence of charge) stored in the capacitor in the inversion layer below a field plate. Typical storage capacitances are 50 to 100 fF. The useful charge is quite small, around 0.1 or 0.2 pC. In order to read data, this charge is transferred to the bit-line capacitance, which is usually one order of magnitude larger. The resulting voltage change of the bit-line is thus very small, typically 100 to 200 mV. In order to restore the correct binary data, amplification is required. The amplifier must fulfill a series of requirements: it must fit in the small space between cells, be sensitive, and respond very rapidly, but also distinguish data from the inevitable switching noise injected by the overlap capacitance of the transistor in series with the storage capacitor. To solve this problem, dynamic memories generally use two bit-lines instead of a single one. Readout of a cell always occurs at the same time as the selection of a single identical dummy cell. This compensates for switching noise insofar as the parasitic capacitances of the two switching transistors track each other. The key idea behind the detection amplifier is the metastable state of a flip-flop. The circuit is illustrated in Fig. 14.4.15. Before readout, the common-source terminal is left open. Hence, assuming the bit-line voltages are the same, the flip-flop behaves like two cross-coupled diode-connected transistors.
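The order-of-magnitude figures quoted above follow from simple charge sharing between the cell and the bit-line; a minimal sketch with the numbers of the text (the stored level is an assumed value):

    # Charge sharing between a DRAM cell and its bit-line.
    C_CELL = 100e-15     # storage capacitance, F (50 to 100 fF typical)
    C_BITLINE = 1e-12    # bit-line capacitance, about 10x larger, F
    V_CELL = 1.5         # stored level above the precharge level, V (assumed)

    q = C_CELL * V_CELL                  # useful charge: ~0.1 to 0.2 pC
    dv = q / (C_CELL + C_BITLINE)        # swing seen by the sense amplifier
    print(f"Q = {q * 1e12:.2f} pC, bit-line swing = {dv * 1e3:.0f} mV")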

FIGURE 14.4.15 The readout amplifier of dynamic one-transistor-per-cell MOS memories consists of a flip-flop in the metastable state in order to enhance sensitivity and speed and to minimize silicon area.


The common-source voltage adjusts itself to the pinch-off voltage of the transistors. After readout has occurred, a small voltage imbalance exists between the bit-lines. The sign of this imbalance is determined by the data previously stored in the cell. If the common-source node voltage is then reduced somewhat, one of the two transistors becomes slightly conducting while the other remains blocked. The conducting transistor slightly discharges the bit-line with the lower voltage, increasing the voltage imbalance. The larger this difference, the faster the common-source node can be lowered without risk of switching on the second transistor. After a short time, the imbalance is large enough to connect the common source to ground. The final state of the flip-flop reproduces the content of the selected cell. Notice that data read during the readout cycle are now available for rewriting in the cell: automatic rewriting is performed simply by letting the transistor in series with the storage capacitor conduct longer. Because storage is likely to be corrupted by leakage current, this same procedure is repeated regularly, even when data are not demanded, in order to keep the memory content alive. Dynamic memories are the densest circuits designed. Memory sizes currently attain 16 Mbit, and 256-Mbit memories are being developed.

Commercially Available Memory Chips

Memory arrays usually consist of several square arrays packed on a single chip. In order to access data, the user provides an address. The larger the memory size, the longer the address. In order to reduce the pin count, address words are divided into fields that control fewer pins in a sequential manner. Many memories are accessed by a row address strobe (RAS) followed by a column address strobe (CAS), each one requiring only half of the total number of address bits. Memories represent a very substantial part of the semiconductor market. Besides RAMs (random access memories), a large share of nonvolatile memories is available, comprising ROMs (read-only memories), programmable read-only memories (PROMs), and electrically alterable read-only memories (EAROMs), which are considered below.

RAM. Random access memories (RAMs) are either static or dynamic memory arrays like those described above. They are used to read and write binary data. Storage is guaranteed as long as the power supply remains on.

ROM. In read-only memories, the information is frozen; it can only be read out. Access is the same as in RAMs. Memory elements are single transistors. Whether a cell is conducting or not has been determined during the fabrication process: some MOS transistors have a thin gate-oxide layer while others have a thick oxide layer, or some are connected to the access lines while others are not.

PROM. The requirement of committing to a fixed memory content, inherent in the structure of ROMs, is a serious disadvantage in many applications. PROMs (programmable ROMs) allow the manufacturer or the user to program ROMs. Various principles exist:

• With mask-programmable ROMs, a single mask is all the manufacturer needs to implement the client code.
• With fuse-link programmable ROMs, the actual content is written in the memory by blowing a number of microfuses. This allows the customer to program the ROM, but the final state is irreversible, as in mask-programmable ROMs.

EPROM. In electrically programmable ROMs the data are stored as shifts of the threshold voltages of the memory transistors. Floating-gate transistors are commonly used for this purpose. Loading the gate occurs by hot electrons tunneling through the oxide from an avalanching junction toward the isolated gate. The charges trapped in the gate can only be removed by ultraviolet light, which provides the energy required to cross the oxide barrier.

EEPROM. Electrically erasable PROMs use metal-nitride-oxide-silicon transistors in which the charges are trapped at the nitride-oxide interface. The memory principle is based on Fowler-Nordheim tunneling to move charges from the substrate into the oxide or vice versa. The electric field is produced by a control gate. EAROMs (electrically alterable ROMs) use the same principle to charge or discharge a floating gate, with some additional features providing better yields.

DIGITAL-TO-ANALOG CONVERTERS (D/A OR DAC)10

Converting data from the digital to the analog domain may be achieved in a variety of ways. One of the most obvious is to add electrical quantities such as voltages, currents, or charges following a binary-weighted scale. Circuits illustrating this principle follow. They are of two kinds: first, converters aiming at integral linearity and, second, converters exchanging integral linearity for differential linearity.


FIGURE 14.4.16 Integral linearity converters. Switching is implemented by (a) MOS transistors and (b) bipolar circuits.

Integral Linearity Converters

A binary scale of voltages and currents is easily obtained by means of the well-known R-2R network. In the circuit shown in Fig. 14.4.16a, binary-weighted currents flowing in the vertical resistances are either dumped to ground or injected into the summing node of an operational amplifier. The positions of the switches reproduce the code of the digital word to be converted. Practical switches are implemented by means of MOS transistors. Their widths double going from left to right to keep the voltage drops between drains and sources constant. The resulting constant offset voltage can be compensated easily.

Another approach, better suited for bipolar circuits, is illustrated in Fig. 14.4.16b. Here the currents are injected into the emitters of transistors whose outputs feed bipolar current switches performing the digital code conversion. To keep all emitters at the same potential, small resistors are placed between the bases of the transistors. They introduce voltage drops of 18 mV (UT ln 2) to compensate for the small base-to-emitter voltage changes resulting from the systematic halving of collector current going from left to right.
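To make the binary weighting concrete, the following minimal sketch (in Python) computes the ideal output of an R-2R converter such as that of Fig. 14.4.16a. The function name and values are illustrative assumptions, not part of the handbook circuit.

def r2r_dac_output(code, n_bits, v_ref):
    # Ideal output of an n-bit R-2R DAC for an integer input code.
    assert 0 <= code < 2 ** n_bits
    # Bit k (MSB at k = n_bits - 1) contributes v_ref / 2**(n_bits - k).
    return sum(v_ref / 2 ** (n_bits - k)
               for k in range(n_bits) if (code >> k) & 1)

# Example: 4-bit converter with a 10-V reference.
for code in (0b0001, 0b1000, 0b1111):
    print(format(code, "04b"), "->", r2r_dac_output(code, 4, 10.0), "V")

The MSB alone produces half of the reference, and the all-ones code produces Vref(1 − 2⁻ᴺ), as expected of any binary-weighted converter.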



FIGURE 14.4.17 Capacitive D/A converter with charge integration op-amp.

The accuracy of R-2R circuits is based on the matching of the resistances. It does not exceed 10 bits in practice and may reach 12 bits when laser trimming is used.

Switched capacitors are also used to perform D/A conversion. Capacitors are easy to integrate and offer superior temperature characteristics compared with thin-film resistors. However, they suffer from several drawbacks and require special care in order to provide good matching. Integrated capacitors are sandwiches of metal-oxide-silicon layers (MOS technology) or polysilicon-oxide-polysilicon layers (double-poly technology). The latter exhibit highly linear behavior and an extremely small temperature coefficient (typically 10 to 20 ppm/°C). In order to improve geometrical tracking, any capacitor must be a combination of “unit” capacitors that represent the minimum-sized element available (typically 100 fF). MOS capacitors have relatively large stray capacitances to the substrate through their inversion layer, about 10 to 30 percent of their nominal value, depending on the oxide thickness. Double-poly capacitors offer better figures, but their technology is more elaborate. Whatever choice is made, the circuits must be designed to be inherently insensitive to stray capacitance.

This is the case for the circuit shown in Fig. 14.4.17. In this circuit, a change of the position of a switch results in a charge redistribution between a weighted capacitor and the feedback capacitor C0. The transferred charge is stored on the feedback capacitor so that the output voltage stays constant, provided leakage currents are small (typically 10⁻¹⁵ A). The stray capacitors are illustrated in Fig. 14.4.17. They have no influence on the converter accuracy. Indeed, Ca and Cc are charged and discharged according to the position changes of switch S3, but the currents through those capacitors do not flow through the summing junction. On the other hand, Cb and Cd are always discharged and therefore do not influence the accuracy.

Another version of a capacitive D/A converter is shown in Fig. 14.4.18. In this network, changing the position of switches S3 and S′ produces a charge redistribution between the capacitor at the extreme left and the parallel combination of all the other capacitors, which represent a total of 8C. Hence the upper node undergoes a voltage swing equal to Vref /2. A binary scale of voltages consequently may be produced according to the positions of the various switches. Switch S′ fixes the initial potential of the upper node and allows the discharge of all capacitors prior to conversion. There is no specific requirement that the upper node be connected to ground or to any fixed potential V0, provided Vout is evaluated against V0.
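The charge-redistribution arithmetic of a network like that of Fig. 14.4.18 can be checked with a few lines of Python. This is an idealization (no stray capacitance, ideal switches); capacitor values are expressed in units of C, and a terminating unit capacitor is assumed.

def cap_dac_output(code, n_bits, v_ref):
    # Binary-weighted capacitors C, 2C, ..., 2**(n-1) C, in units of C.
    caps = [2 ** k for k in range(n_bits)]
    c_total = sum(caps) + 1            # +1 for the terminating unit capacitor
    q = sum(c for k, c in enumerate(caps) if (code >> k) & 1)
    return v_ref * q / c_total         # upper-node swing after redistribution

print(cap_dac_output(0b100, 3, 5.0))   # MSB alone: 5 V * 4/8 = 2.5 V = Vref/2

The printed case reproduces the statement above: switching the largest capacitor against the remaining 8C total swings the upper node by Vref /2.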

FIGURE 14.4.18 Capacitive D/A converter with unity-gain buffer.



FIGURE 14.4.19 Dynamic element matching converter principle.

Notice that the stray capacitance Cb degrades the converter accuracy, while Ca has no effect at all. Careful layout therefore is required to minimize Cb, unless this stray capacitance is included in the evaluation of the capacitor C at the extreme right. For this reason, only small numbers of bits should be considered. The unavoidable exponential growth of capacitance with the number of bits is relieved to some extent because the capacitance area increases as the square of the dimensions.

Another widely used technique to perform D/A conversion is the paralleling of many identical transistors controlled by a single base-to-emitter or gate-to-source voltage source. The output terminals are tied together in order to implement banks of binary-weighted current sources. This requires 2^(N+1) transistors to make an N-bit converter, which is very efficient since transistors are the smallest on-chip devices available. Usually all the transistors are placed in an array and controlled by a row and column decoder, as in memories. One of the interesting features of this type of converter is its inherent monotonicity. Furthermore, if access to the individual transistors follows a pseudorandom pattern, many imperfections resulting from processing, such as an oxide gradient in the MOS process, have counterbalancing effects. Hence, an accuracy of 10 bits may be achieved with transistors whose standard deviation is much worse.
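The counterbalancing effect of pseudorandom access can be illustrated numerically. In the sketch below, a linear gradient stands in for all processing imperfections, and the unit count and gradient magnitude are assumptions chosen for illustration.

import random

random.seed(1)
N_UNITS = 1023                      # unit elements of a 10-bit converter
# Unit currents skewed by a linear process gradient across the array:
gradient = [1.0 + 1e-3 * (i / N_UNITS - 0.5) for i in range(N_UNITS)]

def dac(code, order):
    # Sum the first `code` unit currents, taken in the given order.
    return sum(gradient[i] for i in order[:code])

sequential = list(range(N_UNITS))
shuffled = random.sample(sequential, N_UNITS)
print("sequential:", dac(512, sequential))   # systematically skewed by the gradient
print("randomized:", dac(512, shuffled))     # the same error now behaves like noise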

A converter11 with excellent absolute accuracy is illustrated in Fig. 14.4.19. It is based on high-accuracy divide-by-2 blocks like the one shown in the right part of the illustration. Each block consists of a Widlar circuit with equal resistances in the emitters of transistors T1 and T2 in order to split the current I into two approximately equal currents, I1 and I2 (1 percent tolerance). A double-throw switch is placed in series with the Widlar circuit in order to alternate I1 and I2 at the rate of the clock f. Provided the half clock periods t1 and t2 can be made equal within 0.1 percent, a condition that is easily met, one obtains two currents whose averages I3 and I4 represent I/2 with an accuracy approaching 10⁻⁵. This technique, known as the dynamic element matching method, has been successfully integrated in bipolar technology, offering an accuracy of 14 bits without expensive element trimming. A band-gap current generator provides the reference current Iref, which is compared with the right-side output current of the first divide-by-2 block. A high current-gain amplifier closes the feedback loop controlling the base terminals of T1 and T2 of the first block.

Segment Converters

In some applications, accuracy does not imply absolute linearity but rather differential linearity, which is very critical because errors between two successive steps larger than half an LSB cannot be tolerated. The difficulty occurs when the MSBs change. To overcome the problem, segment converters were designed. The idea is to divide the full conversion scale into segments. A D/A converter with a maximum of 10 bits is used within segments. Passing from one segment to another is achieved in a smooth manner, as in the circuit illustrated by Fig. 14.4.20.12 In this circuit, the 10-bit segment converter is powered by one of the four bottom current sources.


FIGURE 14.4.20 Segment D/A converter.

A 2-bit decoder, which is controlled by the two MSBs, makes the appropriate connections. When the two MSBs are 00, current source I1 is chosen and the three others are short-circuited to ground. When the MSB pattern changes to 01, the switches take the positions illustrated in the figure. Then I2 supplies the 10-bit converter while I1 is injected directly into the summing node of the op-amp. Hence, no change occurs in the main current component. The differential linearity is entirely determined by the 10-bit converter. The price paid for this improvement is a degradation of absolute linearity. Small discrepancies among the four current sources introduce slope or gain changes between segments. The absolute linearity of the converter thus does not reach its differential linearity specifications.

Voltage segment converters can also be integrated. Instead of parallel current sources, series voltage sources are required. This is achieved by means of a chain of equal resistors dividing the reference voltage into segments. The D/A converter is connected along the chain by means of two unity-gain buffers in order not to affect input voltages. Passing from one segment to the next implies that the buffers are interchanged in order to avoid the degradation of differential linearity that could result from different offsets in the buffers. An integrated 16-bit D/A converter has been built along these lines.13 When a switched-capacitor D/A converter is embedded in a voltage segment converter, no buffer amplifiers are needed because no dc current drain is required. This property greatly simplifies the design of moderate-accuracy converters and has been used extensively in the design of integrated codecs.

Codecs. Codecs are nonlinear converters used in telephony. To cover the wide dynamic range of speech satisfactorily with only 8 bits, steps of variable size must be considered: small steps for weak signals, and large steps when the upper limit of the dynamic range is approached. The four LSBs of a codec-coded word define a step size within a segment whose number is given by the three following bits. The polarity of the sample is given by the MSB. Going from one segment to the next implies doubling the step size in order to generate the nonlinear conversion law (µ-law), as sketched below.

In the codec circuit of Fig. 14.4.21,14 bottom plates are connected either to the middle of the resistive divider, considered as an ac ground (position 1); or to Vlow or Vhigh (position 2 plus switch S1 or S2), representing, respectively, the negative and positive reference voltage sources; or to a third position (position 3) connecting a single capacitor to an appropriate node of the resistive voltage divider. Hence, the resistive divider determines the step size (four LSBs), the capacitive divider defines the segment (three following bits), and switches S1 or S2 control the sign (MSB).
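A hedged sketch of this sign/segment/step coding follows. Exact thresholds and bias terms differ among real codecs, so only the doubling-step principle described above should be taken literally.

def codec_encode(sample):
    # Return (sign, segment, step): 1 sign bit, 3 segment bits, 4 step bits.
    sign = 0 if sample >= 0 else 1
    mag = abs(sample)
    segment, base, size = 0, 0, 1    # the step size doubles with each segment
    while segment < 7 and mag >= base + 16 * size:
        base += 16 * size
        size *= 2
        segment += 1
    step = min((mag - base) // size, 15)   # magnitudes beyond the top segment clip
    return sign, segment, step

print(codec_encode(5))      # weak signal: segment 0, fine steps
print(codec_encode(4000))   # strong signal: high segment, coarse steps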

ANALOG-TO-DIGITAL (A/D) CONVERTERS (ADC)10

Some analog-to-digital converters are in fact D/A converters embedded in a negative feedback loop with appropriate logic (see Fig. 14.4.22). The analog voltage to be converted is sensed against the analog output of a D/A converter, which in turn is controlled by a logic block so as to minimize the difference sensed by the comparator.



FIGURE 14.4.21 Segment type codec.

Various strategies have been proposed for the logic block, depending on the type of D/A converter used. Most of the converters described previously have been implemented as ADCs in this manner. In particular, ladder and capacitive D/A converters are used in the so-called successive-approximation A/D converter. In these devices, the logic block first sets the MSB; then the comparator determines whether the analog input voltage is larger or smaller than the half-reference voltage. Depending on the result, the next bit is introduced, leaving the MSB unchanged if the analog signal is larger and resetting the MSB in the opposite case. The algorithm is repeated until the LSB is reached.

Successive-approximation ADCs are moderately fast, for the conversion time is strictly proportional to the number of bits and is determined by the settling time of the op-amp. The algorithm implies that the analog input voltage remains unaltered during the conversion time; otherwise it may fail. To avoid this, a sample-and-hold (SH) circuit must be placed in front of the converter.
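The successive-approximation strategy just described amounts to a binary search, as the following idealized sketch shows (ideal DAC and comparator; all names and values are illustrative).

def sar_convert(v_in, v_ref, n_bits):
    code = 0
    for k in range(n_bits - 1, -1, -1):
        trial = code | (1 << k)                   # tentatively set bit k
        if v_in >= v_ref * trial / 2 ** n_bits:   # comparator decision
            code = trial                          # keep the bit
    return code

# An 8-bit conversion of 3.30 V against a 5-V reference:
print(sar_convert(3.30, 5.0, 8))   # -> 168, i.e., 0b10101000

One comparator decision per bit is made, which is why the conversion time grows only linearly with the resolution.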

FIGURE 14.4.22 A D/A converter within a feedback loop forms an A/D converter.

A capacitive A/D converter is shown in Fig. 14.4.23.15 The conversion is accomplished by a sequence of three operations. During the first, the unknown analog input voltage is sampled. The corresponding positions of the switches are as illustrated in Fig. 14.4.23a. The total stored charge is proportional to the input voltage Vin. In the second step, switch SA is opened and the positions of all Si switches are changed (Fig. 14.4.23b).


FIGURE 14.4.23 An integrated switched-capacitor A/D converter: (a) sample mode; (b) hold mode; and (c) redistribution mode.

The bottom plates of all capacitors are then grounded, and consequently the voltage at the input node of the comparator equals −Vin. The third step is initiated by raising the bottom plate of the MSB capacitor from ground to the reference voltage Vref (Fig. 14.4.23c). This is done by changing again the position of the MSB switch and connecting Sb to Vref instead of Vin. The voltage at the input node of the comparator is thus increased by Vref /2, so that the comparator’s output is a logic 1 or 0 according to the sign of (Vref /2 − Vin). The circuit operates like any successive-approximation converter: Vin is compared with (Vref /2 + Vref /4) when the result of the previous operation is negative; otherwise the MSB switch returns to its initial position and only the comparison with Vref /4 is considered. After the same test has been carried out down to the LSB, the conversion is complete and the digital output may be determined from the positions of the switches. The voltage of the common capacitive node is then approximately equal to, or smaller than, the smallest incremental step.



FIGURE 14.4.24 A typical MOS comparator for an A/D converter.

Since this residual voltage is almost negligible, the stray capacitance of the upper common node is practically discharged. It thus has no effect on the overall accuracy of the converter; hence the problem encountered in the circuit of Fig. 14.4.18 does not occur. The name “charge redistribution A/D converter” was given to this circuit because the charge stored during the first sequence is redistributed among only those capacitors whose bottom plates are connected to Vref after the conversion algorithm is completed.

The design of comparators able to discriminate steps well below the offset voltage of MOS amplifiers is a problem deserving careful attention. An illustration of an MOS comparator is shown in Fig. 14.4.24.15 The amplifier comprises three parts: a double inverter, a differential amplifier, and a latch. The input-output nodes of the first inverter may be short-circuited by means of transistor T0 to bias the gates and drains of the two identical first inverters at approximately half the supply voltage. This procedure occurs during the “sampling” phase of the converter and corresponds to the grounding of switch SA in Fig. 14.4.23. As stated above in the comments concerning the circuit of Fig. 14.4.17, there is no necessity to impose zero voltage on this node during sampling; any voltage is suitable as long as the impedance of the voltage source is kept small. This is what occurs when T0 is on, since the input impedance of the converter is then minimum because of the negative-feedback loop across the first stage. Once the double inverter is preset, T0 opens and the output voltage of the second inverter is stored across the capacitor C1 by momentarily closing switch S1. Then the floating input node of the comparator senses the voltage change resulting from charge redistribution during a test cycle and reflects the amplified signal at the output of the second inverter. Switch S2 in turn samples the new output so that the differential amplifier sees only the difference between the two events. Any feedthrough noise from T0 is ignored, since it affects only the common mode of the voltages sampled on C1 and C2. Hence, the overall offset voltage of this comparator is equal to the offset voltage of the differential amplifier divided by the gain of the double-inverter input stage. Signals in the millivolt range can be sensed correctly regardless of the poor offset behavior of MOS transistors.

FIGURE 14.4.25 The dual-slope A/D converter principle.

Another integrated converter widely accepted in the field of electrical measurements is the dual-ramp A/D converter.13 The principle of this device is first to integrate the unknown voltage Vx during a fixed duration t1 (see Fig. 14.4.25). Then the input signal Vx is replaced by a reference voltage Vref. Since the polarities of both signals are opposite, the integrator provides an output ramp with the opposite slope. It takes a time t2 for the integrator output voltage to return from V0 to zero. Hence

V0 = −(1/RC) ∫₀^t1 Vx dt     (5)

and

V0 = (Vref /RC) t2     (6)


Consequently, if Vx is constant,

Vx = −Vref t2/t1     (7)

The time t1 is determined by a counter. At the moment the counter overflows, the input is switched from Vx to Vref. The counter is automatically reset, and a new counting sequence is initiated. When the integrator output voltage has returned to zero, the comparator stops the counting sequence; the actual count is thus a direct measure of t2.

The dual-ramp converter has a number of interesting features. Its accuracy is influenced neither by the value of the integration time constant RC nor by the long-term stability of the clock-frequency generator. The comparator offset can easily be compensated by autozeroing techniques. The only signal that actually controls the accuracy of the A/D converter is the reference voltage Vref. Excellent thermal stability of Vref can be achieved by means of band-gap reference sources. Another interesting feature of the dual-ramp A/D converter is that since Vx is integrated during a constant time t1, any spurious periodic signal with zero mean whose period is a submultiple of t1 is automatically canceled. Hence, by making t1 equal to a whole number of periods of the power supply, one obtains excellent hum rejection.
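Equation (7) is easy to exercise numerically. In the sketch below, counts stand in for t1 and t2, and the component values drop out exactly as stated above; all numbers are illustrative.

T1_COUNTS = 10_000       # fixed integration period (counter overflow)
V_REF = -10.0            # reference polarity opposite to that of Vx

def dual_slope_vx(t2_counts):
    # Eq. (7): Vx = -Vref * t2 / t1; R, C, and the clock rate cancel out.
    return -V_REF * t2_counts / T1_COUNTS

print(dual_slope_vx(4_273))   # -> 4.273 (volts)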

DELTA-SIGMA CONVERTERS10,20

Because the development of integrated systems is driven mainly by digital applications, mixed analog-digital circuits, especially converters, should be implementable without loss of accuracy in digital technologies, even the most advanced ones. Short channels, however, do not just improve bandwidth; they also negatively affect other features, such as dynamic range and 1/f noise, because both the supply voltage and the gate area of the MOS transistors become ever smaller. Therefore, a trade-off of speed and digital complexity for resolution in signal amplitude is needed. Delta-Sigma converters illustrate this trend. Their object is to abate the quantization noise in order to enhance the signal-to-noise ratio (SNR) of the output data (quantization noise is the difference between the continuous analog data and their discrete digital counterpart). When the number of steps of a converter increases, the quantization noise decreases; similarly, if we enhance the SNR, the resolution increases. Delta-Sigma converters take advantage of oversampling and noise-shaping techniques to improve the SNR.

Oversampling refers to the Nyquist criterion, which requires a sampling frequency of at least twice the baseband to keep the integrity of sampled signals. Oversampling improves neither the signal nor the total quantization noise power; it is, however, a way to spread the quantization noise power density over a larger spectrum, lessening its magnitude while keeping the total noise power unchanged. If we then restrict the bandwidth of the oversampled signal to the signal baseband, in compliance with the Nyquist criterion, the amount of noise power left in the baseband is divided automatically by the ratio of the actual sampling frequency to the Nyquist frequency. We thus improve the SNR and consequently increase the resolution.

Noise shaping is the other essential feature of Delta-Sigma converters. It refers to the filtering step oversampled data must undergo. The purpose is to decrease further the amount of noise left in the baseband by shifting a large part of the remaining quantization noise outside the baseband. During this step, the SNR is substantially improved.

Both A/D and D/A converters lend themselves to Delta-Sigma architectures. The same principles prevail, but their implementations differ. We consider them separately.
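The textbook linear-model estimate of these two mechanisms can be tabulated in a few lines. The formula below is the standard approximation for an L-th-order loop (compare Fig. 14.4.28); it is an idealization, not a property of any particular circuit.

import math

def noise_attenuation_db(osr, order):
    # In-band quantization-noise attenuation for oversampling ratio `osr`
    # and loop-filter order `order` (order 0 = plain oversampling):
    # 10*log10((2L+1) * OSR**(2L+1) / pi**(2L)).
    l = order
    return 10 * math.log10((2 * l + 1) * osr ** (2 * l + 1) / math.pi ** (2 * l))

for osr in (8, 32, 128):
    print(osr, [round(noise_attenuation_db(osr, l), 1) for l in (0, 1, 2, 3)])

Plain oversampling buys only about 3 dB per doubling of the OSR; each added loop order steepens that slope considerably, which is the point of noise shaping.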



FIGURE 14.4.26 Principle of sigma-delta converters.

A/D Delta-Sigma Converters16,26

In A/D converters, oversampling is done simply by increasing the sampling frequency of the analog input signal. Noise shaping is achieved by means of a nonlinear filter like the one illustrated in the upper part of Fig. 14.4.26. As stated above, the goal is to shift most of the quantization noise out of the baseband. The noise shaper consists of a feedback loop similar to the linear circuit shown in the lower part of the figure. In the upper figure, the quantization noise is produced by the A/D converter. This converter is followed by a D/A converter, which is required because the difference between the continuous analog input signal and the discrete digital output signal delivered by the A/D converter must be sensed in the analog domain. This difference is amplified and low-pass filtered by means of the analog loop filter H. The idea is to minimize the signal fed to the amplifier, as in the continuous circuit shown below it. Consequently, the output signal becomes a replica of the input.

This requires some caution, however, because matching continuous input data and discrete output data is not feasible. The situation is very different from the one illustrated in the lower part of Fig. 14.4.26, where the quantization noise is simulated by means of an independent analog continuous noise source. Clearly, if the loop gain is high, the signal delivered by the amplifier should be the sum of the input signal minus the noise in order to make ua look like xa. In the upper circuit, however, the amplifier senses steps whose magnitude is determined by the A/D and D/A quantizer. The signal delivered by the amplifier is an averaged image over time of the difference between the input and the discrete signal fed back. The latter tracks the input as well as it can, thanks to the feedback loop. A look at the signals displayed in Fig. 14.4.27 confirms this statement. The figure represents output data of a third-order noise shaper with a 3-bit quantizer. The loop filter consists of three integrators in cascade; their outputs are illustrated in the three upper plots. The signal delivered by the third integrator is applied to the quantizer, whose output is shown below them. It is obvious that notwithstanding the poor resolution of the A/D converter, the noise shaper tracks the input sine wave quite well by correcting its output data continuously.

Once the bandwidth of the signal delivered by the noise shaper has been restricted to the baseband, we get a signal whose SNR may be very high. To illustrate this, consider Fig. 14.4.28, which shows a plot of the signal-to-noise improvement that can be obtained with the linear circuit of Fig. 14.4.26. Although the actual noise shaper differs from its analog counterpart, because the quantization noise is correlated with the input and the quantizer is a nonlinear device, the results are comparable. Large SNR figures, 60 or 80 dB and even better, are readily achievable. Large OSRs are not the only way to get high SNRs; these can also be obtained by increasing the order of the loop filter (assuming, of course, stability is achieved, which is not always easy in a nonlinear system).

Another interesting feature that stems from the figure is that A/D and D/A converters with only a few bits of resolution do not impair the resolution of the Delta-Sigma converter, given the large SNRs noise shapers can achieve. Even a single-bit quantizer, notwithstanding its large amount of quantization noise, suffices. In practice the above converters are parallel devices in order not to slow down the conversion rate needlessly. The A/D converter is a flash converter and the D/A converter a unit-element converter. The accuracy of the A/D converter is not critical: being part of the forward branch, it does not control the overall accuracy, as in any feedback loop.
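The tracking mechanism can be reproduced with a first-order, single-bit loop in a few lines of Python. This is deliberately the simplest member of the family (the example of Fig. 14.4.27 is third-order with a 3-bit quantizer), and all values are illustrative.

import math

def dsm_first_order(samples):
    integrator, feedback, out = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback                    # accumulate the error
        feedback = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        out.append(feedback)
    return out

# A slow, heavily oversampled sine: the local average of the 1-bit stream
# follows the input.
n = 512
sine = [0.7 * math.sin(2 * math.pi * i / n) for i in range(n)]
bits = dsm_first_order(sine)
window = bits[128:192]
print(round(sum(window) / len(window), 2), "vs input near", round(sine[160], 2))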



FIGURE 14.4.27 Waveforms observed in a third-order noise shaper making use of a 3-bit quantizer.

FIGURE 14.4.28 Plot of the quantization noise attenuation vs. the oversampling rate (OSR) and the loop filter order. The case n = 0 corresponds to oversampling without noise shaping.



FIGURE 14.4.29 The generic Delta-Sigma A/D converter.

The D/A accuracy is more critical, because the D/A converter is located in the feedback branch and impairs the overall accuracy, since the feedback loop minimizes only the difference between the oversampled input signal and the signal delivered by the D/A converter. This puts, of course, a heavy toll on the D/A converter unless a single-bit architecture is used. In single-bit quantizers, the A/D converter reduces to a comparator and the D/A converter to a device whose output can take only two states. No possibility exists of steps of unequal heights, as in multi-bit D/As. This explains why single-bit quantizers are generally preferred to multi-bit ones. Multi-bit quantizers are not ruled out, however; they offer better alternatives for large-bandwidth converters, where the sampling frequency may be bounded by the technology and the OSR cannot take large values. To improve the performance of the D/A converter, randomization of its unit elements is generally recommended. This spreads the impairments of unit-element mismatch over a large bandwidth. Without randomization, the same impairments produce harmonic distortion, which is much more annoying than white noise in terms of SNR.

A generic A/D Delta-Sigma converter is shown in Fig. 14.4.29. Besides the noise shaper, two additional items are visible—an anti-alias filter at the input and a decimator at the output. The anti-alias filter is not specific to Delta-Sigma converters, but its implementation is much less demanding than in Nyquist converters. The reason is that the sampling frequency is much larger than the signal baseband, so that a second- or third-order analog filter is already enough. At the output of the noise shaper, the decimator restricts the bandwidth to the baseband and lowers the sampling rate from the oversampled rate to the Nyquist frequency. Since the signal input to the decimator is taken after the A/D converter, the decimator is in fact a digital filter. This is important, for decimation is a demanding operation that cannot easily be done in the analog domain. The decimator is indeed supposed not only to restrict the bandwidth of the converted data but also to get rid of the large amount of high-frequency noise lying outside the baseband. Therefore the decimator consists of two or three cascaded stages: a finite impulse response (FIR) filter and one or two accumulate-and-dump filters. The FIR filter takes care of the steep low-pass frequency characteristic that fixes the baseband, while the accumulate-and-dump filters attenuate the high-frequency noise.

An illustration of the input-output signals of a fourth-order decimator fed by the third-order noise shaper considered in Fig. 14.4.27 is shown in Fig. 14.4.30. The output of the decimator is illustrated below, together with the continuous analog input sine wave. The down-sampling rate in the example is equal to 8, and the effective number of bits (ENOB) of the decimated signal reaches 9.6 bits. In practice, resolutions of 16 and even 24 bits are obtained. The resolution of Delta-Sigma converters is expressed by the ENOB, which is derived from the relation linking quantization noise to the number of bits:

ENOB = (SNRdB − 1.8)/6     (8)

The SNR is measured by comparing the power of a full-scale pure sine wave with the quantization noise power, taking advantage of the fast Fourier transform (FFT) algorithm. The measured noise consists of two components: the actual quantization noise, which is a measure of the resolution, and the noise caused by the converter impairments.
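Equation (8) in use, with illustrative SNR figures (the 59.4-dB entry corresponds to the 9.6-bit decimator example mentioned above):

def enob(snr_db):
    # Eq. (8): effective number of bits from the measured SNR.
    return (snr_db - 1.8) / 6

for snr in (59.4, 80.0, 98.1):
    print("SNR", snr, "dB -> ENOB", round(enob(snr), 1), "bits")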


FIGURE 14.4.30 (Above) The quantizer output signal of Fig. 14.4.27. (Below) The same after decimation (order 4), compared with the input signal.

The first component does not represent a defect; the second is unwanted but inevitable. The magnitude of this extra noise generally increases very rapidly when the input signal gets large, because more and more nonlinear distortion tends to enter the picture. In order to avoid the impact of distortion on the ENOB evaluation, the SNR is generally evaluated as follows. The magnitude of the input signal is varied from a very low level, for instance 60 dB below full scale, up to full scale, and the measured SNRs are plotted versus the magnitude of the input signal. As long as distortion does not prevail, the SNR varies like the power of the input signal, but beyond this point it departs from the ideal. The figure that must be used in the above equation is the SNR obtained by extrapolating the linear portion of the SNR plot to full scale.

D/A Delta-Sigma Converters10,20

The principles underlying A/D converters can be transposed to D/A Delta-Sigma converters (Fig. 14.4.31). Oversampling and noise shaping are applied concurrently but in a different way, since the input data are digital words and the output is an analog signal. An interpolation step is needed first in order to generate additional samples interleaved with the input data. The output of the interpolator is then noise-shaped before being applied to a low-resolution D/A converter, whose analog output is filtered by means of a low-order analog low-pass filter. The quantization noise is evaluated and fed back to the loop filter before closing the feedback loop of the noise shaper. One of the differences with respect to A/D Delta-Sigma converters is the manner in which the quantization noise is measured. All that is needed is to split the words delivered by the noise shaper into two fields—an MSB field and an LSB field. The MSB field, which consists of a few bits or even a single bit, controls the D/A converter, while the LSB field closes the feedback loop after the digital loop filter.



FIGURE 14.4.31 Block diagram of a Delta-Sigma D/A converter.

FIGURE 14.4.32 Interpolation: (above) the principle; (below) spectrum of a fourfold interpolated sine wave.



FIGURE 14.4.33 Timing of signals in a D/A Delta-Sigma converter: (0) the 13-bit digital input sine wave, (1) after a first fourfold interpolation, (2) after the second interpolation, (3) the noise shaped 3-bit signal, and (4) the analog output signal.

Interpolators are the counterpart of decimators. Instead of down-sampling, they add data between samples, as shown in the upper part of Fig. 14.4.32, where three zeros are placed between successive samples. This already multiplies the sampling frequency by 4 but does not suffice, because the spectrum of the resulting oversampled signal is like the one shown in Fig. 14.4.32b. One must erase part of the spectrum and multiply the signal by 4 in order to get the correct spectrum shown in Fig. 14.4.32c. This is done by means of a digital filter with a sharp cutoff frequency at the edge of the baseband. In practice this filter consists of several cascaded filters, as in the decimator: an FIR filter takes care of the sharp cutoff frequency, while one or two low-pass filters perform the rest. The FFT of a sine wave after a fourfold interpolator with a single FIR filter is shown in the lower part of Fig. 14.4.32. The spectral line of the fundamental, which is reproduced three times, lies approximately 80 dB below the quantization noise floor and does no harm.

A second, more elaborate example is shown in Fig. 14.4.33. It illustrates the changes the signals undergo versus time. The input is a 13-bit sine wave represented by large black dots. After a first interpolation, which multiplies the sampling frequency by 4, we get the data marked (1). A second interpolation by 4 yields data (2). The signal is then noise-shaped. The three MSBs control the D/A converter, whose analog output (3) consists of large steps that approximate the signal from the second interpolator. When the signal delivered by the D/A converter is filtered by means of a third-order low-pass filter, the continuous sine wave (4) is obtained, which is the actual output signal of the converter. With an SNR of nearly 80 dB, the 13-bit accuracy of the digital input signal is still met.
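The zero-stuffing step is compact enough to show directly. In the sketch below a crude moving average stands in for the sharp FIR filter, so only the principle (insert zeros, low-pass, restore the gain of 4) should be taken literally.

import math

x = [math.sin(2 * math.pi * i / 16) for i in range(16)]   # input samples

stuffed = []
for s in x:
    stuffed += [s, 0.0, 0.0, 0.0]     # three zeros per sample: 4x the rate

def lowpass(v, taps=4):
    # 4-tap boxcar: far from a sharp FIR cutoff, but enough for the idea.
    return [sum(v[max(0, i - taps + 1):i + 1]) / taps for i in range(len(v))]

interp = [4 * s for s in lowpass(stuffed)]   # the gain of 4 restores amplitude
print(len(x), "samples in ->", len(interp), "samples out")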

VIDEO A/D CONVERTERS10

Video applications require extremely short conversion times. Few of the previous converters meet the required speed. An obvious solution is full parallel conversion, with as many comparators and reference levels as there are quantization steps. Fortunately, most video applications do not require accuracies higher than 8 bits, so 256 identical channels are sufficient. Only integrated circuits offer an economical solution in this respect.



FIGURE 14.4.34 A flash converter.

A typical parallel video converter, called a flash converter, is shown in Fig. 14.4.34.17 The architecture of the converter is simple. A string of 255 identical thin-film aluminum or polysilicon resistors is used in order to produce 2^8 reference levels. The input voltage V produces at the outputs of the comparators a vector that may be divided into two fields of 1s and 0s. An additional layer of logic transforms this vector into a new vector with 0s everywhere except at the boundary between the two fields. The new vector is further decoded to produce an 8-bit coded output. No sample-and-hold (SH) circuit is required, since all comparators are synchronously driven by the same clock, but extremely fast comparators are needed. An 8-bit resolution and a 5-MHz bandwidth imply binary decisions in a time as short as 250 ps. This is achieved by means of ECL logic drivers and storage flip-flops. Signal propagation across the chip also requires careful layout. The resistive divider impedance may not exceed 50 to 100 Ω. Another particular feature is the inevitably high input capacitance, which results from the paralleling of many comparators.
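The comparator bank produces a thermometer code—a solid block of 1s up to the input level—whose decoding reduces to locating the 1/0 boundary. A behavioral sketch with ideal comparators (illustrative only):

def flash_convert(v_in, v_ref, n_bits):
    levels = 2 ** n_bits - 1                  # 255 comparators for 8 bits
    thermometer = [1 if v_in > (k + 1) * v_ref / (levels + 1) else 0
                   for k in range(levels)]
    # The output code equals the number of comparators that tripped,
    # i.e., the position of the boundary between the two fields.
    return sum(thermometer)

print(flash_convert(3.30, 5.0, 8))   # -> 168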

The huge area and power consumption inherent in flash devices have stimulated the design of less greedy converters. Although these cannot compete with flash converters as far as speed is concerned, they offer conversion times short enough to comply with the requirements of a number of “fast” applications. Their principle is to fragment the conversion process into several cycles, possibly only two, during which strings of bits are evaluated by means of a subconverter. The process starts with the bits representing the MSB field. The other sets are evaluated one after another until the correct output word can be reconstructed by concatenating the individual segment codes. Each conversion step requires a single clock cycle. This means, of course, that all subconverters must be fast devices, like flash converters. A 9-bit converter, for instance, operating by means of 3-bit segments requires three cycles for full conversion. Although only three clock cycles are required, the real conversion time is always much longer than three times what is needed for a 9-bit flash converter. Segmentation indeed requires some kind of analog memory to store intermediate results. Op-amps are thus needed, which is a drawback with respect to flash converters, for the latter use no op-amps and consequently are free of dominant poles.

The most striking difference between flash converters and segmented A/D converters is the number of comparators. In a flash converter, the number of comparators increases exponentially with the number of bits.


FIGURE 14.4.35 A subranging converter.

In a segmented converter, fewer comparators are required. Not only is the area much smaller, but the power consumption is drastically reduced. Figure 14.4.35 shows the implementation of a two-cycle segmented A/D converter called a subranging converter. The coarse A/D converter first evaluates the M1 MSBs. For the remaining M2 LSBs, an analog replica of the MSBs is subtracted from the analog input signal. This is done by means of the M1-bit-wide D/A converter, whose output is subtracted from the input signal. The difference, which is nothing but the coarse-conversion quantization noise, is fed to the fine A/D flash converter, which evaluates the LSBs. The output code word is obtained by concatenation of the M1 and M2 segment codes. The total number of comparators in this type of converter is drastically reduced. For instance, in a 10-bit converter taking advantage of two 5-bit segments, each flash converter requires 31 comparators—a total of 62 comparators, which is very little compared with the 1023 comparators required by a true 10-bit flash converter. Naturally, the coarse D/A converter, which is a unit-element parallel device, and the sample-and-hold, which holds the input while the A/D and D/A conversions take place, must also be brought into the picture. In any case, segmented converters save a lot of area and power.
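Behaviorally, the subranging scheme of Fig. 14.4.35 reduces to the few lines below (ideal converters and subtraction; illustrative only).

def subranging(v_in, v_ref, m1, m2):
    coarse = min(int(v_in / v_ref * 2 ** m1), 2 ** m1 - 1)   # M1 MSBs
    residue = v_in - coarse * v_ref / 2 ** m1    # coarse quantization noise
    fine = min(int(residue / v_ref * 2 ** (m1 + m2)), 2 ** m2 - 1)
    return (coarse << m2) | fine                 # concatenate segment codes

print(subranging(3.30, 5.0, 5, 5))   # -> 675: two 5-bit flashes, 10-bit result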

FIGURE 14.4.36 A recycling converter.

The number of comparators can be decreased further in the recycling converter24 shown in Fig. 14.4.36. In this circuit the difference between the input and its coarse quantized approximation is fed back to the input, stored in the sample-and-hold device, and recycled through the A/D converter. Additional cycles may follow; in each cycle, a new set of lower-rank bits is generated. The fine flash converter is not needed, but an interstage amplifier is required. Its purpose is to amplify the quantization noise so that it spans exactly the full dynamic range of the A/D and D/A converters. If this were not the case, one would be forced to adapt continuously the resolution of the converters to cope with the decreasing magnitude of the difference signal.


FIGURE 14.4.37 A pipelined converter.

Of course, it is useless to try to repeat this procedure over and over, because the errors generated in every new cycle pile up and corrupt the difference signal. It is very important to determine which sources of error matter most. The main ones are the interstage gain error and the D/A and A/D nonlinearities. The A/D errors can easily be corrected by extending the dynamic range of both converters. Errors from the flash converter are corrected automatically during recycling. The other errors are more difficult to correct, but one should not overlook the fact that they affect only bits generated after the first cycle.

The conversion time of recycling converters varies, of course, with the number of cycles. Pipelined converters offer a good means to keep the conversion time equal to a single cycle time, at the expense of area. Such a converter is shown in Fig. 14.4.37. The idea is simply to exchange time for space. In other words, blocks identical to the circuit of Fig. 14.4.36 are cascaded, but each circuit feeds its neighbor instead of recycling its own data. Every block thus deals with data that belong to different time samples. In order to reconstruct the correct output words, the code segments must be reshuffled to recover time consistency. This is done by means of registers.

Recycling and pipelined converters that operate with segments only 1 bit wide are currently designated algorithmic converters. They are not fast devices, since they require as many cycles as there are bits in the output code words.

FIGURE 14.4.38 A typical integrated wave generator that can deliver a square wave, a triangular wave, and a sine wave.



Algorithmic converters are similar to successive-approximation converters, although they operate differently.
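A behavioral model of a 1-bit algorithmic (cyclic) converter makes both the resemblance and the difference apparent: the residue itself is doubled each cycle instead of the trial level being halved. Ideal arithmetic; illustrative only.

def algorithmic_convert(v_in, v_ref, n_bits):
    residue, code = v_in, 0
    for _ in range(n_bits):            # one output bit per cycle
        code <<= 1
        if residue >= v_ref / 2:
            code |= 1
            residue -= v_ref / 2
        residue *= 2                   # interstage gain of 2
    return code

print(algorithmic_convert(3.30, 5.0, 8))   # -> 168, matching the SAR sketch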

FUNCTION GENERATORS

Integrated function generators generally consist of a free-running relaxation oscillator controlling a nonlinear shaping circuit. A typical block diagram of a function generator is shown in Fig. 14.4.38. The relaxation oscillator is a combination of a time-base generator and a Schmitt trigger. The time base in the present case is obtained by successively charging and discharging a capacitor using two current sources: one is a constant current source I1, and the other is a controlled current source delivering a current step equal to −2I1 or zero. Hence, the voltage across the capacitor C is a triangular wave. The switching of the controlled current source is monitored by the logical output signal of the Schmitt trigger, which is in fact a precision Schmitt trigger; the oscillating voltage across C is obtained in this manner. The output triangular signal is buffered and drives a network consisting of resistors and diodes, which changes the triangle into a more or less sinusoidal voltage.

The advantage of function generators over RC or op-amp oscillators is their excellent amplitude stability versus frequency. Also, frequency modulation can easily be achieved by changing the current delivered by the two current sources. This type of function generator can be frequency-swept over a wide dynamic range without spurious amplitude transients, since no selective network is involved.
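A back-of-envelope estimate of the oscillation frequency follows from the two equal ramp slopes I1/C between the Schmitt-trigger thresholds. The component values below are assumptions chosen for illustration.

I1 = 100e-6      # charging current, A (assumed)
C = 1e-9         # timing capacitor, F (assumed)
V_SPAN = 2.0     # Schmitt-trigger hysteresis window, V (assumed)

ramp_time = C * V_SPAN / I1         # duration of one edge of the triangle
frequency = 1 / (2 * ramp_time)     # up ramp plus down ramp per period
print(frequency, "Hz")              # -> 25000 Hz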



SECTION 15

MEASUREMENT SYSTEMS

Measurement circuits are critical in the analysis, design, and maintenance of electronic systems. For those who work in electronics, these circuits and systems become the eyes into the world of the electron that cannot be directly seen. The main objective of such systems is not to influence what is being measured or observed. This is accomplished by a range of types of measurement circuits, all of which are considered in this section.

A key element of systems that measure is how the measurement is actually made. Without an understanding of how the measurement is made, one cannot understand its limitations. It is possible to make a measurement and be off by several orders of magnitude. We look at the process of making measurements and what to look for so that one can have a level of confidence in the measurement.

Substitution and analog measurements have been an important mainstay of this field. Unlike measurements that involve some digital systems, the accuracy and precision of the measurements depend totally on the precision of the elements used in the measurement systems. We look at a variety of measurement techniques using substitution and at analog devices like ohmmeters. Digital instruments have many advantages, especially when used in data acquisition systems.

An important component of measurement systems is the transducer, which converts a physical quantity into an electrical signal that can be measured by a measurement system. Knowing its characteristics, especially its limitations, helps in understanding the precision with which measurements can be made. It is in the area of transducers and sensors that we have seen some of the most dramatic advances.

Bridge circuits gave us the first opportunity to actually make measurements without “loading” the circuit being measured; the accuracy of the measurements merely depends on the precision of the elements used in the bridge. We need ac impedance measurements to develop the small-signal characteristics of a circuit or system and to evaluate its stability. These kinds of measurements are important in a variety of applications, from the space station to how well your television works. C.A.

In This Section:

CHAPTER 15.1 PRINCIPLES OF MEASUREMENT CIRCUITS 15.3
DEFINITIONS AND PRINCIPLES OF MEASUREMENT 15.3
TRANSDUCERS, INSTRUMENTS, AND INDICATORS 15.5
MEASUREMENT CIRCUITS 15.5

CHAPTER 15.2 SUBSTITUTION AND ANALOG MEASUREMENTS 15.6
VOLTAGE SUBSTITUTION 15.6
DIVIDER CIRCUITS 15.8
DECADE BOXES 15.9
ANALOG MEASUREMENTS 15.11
DIGITAL INSTRUMENTS 15.14

CHAPTER 15.3 TRANSDUCER-INPUT MEASUREMENT SYSTEMS 15.18
TRANSDUCER SIGNAL CIRCUITS 15.18


CHAPTER 15.4 BRIDGE CIRCUITS, DETECTORS, AND AMPLIFIERS 15.25
PRINCIPLES OF BRIDGE MEASUREMENTS 15.25
RESISTANCE BRIDGES 15.27
INDUCTANCE BRIDGES 15.30
CAPACITANCE BRIDGES 15.32
FACTORS AFFECTING ACCURACY 15.33
BRIDGE DETECTORS AND AMPLIFIERS 15.35
MISCELLANEOUS MEASUREMENT CIRCUITS 15.35

CHAPTER 15.5 AC IMPEDANCE MEASUREMENT 15.43

Section Bibliography:

Andrew, W. G., “Applied Instrumentation in the Process Industries,” Gulf Pub. Co., 1993.
Bell, D. A., “Electronic Instrumentation and Measurements,” Prentice Hall, 1994.
Carr, J. J., “Elements of Electronic Instrumentation and Measurement,” 3rd ed., Prentice Hall, 1996.
Considine, D. M., and S. D. Ross (eds.), “Handbook of Applied Instrumentation,” Krieger, 1982.
Coombs, C. F., Jr., “Electronic Instrument Handbook,” 2nd ed., McGraw-Hill, 1995.
Decker, T., and R. Temple, “Choosing a phase noise measurement technique,” H-P RF and Microwave Measurement Symposium, 1989.
Erickson, C., Switches in Automated Test Systems, Chap. 41 in Coombs’s “Electronic Instrument Handbook,” 2nd ed., McGraw-Hill, 1995.
“Direct Current Comparator Potentiometer Manual, Model 9930,” Guideline Instruments, April 1975.
Harris, F. K., “Electrical Measurements,” Wiley, 1952.
IEEE Standard Digital Interface for Programmable Instrumentation, ANSI/IEEE Std. 488.1, 1987.
Keithley, J. R., J. R. Yeager, and R. J. Erdman, “Low Level Measurements,” 3rd ed., Keithley Instruments, June 1984.
Manassewitsch, V., “Frequency Synthesizers: Theory and Design,” Wiley, 1980.
McGillivary, J. M., Computer-Controlled Instrument Systems, Chap. 43 in Coombs’s “Electronic Instrument Handbook,” 2nd ed., McGraw-Hill, 1995.
Mueller, J. E., Microprocessors in Electronic Instruments, Chap. 10 in Coombs’s “Electronic Instrument Handbook,” 2nd ed., McGraw-Hill, 1995.
Nachtigal, C. L., “Instrumentation and Control: Fundamentals and Applications,” Wiley, 1990.
“Operation and Service Manual for Model 4191A RF Impedance Analyzer,” Hewlett-Packard, January 1982.
“Operation and Service Manual for Model 4342A Q Meter,” Hewlett-Packard, March 1983.
Reissland, M. V., “Electrical Measurement: Fundamentals, Concepts, Applications,” Wiley, 1989.
Santoni, A., “IEEE-488 Instruments,” EDN, pp. 77–94, October 21, 1981.
Schoenwetter, H. K., “A high-speed low-noise 18-bit digital-to-analog converter,” IEEE Trans. Instrum. Meas., Vol. IM-27, No. 4, pp. 413–417, December 1978.
Souders, R. M., “A bridge circuit for the dynamic characterization of sample/hold amplifiers,” IEEE Trans. Instrum. Meas., Vol. IM-27, No. 4, December 1978.
Walston, J. A., and J. R. Miller (eds.), “Transistor Circuit Design,” McGraw-Hill, 1963.
Witte, R. A., “Electronic Test Instruments: Theory and Practice,” Prentice Hall, 1993.
Workman, D. R., “Calibration status: a key element of measurement systems management,” 1993 National Conference of Standards Laboratories Symposium.



CHAPTER 15.1

PRINCIPLES OF MEASUREMENT CIRCUITS

Francis T. Thompson*

DEFINITIONS AND PRINCIPLES OF MEASUREMENT

Precision is a measure of the spread of repeated determinations of a particular quantity. Precision depends on the resolution of the measurement means and on variations in the measured value caused by instabilities in the measurement system. A measurement system may provide precise readings, all of which are inaccurate because of an error in calibration or a defect in the system.

Accuracy is a statement of the limits that bound the departure of a measured value from the true value. Accuracy includes the imprecision of the measurement along with all the accumulated errors in the measurement chain extending from the basic reference standards to the measurement in question.

Errors may be classified into two categories, systematic and random. Systematic errors are those which consistently recur when a number of measurements are taken. Systematic errors may be caused by deterioration of the measurement system (a weakened magnetic field, a change in a reference resistance value), alteration of the measured value by the addition or extraction of energy from the element being measured, response-time effects, and attenuation or distortion of the measurement signal. Random errors are accidental, tend to follow the laws of chance, and do not exhibit a consistent magnitude or sign. Noise and environmental factors normally produce random errors but may also contribute to systematic errors.

The arithmetic average of a number of observations should be used to minimize the effect of random errors. The arithmetic average or mean X̄ of a set of n readings X1, X2, . . . , Xn is

X̄ = (∑Xi)/n

The dispersion of these readings about the mean is generally described in terms of the standard deviation σ, which can be estimated for n observations by

s = √[∑(Xi − X̄)² / (n − 1)]

where s approaches σ as n becomes large.
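For example, the mean and the estimate s can be computed for a small set of repeated readings (the values are illustrative):

import math

readings = [10.02, 9.98, 10.05, 9.97, 10.01]
n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
print("mean =", round(mean, 3), " s =", round(s, 3))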

*The author is indebted to I. A. Whyte, L. C. Vercellotti, T. H. Putnam, T. M. Heinrich, T. I. Pattantyus, and R. A. Mathias for suggestions and constructive criticism for Chap. 15.1.



A confidence interval can be determined within which a specified fraction of all observed values may be expected to lie. The confidence level is the probability of a randomly selected reading falling within this interval. Detailed information on measurement errors is given in Coombs (Chap. 15.5).

Standardization and calibration involve the comparison of a physical measurement with a reference standard. Calibration normally refers to the determination of the accuracy and linearity of a measuring system at a number of points, while standardization involves the adjustment of a parameter of the measurement system so that the reading at one specific value is in correspondence with a reference standard. The numerical value of any reference standard should be capable of being traced through a chain of measurements to a National Reference Standard maintained by the National Institute of Standards and Technology (formerly the National Bureau of Standards).

The range of a measurement system refers to the values of the input variable over which the system is designed to provide satisfactory measurements. The range of an instrument used for a measurement should be chosen so that the reading is large enough to provide the desired precision. An instrument having a linear scale, which can be read within 1 percent at full scale, can be read only within 2 percent at half scale.

The resolution of a measuring system is defined as the smallest increment of the measured quantity that can be distinguished. The resolution of an indicating instrument depends on the deflection per unit input. Instruments having a square-law scale provide twice the resolution at full scale that linear-scale instruments do. Amplification and zero suppression can be used to expand the deflection in the region of interest and thereby increase the resolution. The resolution is ultimately limited by the magnitude of the signal that can be discriminated from the noise background.

Noise may be defined as any signal that does not convey useful information. Noise is introduced in measurement systems by mechanical coupling, electrostatic fields, and magnetic fields. The coupling of external noise can be reduced by vibration isolation, electrostatic shielding, and electromagnetic shielding. Electrical noise is often present at the power-line frequency and its harmonics, as well as at radio frequencies. In systems containing amplification, the noise introduced in low-level stages is most detrimental, because the noise components within the amplifier passband will be amplified along with the signal. The noise in the output determines the lower limit of the signal that can be observed. Even if external noise is minimized by shielding, filtering, and isolation, noise will be introduced by random disturbances within the system caused by such mechanisms as Brownian motion in mechanical systems, Johnson noise in electrical resistance, and the Barkhausen effect in magnetic elements.

Johnson noise is generated by electron thermal agitation in the resistance of a circuit. The equivalent rms noise voltage developed across a resistor R at an absolute temperature T is equal to √(4kTR∆f), where k is Boltzmann’s constant (1.38 × 10⁻²³ J/K) and ∆f is the bandwidth in hertz over which the noise is observed.
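The Johnson-noise formula is easily evaluated; for example, a 1-MΩ resistor at 300 K observed over a 10-kHz bandwidth:

import math

K_BOLTZMANN = 1.38e-23    # Boltzmann's constant, J/K

def johnson_noise_vrms(r_ohms, temp_k, bw_hz):
    # v_rms = sqrt(4 k T R delta-f)
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bw_hz)

print(round(johnson_noise_vrms(1e6, 300.0, 1e4) * 1e6, 1), "uV rms")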
The bandwidth ∆f of a system is the difference between the upper and lower frequencies passed by the system (Chap. 15.2). The bandwidth determines the ability of the system to follow variations in the quantity being measured. The lower frequency is zero for dc systems, and their response time is approximately equal to 1/(3∆f). Although a wider bandwidth improves the response time, it makes the system more susceptible to interference from noise.

Environmental factors that influence the accuracy of a measurement system include temperature, humidity, magnetic and electrostatic influences, mechanical stability, shock, vibration, and position. Temperature changes can alter the value of resistance and capacitance, produce thermally generated emfs, cause variations in the dimensions of mechanical members, and alter the properties of matter. Humidity affects resistance values and the dimensions of some organic materials. DC magnetic and electrostatic fields can produce an offset in instruments that are sensitive to these fields, while ac fields can introduce noise. A lack of mechanical stability can alter instrument reference values and produce spurious responses. Mechanical energy imparted to the system in the form of shock or vibration can cause measurement errors and, if severe enough, can result in permanent damage. The position of an instrument can affect the measurements because of the influence of magnetic, electrostatic, or gravitational fields.

TRANSDUCERS, INSTRUMENTS, AND INDICATORS

Transducers are used to respond to the state of a quantity to be measured and to convert this state into a convenient electrical or mechanical quantity. Transducers can be classified according to the variable to be measured. Variable classifications include mechanical, thermal, physical, chemical, nuclear-radiation, electromagnetic-radiation, electrical, and magnetic, as detailed in Sec. 8.

Instruments can be classified according to whether their output means is analog or digital. Analog instruments include the d'Arsonval (moving-coil) galvanometer, the dynamometer instrument, the moving-iron instrument, the electrostatic voltmeter, the galvanometer oscillograph, the cathode-ray oscilloscope, and potentiometric recorders. Digital-indicator instruments provide a numerical readout of the quantity being measured and have the advantage of allowing unskilled people to make rapid and accurate readings.

Indicators are used to communicate output information from the measurement system to the observer.

MEASUREMENT CIRCUITS

Substitution circuits are used in the comparison of the value of an unknown electrical quantity with a reference voltage, current, resistance, inductance, or capacitance. Various potentiometer circuits are used for voltage substitution, and divider circuits are used for voltage, current, and impedance comparison. A number of these circuits and the reference components used in them are described in Chap. 15.2.

Analog circuits are used to embody mathematical relationships which permit the value of an unknown electrical quantity to be determined by measuring related electrical quantities. Analog-measurement techniques are discussed in Chap. 15.2, and a number of special-purpose measurement circuits are described in Chap. 15.4.

Digital instruments combine analog circuits with digital processing to provide a convenient means of making rapid and accurate measurements. Digital instruments are described in Chaps. 15.2 and 15.4. Digital processing using the computational power of microprocessors is discussed in Chap. 15.3.

Bridge circuits provide a convenient and accurate method of determining the value of an unknown impedance in terms of other impedances of known value. The circuits of a number of impedance bridges and the amplifiers and detectors used for bridge measurements are described in Chap. 15.4.

Transducer amplifying and stabilizing circuits are used in conjunction with measurement transducers to provide an electric signal of adequate amplitude which is suitable for use in measurement and control systems. These circuits, which often have severe linearity, drift, and gain-stability requirements, are described in Chap. 15.3.


CHAPTER 15.2

SUBSTITUTION AND ANALOG MEASUREMENTS
Francis T. Thompson

VOLTAGE SUBSTITUTION

The constant-current potentiometer, which is used for the precise measurement of unknown voltages below 1.5 V, is shown schematically in Fig. 15.2.1. For a constant current, the output voltage Vo is proportional to the resistance included between the sliding contacts. In this circuit all the current-carrying connections can be soldered, thereby minimizing contact-resistance errors. When the sliding contacts are adjusted to produce a null, Vo is equal to the unknown emf, and no current flows in the sliding contacts. At null, no current is drawn from the unknown emf, and therefore the measured voltage is independent of the internal resistance of the source.

FIGURE 15.2.1 Constant-current potentiometer.

The circuit of a multirange commercial potentiometer is shown in Fig. 15.2.2. The instrument is standardized with the range switch in the highest range position as shown and switch S connected to the standard cell. The calibrated standard-cell dial is adjusted to correspond to the known voltage of the standard cell, and the standardizing resistance is adjusted to obtain a null on the galvanometer. This procedure establishes a constant current of 20 mA through the potentiometer. The unknown emf is connected to the emf terminals, and switch S is thrown to the emf position. The unknown emf can be read to at least five significant figures by adjusting the tap slider and the 11-turn 5.5-Ω potentiometer for a null on the galvanometer. The range switch reduces the potentiometer current to 2 or 0.2 mA for the 0.1 and the 0.01 ranges, respectively, thereby permitting lower voltages to be measured accurately. Since the range switch does not alter the battery current (22 mA), the instrument remains standardized on the lower ranges. When making measurements, the current should be checked using the standard cell to ensure that the current has not drifted from the standardized value. The Leeds and Northrup Model 7556-B six-dial potentiometer operates in a similar manner and provides an accuracy of ±(0.001 percent of reading + 0.1 µV).

FIGURE 15.2.2 K2 potentiometer. (Leeds and Northrup)

The constant-resistance potentiometer of Fig. 15.2.3 uses a variable current through a fixed resistance to generate a voltage for obtaining a null with the unknown emf. The constant-resistance potentiometer is used primarily for measurements in the millivolt and microvolt range.

FIGURE 15.2.3 Constant-resistance potentiometer.

The microvolt potentiometer, or low-range potentiometer, is designed to minimize the effects of contact resistance and thermal emfs. Thermal shielding is used to minimize temperature differences. The galvanometer is connected to the circuit through a special Wenner thermo-free reversing key of copper and gold construction to eliminate thermal effects in the galvanometer circuit. A typical microvolt potentiometer circuit consisting of two constant-current decades and a constant-resistance element is shown in Fig. 15.2.4. The constant-current decades use Diesselhorst rings, in which the constant current entering and leaving the ring divides between two paths. The IR drop across the resistance in the isothermal shield increases in 10 equal increments as the dial switch is rotated. The switch contacts are in the constant-current supply circuit, and therefore the effects of their IR drops and thermal emfs are minimized. A 100-division milliammeter associated with the constant-resistance element provides nearly three additional decades of resolution. Readings to 10 nV are possible with this type of potentiometer.

FIGURE 15.2.4 Microvolt potentiometer.

The direct-current comparator potentiometer, used for precise measurement of unknown voltages below 2.1 V, is shown in Fig. 15.2.5. Feedback from the zero-flux detector winding is used to adjust the current supply for ampere-turn balance between the primary and secondary windings, as in the dc-comparator ratio bridge of Chap. 15.4. Standardization on the 1X range is obtained in a two-step procedure. First, the external standard cell and galvanometer are connected in series across resistor EH by the selector switch, and the constant-current source is adjusted for zero galvanometer deflection. This transfers the standard-cell voltage across resistor EH. Second, the seven measuring dials are set to the known standard voltage, and the selector switch is used to connect the galvanometer across the opposing voltages AD and EH. Trimmer turns ns are used to obtain zero galvanometer deflection. This results in the standard-cell voltage being generated across resistor AD with the dials set to the known standard-cell value, and therefore calibration of the 1X range.

FIGURE 15.2.5 Direct-current comparator potentiometer.

The unknown emf is measured on the 1X range by using the selector switch to connect the unknown emf and the voltage generated across resistor AD in series opposition across the galvanometer. The seven measuring dials are adjusted to obtain zero deflection. Specified measurement accuracies are ±(0.5 ppm of reading + 0.05 µV) on the 1X range (2.111111 V full scale), ±(1 ppm of reading + 0.01 µV) on the 0.1X range, and ±(2 ppm of reading + 0.005 µV) on the 0.01X range.
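The reading itself follows directly from the substitution principle: the standardized working current times the tapped resistance, scaled by the range factor. A minimal sketch (the dial resistance used here is illustrative and not tied to any particular instrument):

def potentiometer_emf(i_std, r_tapped, range_factor=1.0):
    """Unknown emf indicated by a constant-current potentiometer at null:
    the standardized current times the resistance between the sliding
    contacts, scaled by the range-switch factor (1, 0.1, or 0.01)."""
    return i_std * r_tapped * range_factor

# 20-mA working current with 52.130 Ω tapped on the 1X range:
print(potentiometer_emf(0.020, 52.130))        # 1.0426 V
# The same dial setting on the 0.01X range reads 10.426 mV:
print(potentiometer_emf(0.020, 52.130, 0.01))  # 0.010426 V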

DIVIDER CIRCUITS

The volt box (Fig. 15.2.6) is used to extend the voltage range of a potentiometer. The unknown voltage is connected between 0 and an appropriate terminal, for example, ×100. The potentiometer is connected between the 0 and P output terminals. When the potentiometer is balanced, it draws no current, and therefore the current drawn from the source flows through the resistor between terminals 0 and P. The unknown voltage is equal to the potentiometer reading multiplied by the selected tap multiplier. Unlike the potentiometer, the volt box does load the voltage source. Typical resistances range from about 200 to 1000 Ω/V. The higher resistance values minimize self-heating and do not load the source as heavily. Errors due to leakage currents, which could flow through the insulators supporting the resistors, are minimized by using a guard circuit (see Chap. 15.4).

FIGURE 15.2.6 Volt-box circuit.

Decade voltage dividers provide a wide range of precisely defined and very accurate voltage ratios. The Kelvin-Varley vernier decade circuit is shown in Fig. 15.2.7. The slide arms in the first three decades are arranged so that they always span two contacts. The shunting effect of the second-gang resistance across the slide arms of the first decade is equal to 2R, thereby giving a net resistance of R between the slide-arm contacts. With no current drawn from the output, the resistance loading on the input is equal to 10R and is independent of the slide-arm settings. In each of the first three decades, 11 resistors are used, while only 10 resistors are used in the final decade, which has a single sliding contact. Potentiometers with six decades have been constructed using the Kelvin-Varley circuit.

FIGURE 15.2.7 Decade voltage divider.
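The division ratio of such a chain is simply the dial settings read as successive decimal digits; a minimal sketch, assuming ideal resistors and no output loading:

def kelvin_varley_ratio(dials):
    """Output/input voltage ratio of an ideal Kelvin-Varley divider,
    with dial settings given most significant decade first."""
    ratio = 0.0
    for position, setting in enumerate(dials, start=1):
        ratio += setting * 10.0 ** (-position)
    return ratio

# Dial settings 3, 7, 4, 1 divide the input by 0.3741:
print(kelvin_varley_ratio([3, 7, 4, 1]))  # 0.3741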

DECADE BOXES

Decade resistor boxes contain an assembly of resistances and switches, as shown in Fig. 15.2.8. The power rating of each resistance step is approximately constant; therefore, each decade has a different maximum current rating, which should not be exceeded. Boxes having four to seven decades are available with accuracies of 0.02 percent. Two typical seven-decade boxes provide resistance values from 0 to 1,111,111 Ω in 0.1-Ω steps and values from 0 to 11,111,110 Ω in 1-Ω steps.

FIGURE 15.2.8 Decade resistance box.

The accuracy at higher frequencies is affected by skin effect, series inductance, and shunt capacitance. The equivalent circuit of a resistance decade is shown in Fig. 15.2.9, where ∆L is the undesired incremental inductance added with each resistance step ∆R. Silver contacts are used to obtain a zero resistance Ro as low as 1 mΩ per decade at dc. Zero-inductance values Lo as low as 0.1 µH per decade are obtainable. The shunt capacitance for the configuration of Fig. 15.2.8 is a function of the highest decade in use, i.e., not set at zero. The shunt capacitance with the low terminal connected to the shield is typically 10 to 15 pF for the highest decade in use plus an equal value for each higher decade not in use.

FIGURE 15.2.9 Equivalent circuit of a resistance decade. (General Radio Co.)

Some applications, e.g., the determination of small inductances at audio frequency and the determination of resistance at radio frequency by the substitution method, require that the equivalent series inductance of the resistance box remain constant, independent of the resistance setting. In the inductively compensated decade resistance box, small copper-wound coils, each having an inductance equal to the inductance of an individual resistance unit, are selected by the decade switch so as to maintain a constant total inductance.

Decade capacitor units generally consist of four capacitances which are selectively connected in parallel by a four-gang 11-position switch (Fig. 15.2.10). The individual capacitors and their associated switch are shielded to ensure that the selected steps add properly. Decade capacitor boxes are available with six-decade resolution, which provides a range of 0 to 1.11111 µF in increments of 1 pF and with an accuracy of 0.05 percent. Air capacitors are used in the 1- and 10-pF decades, and silver-mica capacitors in the higher ranges. Polystyrene capacitors are used in some less-precise decade capacitors.

FIGURE 15.2.10 Capacitor decade.

Decade inductance units can be constructed using four series-connected inductances of relative values 1, 2, 3, 4 or 1, 2, 2, 5. A four-gang 11-position switch is used to short-circuit the undesired inductances. Care must be taken to avoid mutual coupling between the inductances. Decade inductance boxes are available with individual decades ranging from 1 mH to 10 H total inductance. A commercial single-decade unit consists of an assembly of four inductors wound on molybdenum-Permalloy dust cores and a switch which enables consecutive values to be selected. Typical units have an accuracy of 1 percent at zero frequency. The effective series inductance of a typical decade unit increases with frequency. The inductance is also a function of the ac current and any dc bias current. The Q of the coils varies with frequency.
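The high-frequency behavior of the resistance decade of Fig. 15.2.9 can be estimated from its residuals. In the sketch below the residual values r0, l0, and the per-step inductance dl are illustrative assumptions of the order quoted above, not data for any particular box:

import math

def decade_impedance(r_set, steps_in_use, f, r0=0.001, l0=0.1e-6, dl=0.05e-6):
    """Approximate magnitude of the series impedance of one resistance
    decade: zero resistance r0 and zero inductance l0, plus dl henry of
    incremental inductance for each step in use (assumed values)."""
    inductance = l0 + steps_in_use * dl
    reactance = 2.0 * math.pi * f * inductance
    return math.hypot(r0 + r_set, reactance)

# A 1-Ω setting read at 1 MHz is dominated by the series inductance:
print(decade_impedance(1.0, 10, 1e6))  # ≈ 3.9 Ω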

ANALOG MEASUREMENTS

Ohmmeter circuits provide a convenient means of obtaining an approximate measurement of resistance. The basic series-type ohmmeter circuit of Fig. 15.2.11a consists of an emf source, series resistor R1, and a d'Arsonval milliammeter. Resistor R2 is used to compensate for changes in battery emf and is adjusted to provide full-scale meter deflection (0-Ω indication) with terminals X1 and X2 short-circuited. No deflection (infinite-resistance indication) is obtained with X1 and X2 open-circuited. When an unknown resistor Rx is connected across the terminals, the meter deflection varies inversely with the unknown resistance. With the range switch in the position shown, half-scale deflection is obtained when the external resistance is equal to R1 + R2RM/(R2 + RM). A multirange meter can be obtained using current-shunting resistors R3 and R4. A typical commercial ohmmeter circuit (Fig. 15.2.11b), having midscale readings of 12 Ω, 1200 Ω, and 120 kΩ, uses an Ayrton shunt for range selection and a higher battery voltage for the highest resistance range.

FIGURE 15.2.11 Series-type ohmmeters: (a) basic circuit; (b) commercial circuit. (Simpson Electric Co.)

In the shunt-type ohmmeter the unknown resistor Rx is connected across the d'Arsonval milliammeter, as shown in Fig. 15.2.12a. The variable resistance R1 is adjusted for full-scale deflection (infinite-resistance indication) with terminals X1 and X2 open-circuited. The ohm scale, with 0 Ω corresponding to zero deflection, is the reverse of the series-type ohmmeter scale. The resistance range can be lowered by switching a shunt resistor across the meter. With the range switch selecting shunt resistor R2, half-scale deflection occurs when Rx is equal to the parallel combination of R1, RM, and R2. The shunt-type ohmmeter is therefore most suited to low-resistance measurements. The use of a high-input-impedance amplifier between the circuit and the d'Arsonval meter permits the shunt-type ohmmeter to be used for high- as well as low-resistance measurements. A commercial ohmmeter (Fig. 15.2.12b) uses a field-effect-amplifier input stage which draws negligible current. The amplifier gain is adjusted to provide full-scale deflection with terminals X1 and X2 open-circuited. Half-scale deflection occurs when Rx is equal to the total selected tap resistance.

FIGURE 15.2.12 Shunt-type ohmmeter. (Triplett Electrical Instrument Co.)

Voltage-drop (or fall-of-potential) methods for determining resistance involve measuring the current flowing through the resistor with an ammeter, measuring the voltage drop across the resistor with a voltmeter, and calculating the resistance using Ohm's law. The circuit of Fig. 15.2.13a should be used for low-resistance measurements, since the current drawn by the voltmeter, V/Rv, will be small with respect to the total current I. The circuit of Fig. 15.2.13b should be used for high-resistance measurements, since the resistance of the ammeter RA will be small with respect to the unknown resistance Rx. An accuracy of 1 percent or better can be obtained using 0.5 percent accurate instruments if the voltage source and instrument ranges are selected to provide readings near full scale.

Resonance methods can be used to measure the inductance, capacitance, and Q factor of components at radio frequencies. In Fig. 15.2.14, resistors R1 and R2 couple the oscillator voltage e to a series-connected known capacitance and an unknown inductance represented by effective inductance L′ and effective series resistance r′. Resistor R2 is chosen to be small with respect to resistance r′, thereby minimizing the effect of the source resistance of the injected voltage.

FIGURE 15.2.13 Fall-of-potential method: (a) for low resistances; (b) for high resistances.


FIGURE 15.2.14 Inductance measurement.

A circuit containing reactive components is in resonance when the supply current is in phase with the applied voltage. The series circuit of Fig. 15.2.14 is in resonance when the inductive reactance XL′ is equal to the capacitive reactance XC, which occurs when

ω² = ω0² = 1/(L′C)

where XL′ = ωL′, XC = 1/(ωC), ω = 2πf, ω0 = 2πf0, f0 = resonant frequency (Hz), L′ = effective inductance (H), and C = capacitance (F). If L′, r′, C, and e are constant and the oscillator frequency ω is adjusted until the voltage read across C by the FET-input voltmeter is maximum, the frequency ω will be slightly less than the resonant frequency ω0:

ω² = 1/(L′C) − (r′ + Rs)²/(2L′²) = ω0²[1 − 1/(2Q*²)]

where Rs = R1R2/(R1 + R2) and Q* = ω0L′/(r′ + Rs). If Q* ≥ 10, ω and ω0 will differ by less than 0.3 percent. The ratio m of the voltage across C to the voltage across R2 can be measured while operating at ω. If Rs is small with respect to r′, m is approximately equal to the effective quality factor Q′ = ωL′/r′ of the unknown inductance for Q′ > 10. If Rs is not small with respect to r′, its value affects the determination of Q′ only indirectly through its effect on ω. If Rs = r′ and Q′ ≥ 10, the determination of Q′ by the above equation is in error by less than 1 percent.

If ω, L′, r′, and e are constant and the capacitance C is adjusted until the voltage across it is maximum, the capacitance value C will be slightly less than the capacitance value CR needed for resonance at the frequency ω:

C = [1/(ω²L′)][1/(1 + 1/Q*²)] = CR/(1 + 1/Q*²)

where ω = ω0, Q* = ω0L′/(r′ + Rs), and Rs = R1R2/(R1 + R2). For Q* ≥ 10, C differs from CR by less than 1 percent.
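A short numerical sketch of these relations (the component values are illustrative):

import math

def voltage_maximum_frequency(l_eff, c, r_eff, r1, r2):
    """Frequency of maximum capacitor voltage for the series circuit of
    Fig. 15.2.14, together with the resonant frequency and Q*."""
    rs = r1 * r2 / (r1 + r2)          # equivalent source resistance
    w0 = 1.0 / math.sqrt(l_eff * c)   # resonant frequency, rad/s
    q_star = w0 * l_eff / (r_eff + rs)
    w = w0 * math.sqrt(1.0 - 1.0 / (2.0 * q_star ** 2))
    return w0, w, q_star

# 100-µH coil with r' = 5 Ω, C = 2.5 nF, R1 = 50 Ω, R2 = 0.5 Ω:
w0, w, q = voltage_maximum_frequency(100e-6, 2.5e-9, 5.0, 50.0, 0.5)
print(q, (w0 - w) / w0)  # Q* ≈ 36; fractional shift ≈ 0.02 percent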

Typical characteristics of three digital multimeters of increasing resolution (designated here as meters A, B, and C) are compared below.

                           Meter A         Meter B          Meter C
DC voltage
  Max. range                                                1000 V
  Min. range                                                200 mV
  Sensitivity                                               100 nV
  Basic accuracy                                            ±(0.0007% + 10)
  Input resistance                                          ≤20 V: 1 GΩ; >20 V: 10 MΩ
  CMRR (1 kΩ unbalance)    >100 dB         >120 dB          >120 dB
AC voltage
  Max. range                               750 V            750 V
  Min. range                               200 mV           2 V
  Sensitivity                              10 µV            1 µV
  Basic accuracy                           ±(0.5% + 10)     ±(0.25% + 1000)
  Input impedance                          10 MΩ/100 pF     2 MΩ/50 pF
  CMRR                     >60 dB          >60 dB           >60 dB
Ohms
  Max. range               20 MΩ           200 MΩ           200 MΩ
  Min. range               200 Ω           200 Ω            200 Ω
  Sensitivity              100 mΩ          10 mΩ            100 µΩ
  Basic accuracy           ±(0.2% + 1)     ±(0.05% + 2)     ±(0.01% + 2)
DC current
  Max. range               2 A             2 A              2 A
  Min. range               2 mA            200 µA           200 µA
  Sensitivity              1 µA            10 nA            1 nA
  Basic accuracy           ±(0.75% + 1)    ±(0.2% + 2)      ±(0.09% + 10)
AC current
  Max. range               2 A             2 A              2 A
  Min. range               2 mA            200 µA           200 µA
  Sensitivity              1 µA            10 nA            1 nA
  Basic accuracy           ±(1.5% + 2)     ±(1% + 20)       ±(0.6% + 300)


For example, if a meter having a 10-MΩ input resistance is used to measure an unknown emf having an equivalent resistance of 100 kΩ, the meter will read 99 percent of the correct value.

FIGURE 15.2.18 Effect of loading.

Resistance measurements rely on the accuracy of an internal current reference, shown as Ir in Fig. 15.2.19. For an ideal meter with infinite input resistance Rin, the unknown resistance Rx is equal to Vm/Ir, and the meter is calibrated to read the resistance value directly. For finite values of Rin, the indicated resistance value of Rx is equal to the parallel resistance of Rx and Rin, which can introduce a significant error in the measurement of high-value unknown resistances. The resistance of the leads connecting the unknown resistance to the meter of Fig. 15.2.19 can introduce an error in the measurement of low-value resistances. This problem is overcome in high-precision meters by using separate terminals and leads to connect the precision current reference to the unknown resistance. In this case the voltage-measurement leads carry only the current drawn by the input resistance Rin.

Digital electrometers are highly refined dc multimeters which provide exceptionally high input resistance and sensitivity. The block diagram of a typical microcomputer-controlled digital electrometer is shown in Fig. 15.2.20. Performance depends on the high-input-resistance, low-offset-current JFET preamp and operational amplifier as well as the A/D converter. A state-of-the-art 4½-digit autoranging meter* provides voltage ranges from 200 mV to 200 V, current ranges from 2 pA (2 × 10⁻¹² A) to 20 mA, resistance ranges from 2 kΩ to 200 GΩ (200 × 10⁹ Ω), and coulomb ranges from 200 pC to 200 nC. Specified accuracies for the most sensitive ranges are ±(0.05 percent + 4 counts) for voltage, ±(1.6 percent + 66 counts) for current, ±(0.20 percent + 4 counts) for resistance, and ±(0.4 percent + 4 counts) for charge. The input impedance is greater than 200 × 10¹² Ω in parallel with 20 pF, and the input bias current is less than 5 × 10⁻¹⁵ A at 23°C. A bipolar 100-V, 0.2 percent-accuracy voltage source programmable in 50-mV steps and a 1-nA to 100-mA decade current source are built into the meter. Two techniques may be used to measure resistance or generate I−V curves: the decade current source can be used to force a known current through the unknown impedance with the voltage measured by the high-input-impedance voltmeter, or the voltage source can be applied across the unknown with the resulting current measured by the current meter. The latter method is preferred for characterizing voltage-dependent materials.

FIGURE 15.2.19 Effect of loading.
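The loading calculation above generalizes directly; a minimal sketch:

def loading_error_percent(r_source, r_meter):
    """Percentage by which a meter of finite input resistance reads low
    when measuring a source of the given equivalent resistance."""
    fraction_read = r_meter / (r_meter + r_source)
    return (1.0 - fraction_read) * 100.0

# The example above: 100-kΩ source, 10-MΩ meter reads about 1 percent low.
print(loading_error_percent(100e3, 10e6))  # ≈ 0.99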

*Keithley Model 617 Electrometer.


FIGURE 15.2.20 Typical digital electrometer.

A triaxial input cable (Fig. 15.2.21) is used with the electrometer to minimize input-lead leakage and cable-capacitance problems. The outer shield, connected to the LO input, can be grounded. The inner shield is driven by the electrometer's unity-gain low-impedance output, which maintains the guard voltage essentially equal to the high-impedance input signal HI. Insulator leakage through r becomes negligible since there is essentially no voltage difference between the center conductor and the guard.

Digital nanovoltmeters are digital voltmeters that are optimized for nanovolt measurements. A state-of-the-art meter uses a JFET input amplifier to obtain an input resistance of 10⁹ Ω and a 5-nF input capacitance. Voltage ranges of 2 mV to 1000 V are available. Specified accuracy for a 24-h period is ±(0.006 percent + 5 counts) when properly zeroed. A special connector having a low thermoelectric voltage is used for the 200-mV and lower ranges.

FIGURE 15.2.21 Guard shield minimizes input leakage.


CHAPTER 15.3

TRANSDUCER-INPUT MEASUREMENT SYSTEMS
Francis T. Thompson

Transducers are used to convert the quantity to be measured into an electric signal. Transducer types and their input and output quantities are discussed in Sec. 8.

TRANSDUCER SIGNAL CIRCUITS

Amplifiers are often required to increase the voltage and power levels of the transducer output and to prevent excessive loading of the transducer by the measurement system. The design of the amplifier is a function of the performance specifications, which include required amplification in terms of voltage gain or power gain, frequency response, distortion permissible at a given maximum signal level, dynamic range, residual noise permissible at the output, gain stability, permissible drift (for dc amplifiers), operating-temperature range, available supply voltage, permissible power consumption and dissipation, reliability, size, weight, and cost.

Capacitive-coupled amplifiers (ac amplifiers) are used when it is not necessary to preserve the dc component of the signal. AC amplifiers are used with transducers that produce a modulated carrier signal. Low-level amplifiers increase the signal from millivolts to several volts. The two-stage class A capacitor-coupled transistor amplifier of Fig. 15.3.1 has a power gain of 64 dB and a voltage gain of approximately 1000. Design information, an explanation of biasing, and equations for calculating the input impedance and various gain values are given in Walston and Miller (see Chap. 15.5). An excellent ac amplifier can be obtained by connecting a coupling capacitor in series with resistor R1 of the operational amplifier of Fig. 15.3.4. The capacitor should be selected so that C > 1/(2πfR1), where f is the lowest signal frequency to be amplified. Class B transformer-coupled amplifiers, which are often used for higher power-output stages, are also discussed in Walston and Miller (see Chap. 15.5).

FIGURE 15.3.1 Two-stage cascaded common-emitter capacitive audio amplifier.

Direct-coupled amplifiers are used when the dc component of the signal must be preserved. These designs are more difficult than those of capacitive-coupled amplifiers because changes in transistor leakage currents, gain, and base-emitter voltage drops can cause the output voltage to change for a fixed input voltage, i.e., cause a dc-stability problem. The dc stability of an amplifier is determined primarily by the input stage, since the equivalent input drift introduced by subsequent stages is equal to their drift divided by the preceding gain. Balanced input stages, such as the differential amplifier of Fig. 15.3.2, are widely used because drift components tend to cancel. By selecting a pair of transistors, Q1 and Q2, which are matched for current gain within 10 percent and base-to-emitter voltage within 3 mV, the temperature drift referred to the input can be held to within 10 µV/°C. Transistor Q3 acts as a constant-current source and thereby increases the ability of the amplifier to reject common-mode input voltages. For applications where the generator resistance rg is greater than 50 kΩ, current offset becomes dominant, and lower overall drift can be obtained by using field-effect transistors in place of the bipolar transistors Q1 and Q2. Voltage drifts as low as 0.4 µV/°C can be obtained using integrated-circuit operational amplifiers (see Table 15.3.1).

FIGURE 15.3.2 Differential amplifier.

Operational amplifiers are widely used for amplifying low-level ac and dc signals. They usually consist of a balanced input stage, a number of direct-coupled intermediate stages, and a low-impedance output stage. They provide high open-loop gain, which permits the use of a large amount of gain-stabilizing negative feedback. High-gain bipolar and JFET transistors are often used in the balanced input stage. The bipolar input provides lower voltage offset and offset voltage drift, while the JFET input provides higher input impedance, lower bias current, and lower offset current, as can be seen in Table 15.3.1, which compares the typical specifications of three high-performance operational amplifiers. The chopper-stabilized CMOS operational amplifier provides the low offset and bias currents of FET transistors while using internal chopper stabilization to achieve excellent offset voltage characteristics.

With input voltages e1, e2, and ecm equal to zero, the output of the amplifier of Fig. 15.3.3 will have an offset voltage Eos defined by

Eos = Vos(R1 + R2)/R1 + Ib1R2 − Ib2R3R4(R1 + R2)/[R1(R3 + R4)]

TABLE 15.3.1 Operational Amplifier Comparison

Typical parameter                    Temp.   Bipolar input    JFET input       Chopper-stabilized CMOS
(except where noted)                         (Harris 5130)    (Harris 5170)    (Siliconix 7652)

Input characteristics
  Offset voltage                     25°C    10 µV            100 µV           0.7 µV
  Maximum offset voltage             25°C    25 µV            300 µV           5 µV
  Avg. offset voltage drift          Full    0.4 µV/°C        2 µV/°C          0.01 µV/°C
  Bias current                       25°C    1 nA             0.02 nA          0.015 nA
  Bias current avg. drift            Full    20 pA/°C         3 pA/°C
  Maximum offset current             25°C    2 nA             0.03 nA          0.06 nA
  Offset current avg. drift          Full    20 pA/°C         0.3 pA/°C
  Differential input resistance      25°C    30 × 10⁶ Ω       6 × 10¹⁰ Ω       10¹² Ω

Transfer characteristics
  Voltage gain                       25°C    140 dB           116 dB           150 dB
  Common-mode rejection ratio        25°C    120 dB           100 dB           130 dB


FIGURE 15.3.3 Operational amplifier.

where Ib1 and Ib2 are bias currents that flow into the amplifier when the output is zero and Vos is the input offset voltage that must be applied across the input terminals to achieve zero output. The input bias current specified for an operational amplifier is the average of Ib1 and Ib2. Since the bias currents are approximately equal, it is desirable to choose the parallel combination of R3 and R4 equal to the parallel combination of R1 and R2. For this case, Eos = Vos(R1 + R2)/R1 + IosR2, where the offset current Ios = Ib1 − Ib2.

In the ideal case, where Vos and Ios are zero, the output voltage E0 as a function of the signal voltages e1 and e2 and the common-mode voltage ecm is

E0 = −e1R2/R1 + e2R4(R1 + R2)/[R1(R3 + R4)] + ecm[R4(R1 + R2) − R2(R3 + R4)]/[R1(R3 + R4)]

Maximum common-mode rejection can be obtained by choosing R4/R3 = R2/R1, which reduces the above equation to E0 = R2(e2 − e1)/R1. The common-mode signal is not entirely rejected in an actual amplifier but will be reduced relative to a differential signal by the common-mode rejection ratio of the amplifier. Minimum drift and maximum common-mode rejection, which are important when terminating the wires from a remote transducer, can be obtained by selecting R3 = R1 and R4 = R2.

Where common-mode voltages are not a problem, the simple inverting amplifier (Fig. 15.3.4) is obtained by replacing ecm and e2 with short circuits and combining R3 and R4 into one resistor, which is equal to the parallel equivalent of R1 and R2. The input impedance of this circuit is equal to R1. Similarly, the simple noninverting amplifier (Fig. 15.3.5) is obtained by replacing ecm and e1 with short circuits. The voltage follower is a special case of the noninverting amplifier where R1 = ∞ and R2 = 0. The input impedance of the circuit of Fig. 15.3.5 is equal to the parallel combination of the common-mode input impedance of the amplifier and the impedance Zid[1 + AR1/(R1 + R2)], where Zid is the differential-mode amplifier input impedance and A is the amplifier gain. Where very high input impedances are required, as in electrometer circuits, operational amplifiers having field-effect input transistors are used to provide input resistances up to 10¹² Ω.
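The ideal expression can be checked numerically; a minimal sketch (the resistor values are illustrative):

def diff_amp_output(e1, e2, ecm, r1, r2, r3, r4):
    """Ideal output (Vos = Ios = 0) of the amplifier of Fig. 15.3.3,
    evaluated from the expression above."""
    k = r4 * (r1 + r2) / (r1 * (r3 + r4))
    return -e1 * r2 / r1 + e2 * k + ecm * (k - r2 / r1)

# With R4/R3 = R2/R1 = 100 the common-mode term cancels and
# E0 = R2*(e2 - e1)/R1 even with 5 V of common-mode input:
print(diff_amp_output(0.010, 0.020, 5.0, 1e3, 100e3, 1e3, 100e3))  # 1.0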

FIGURE 15.3.4 Inverting amplifier.

FIGURE 15.3.5 Noninverting amplifier.


An ac-coupled amplifier can be obtained by connecting a coupling capacitor in series with the input resistor of Fig. 15.3.4. The capacitor value should be selected so that the capacitive reactance at the lowest frequency of interest is lower than the amplifier input impedance R1.

Operational amplifiers are useful for realizing filter networks and integrators. Other applications include absolute-value circuits, logarithmic converters, nonlinear amplification, voltage-level detection, function generation, and analog multiplication and division. Care should be taken not to exceed the maximum supply voltage and maximum common-mode voltage ratings and also to be sure that the load resistance RL is not smaller than that permitted by the rated output.

The charge amplifier is used to amplify the ac signals from variable-capacitance transducers and from transducers having a capacitive impedance, such as piezoelectric transducers. In the simplified circuit of Fig. 15.3.6a, the current through Cs is equal to the current through C1, and therefore

Cs(∂es/∂t) + es(∂Cs/∂t) = −C1(deo/dt)

For the piezoelectric transducer, Cs is assumed constant, and the gain is deo/des = −Cs/C1. For the variable-capacitance transducer, es is constant, and the gain is deo/dCs = −es/C1.

A practical circuit requires a resistance across C1 to limit output drift. The value of this resistance must be greater than the impedance of C1 at the lowest frequency of interest. A typical operational amplifier having field-effect input transistors has a specified maximum input current of 1 nA, which will result in an output offset of only 0.1 V if a 100-MΩ resistance is used across C1. It is preferable to provide a high effective resistance by using a network of resistors, each of which has a value of 1 MΩ or less. The effective feedback resistance R′ in the practical circuit of Fig. 15.3.6b is given by R′ = R3(R1 + R2)/R2, assuming that R3 > 10R1R2/(R1 + R2). Output drift is further reduced by selecting R4 = R3 + R1R2/(R1 + R2). Resistor R5 is used to provide an upper-frequency rolloff at f = 1/(2πR5Cs), which improves the signal-to-noise ratio.

FIGURE 15.3.6 Charge amplifier.

Amplifier gain stability is enhanced by the use of feedback, since the gain of the amplifier with feedback is relatively insensitive to changes in the open-loop amplifier gain G provided that the loop gain GH is high. For example, if the open-loop gain G changes by 10 percent from 100,000 to 90,000 and the feedback divider gain H remains constant at 0.01, the closed-loop gain G/(1 + GH) ≈ 99.9 changes only 0.011 percent. If a high closed-loop gain is required, simply decreasing the value of H will reduce the value of GH and thereby reduce the accuracy. The desired accuracy can be maintained by cascading two or more amplifiers, thereby reducing the closed-loop gain required from each amplifier. Each amplifier has its own individual feedback, and no feedback is applied around the cascaded amplifiers. In this case, it is unwise to cascade more stages than needed to achieve a reasonable value of GH in each individual amplifier, since excessive loop gain will make the individual stages more prone to oscillation and the overall system will exhibit increased sensitivity to noise transients.
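The 10 percent example above is easy to verify; a minimal sketch:

def closed_loop_gain(g, h):
    """Closed-loop gain G/(1 + GH) of a feedback amplifier."""
    return g / (1.0 + g * h)

g_nominal = closed_loop_gain(100_000, 0.01)  # ≈ 99.90
g_degraded = closed_loop_gain(90_000, 0.01)  # open-loop gain down 10 percent
print((g_nominal - g_degraded) / g_nominal * 100)  # ≈ 0.011 percent change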


FIGURE 15.3.7 Chopper amplifier.

Chopper amplifiers are used for amplifying dc signals in applications requiring very low drift. The dc input signal is converted by a chopper to a switched ac signal having an amplitude proportional to the input signal and a phase of 0 or 180° with respect to the chopper reference frequency, depending on the polarity of the input signal. This ac signal is amplified by an ac amplifier, which eliminates the drift problem, and then is converted back into a proportional dc output voltage by a phase-sensitive demodulator. The frequency response of a chopper amplifier is theoretically limited to one-half the carrier frequency. In practice, however, the frequency response is much lower than the theoretical limit. High-frequency components in the input signal exceeding the theoretical limit are removed to avoid unwanted beat signals with the chopper frequency.

The chopper amplifier of Fig. 15.3.7 consists of a low-pass filter to attenuate high frequencies, an input chopper, an ac amplifier, a phase-sensitive demodulator, and a low-pass output filter to attenuate the chopper ripple component in the output signal. The frequency response of this amplifier is limited to a small fraction of the chopper frequency.

The frequency-response limitation of the chopper amplifier can be overcome by using the chopper amplifier for the dc and low-frequency signals and a separate ac amplifier for the higher-frequency signals, as shown in Fig. 15.3.8. Simple shunt field-effect-transistor choppers Q1 and Q2 are used for modulation and detection, respectively. Capacitor CT is used to minimize spikes at the input to the ac amplifier. A CMOS auto-zeroed operational amplifier, which contains a built-in chopper amplifier, has typical specifications of 0.7 µV input offset voltage, 0.01 µV/°C average offset voltage drift, and 15 pA input bias current (see Table 15.3.1).

Modulator-demodulator systems avoid the drift problems of dc amplifiers by using a modulated carrier which can be amplified by ac amplifiers (Fig. 15.3.9). Inputs and outputs may be either electrical or mechanical. The varactor modulator (Fig. 15.3.10) takes advantage of the variation of diode-junction capacitance with voltage to modulate a sinusoidal carrier. The carrier and signal voltages applied to the diodes are small, and the diodes never reach a low-resistance condition. Input bias currents of the order of 0.01 pA are possible. For zero signal input, the capacitance values of the diodes are equal, and the carrier signals coupled by the diodes cancel. A dc input signal will increase the capacitance of one diode while decreasing the capacitance of the other and thereby produce a carrier imbalance signal, which is coupled to the ac amplifier by capacitor C2. A phase-sensitive demodulator, such as field-effect transistor Q2 of Fig. 15.3.8, may be used to recover the dc signal. The magnetic amplifier and second-harmonic modulator can also be used to convert dc input signals to modulation on a carrier, which is amplified and later demodulated. Mechanical-input modulators include ac-driven potentiometers, linear variable differential transformers, and synchros. The amplified ac carrier can be converted directly into a mechanical output by a two-phase induction servomotor.

FIGURE 15.3.8 Chopper-stabilized dc amplifier.

FIGURE 15.3.9 Modulator-demodulator system.


FIGURE 15.3.10 Basic varactor modulator.


FIGURE 15.3.11 Analog integrator.

Integrators are often required in systems where the transducer signal is a derivative of the desired output, e.g., when an accelerometer is used to measure the velocity of a vibrating object. The output of the analog integrator of Fig. 15.3.11 consists of an integrated signal term plus error terms caused by the offset voltage Vos and the bias currents Ib1 and Ib2:

e0 = (1/R1C) ∫e1 dt + (1/R1C) ∫(Vos + Ib1R1 − Ib2R2) dt

These error terms will cause the integrator to saturate unless the integrator is reset periodically or a feedback path exists which tends to drive the output toward a given level within the linear range. In the accelerometer integrator, accurate integration may not be required below a given frequency, and the desired stabilizing feedback path can be introduced by incorporating a large effective resistance across the capacitor using the technique shown in Fig. 15.3.6b.

Digital processing of analog quantities is frequently used because of the availability of high-performance analog-to-digital (A/D) converters, digital-to-analog (D/A) converters, microprocessors, and other special digital processors. The analog input signal (Fig. 15.3.12) is converted into a sequence of digital values by the A/D converter. The digital values are processed by the microprocessor (see "The microprocessor" below) or other digital processor, which can be programmed to provide a wide variety of functions, including linear or nonlinear gain characteristics, digital filtering, integration, differentiation, modulation, linearization of signals from nonlinear transducers, computation, and self-calibration. The Texas Instruments TMS 320 series of digital processors are specially designed for these applications. The digital output may be used directly or converted back into an analog signal by the D/A converter.

FIGURE 15.3.12 Digital processing of analog signals.

Commercial A/D converter modules are available with 15-bit resolution and 17-µs conversion times using the successive-approximation technique (Analog Devices AD 376, Burr Brown ADC 76). Monolithic integrated-circuit A/D converters are available with 12-bit resolution and 6-µs conversion times (Harris HI-774A). An 18-bit D/A converter with a full-scale current output of 100 mA and a compliance voltage range of ±12 V has been built at the National Bureau of Standards (Schoenwetter). It exhibits a settling time to 1/2 least significant bit (2 ppm) of less than 20 µs. Commercial 16- and 18-bit D/A converters are available with 2-mA full-scale outputs (Analog Devices DAC1136/1138, Burr Brown DAC 70). The application of A/D converters, D/A converters, and sample-and-hold (S/H) circuits requires careful consideration of their static and dynamic characteristics.

The microprocessor is revolutionizing instrumentation and control. The digital processing power that previously cost many thousands of dollars has been made available on a silicon integrated-circuit chip for only a few dollars. A microcomputer system (Fig. 15.3.13) consists of a microprocessor central processing unit (CPU), memory, and input-output ports. A typical microprocessor-based system consists of the 8051 microprocessor, which provides 4 kbytes of read-only memory (ROM), 128 bytes of random-access memory (RAM), 2 timers, and 32 input-output lines, and a number of additional RAM and ROM memory chips as needed for a particular application. The microprocessor can provide a number of features at little incremental cost by means of software modification. Typical features include automatic ranging, self-calibration, self-diagnosis, data conversion, data processing, linearization, regulation, process monitoring, and control. Instruments using microprocessors are described in Chaps. 15.2 and 15.4.

FIGURE 15.3.13 Microcomputer system.

Instrument systems can be automated by connecting the instruments to a computer using the IEEE-488 general-purpose interface bus (Santoni). The bus can connect as many as 14 individually addressable instruments to a host computer port. Full control capability requires the selection of instruments that permit bus control of front-panel settings as well as bus data transfers.

Voltage comparators are useful in a number of applications, including signal comparison with a reference, positive and negative signal-peak detection, zero-crossing detection, A/D successive-approximation converter systems, crystal oscillators, voltage-controlled oscillators, magnetic-transducer voltage detection, pulse generation, and square-wave generation. Comparators with input offset voltages of 2 mV and input offset currents of 5 nA are commercially available.

Analog switches using field-effect-transistor (FET) technology are used in general-purpose high-level switching (±10 V), multiplexing, A/D conversion, chopper applications, set-point adjustment, and bridge circuits. Logic and schematic diagrams of a typical bilateral switch are shown in Fig. 15.3.14. The switch provides the required isolation between the data signal and the control signal. Typical switches provide zero offset voltage, an on resistance of 35 Ω, and a leakage current of 0.04 nA. Multiple-channel switches are commercially available in a single package.

Output Indicators. A variety of analog and digital output indicators can be used to display and record the output from the signal-processing circuitry.

FIGURE 15.3.14 Bilateral analog switch.


CHAPTER 15.4

BRIDGE CIRCUITS, DETECTORS, AND AMPLIFIERS
Francis T. Thompson

PRINCIPLES OF BRIDGE MEASUREMENTS

Bridge circuits are used to determine the value of an unknown impedance in terms of other impedances of known value. Highly accurate measurements are possible because a null condition is used to compare ratios of impedances. The most common bridge arrangement (Fig. 15.4.1) contains four branch impedances, a voltage source, and a null detector. Galvanometers, alone or with chopper amplifiers, are used as null detectors for dc bridges, while telephone receivers, vibration galvanometers, and tuned amplifiers with suitable detectors and indicators are used for null detection in ac bridges.

FIGURE 15.4.1 Basic impedance bridge.

The voltage across an infinite-impedance detector is

Vd = (Z1Z3 − Z2Zx)E / [(Z1 + Z2)(Z3 + Zx)]

If the detector has a finite impedance Z5, the current in the detector is

Id = (Z1Z3 − Z2Zx)E / [Z5(Z1 + Z2)(Z3 + Zx) + Z1Z2(Z3 + Zx) + Z3Zx(Z1 + Z2)]

where E is the potential applied across the bridge terminals. A null or balance condition exists when there is no potential across the detector. This condition is satisfied, independent of the detector impedance, when Z1Z3 = Z2Zx. Therefore, at balance, the value of the unknown impedance Zx can be determined in terms of the known impedances Z1, Z2, and Z3:

Zx = Z1Z3/Z2

Since the impedances are complex quantities, balance requires that both magnitude and phase-angle conditions be met: |Zx| = |Z1| · |Z3|/|Z2| and θx = θ1 + θ3 − θ2. Two of the known impedances are usually fixed impedances, while the third impedance is adjusted in resistance and reactance until balance is attained.
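The balance condition can be exercised numerically with complex arithmetic; a minimal sketch (the arm impedances are illustrative):

def detector_voltage(z1, z2, z3, zx, e):
    """Open-circuit detector voltage of the bridge of Fig. 15.4.1
    (infinite detector impedance)."""
    return (z1 * z3 - z2 * zx) * e / ((z1 + z2) * (z3 + zx))

# The detector nulls when Zx = Z1*Z3/Z2, whatever the arm impedances:
z1, z2, z3 = 100 + 0j, 1000 + 0j, 50 + 20j
print(abs(detector_voltage(z1, z2, z3, z1 * z3 / z2, 10.0)))  # 0.0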


The sensitivity of the bridge can be expressed in terms of the incremental detector current ∆Id for a given small per-unit deviation δ of the adjustable impedance from the balance value. If Z1 is adjusted, δ = ∆Z1/Z1 and

∆Id = Z3ZxEδ / {(Z3 + Zx)²[Z5 + Z1Z2/(Z1 + Z2) + Z3Zx/(Z3 + Zx)]}

where Z5 is the detector impedance. If a high-input-impedance amplifier is used for the detector and the impedance Z5 can be considered infinite, the sensitivity can be expressed in terms of the incremental input voltage to the detector ∆Vd for a small deviation from balance:

∆Vd = Z3ZxEδ/(Z3 + Zx)² = Z1Z2Eδ/(Z1 + Z2)²

where δ = ∆Z1/Z1 and ∆Z1 is the deviation of impedance Z1 from its balance value. Maximum sensitivity occurs when the magnitudes of Z3 and Zx are equal (which for balance implies that the magnitudes of Z1 and Z2 are equal). Under this condition, ∆Vd = Eδ/4 when the phase angles θ3 and θx are equal; ∆Vd = Eδ/2 when the phase angles θ3 and θx are in quadrature; and ∆Vd is infinite when θ3 = −θx, as is the case with lossless reactive components of opposite sign.

In practice, the value of the adjustable impedance must be sufficiently large to ensure that the resolution provided by the finest adjusting step permits the desired precision to be obtained. This value may not be compatible with the highest sensitivity, but adequate sensitivity can be obtained for an order-of-magnitude difference between Z3 and Zx or Z1 and Z2, especially if a tuned-amplifier detector is used.
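For a resistive bridge the unbalance voltage follows directly from the last expression; a minimal sketch:

def unbalance_voltage(e, z1, z2, delta):
    """High-impedance-detector unbalance voltage Z1*Z2*E*delta/(Z1 + Z2)**2
    for fractional unbalance delta = dZ1/Z1."""
    return z1 * z2 * e * delta / (z1 + z2) ** 2

# Equal-arm resistive bridge, 10 V applied, 0.1 percent unbalance:
print(unbalance_voltage(10.0, 1000.0, 1000.0, 1e-3))  # 2.5 mV (= E*delta/4)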

Interchanging the source and detector can be shown to be equivalent to interchanging impedances Z1 and Z3. This interchange does not change the equation for balance but does change the sensitivity of the bridge. For a fixed applied voltage E, higher sensitivity is obtained with the detector connected from the junction of the two high-impedance arms to the junction of the two low-impedance arms.

The source voltage must be carefully selected to ensure that the allowable power dissipation and voltage ratings of the known and unknown impedances of the bridge are not exceeded. If the bridge impedances are low with respect to the source impedance Zs, the bridge-terminal voltage E will be lowered. This can adversely affect the sensitivity, which is proportional to E. The source for an ac bridge should provide a pure sinusoidal voltage, since the harmonic voltages will usually not be nulled when balance is achieved at the fundamental frequency. A tuned detector is helpful in achieving an accurate balance.

Balance Convergence. The process of balancing an ac bridge consists of making successive adjustments of two parameters until a null is obtained at the detector. It is desirable that these parameters do not interact and that convergence be rapid. The equation for balance can be written in terms of resistances and reactances as

Rx + jXx = (R1 + jX1)(R3 + jX3)/(R2 + jX2)

Balance can be achieved by adjusting any or all of the six known parameters, but only two of them need be adjusted to achieve the required equality of both magnitude and phase (or real and imaginary components). In a ratio-type bridge, one of the arms adjacent to the unknown, either Z1 or Z3, is adjusted. Assuming that Z1 is adjusted, then to make the resistance adjustment independent of the change in the corresponding reactance, the ratio (R3 + jX3)/(R2 + jX2) must be either real or imaginary but not complex. If this ratio is equal to the real number k, then for balance Rx = kR1 and Xx = kX1. In a product-type bridge, the arm opposite the unknown, Z2, is adjusted for balance, and the product Z1Z3 must be either real or imaginary to make the resistance adjustment independent of the reactance adjustment.

Near balance, the denominator of the equation giving the detector voltage (or current) changes little with the varied parameter, while the numerator changes considerably. The usual convergence loci, which consist of circular segments, can be simplified to obtain linear convergence loci by assuming that the detector voltage near balance is proportional to the numerator, Z1Z3 − Z2Zx. Values of this quantity can be plotted on the complex plane. When only a single adjustable parameter is varied, a straight-line locus will be produced, as shown in Fig. 15.4.2. Varying the other adjustable parameter will produce a different straight-line locus. The rate of convergence to the origin (balance condition) will be most rapid if these two loci are perpendicular, slow if they intersect at a small angle, and zero if they are parallel. The cases of independent resistance and reactance adjustments described above correspond to perpendicular loci.

FIGURE 15.4.2 Linearized convergence locus.

RESISTANCE BRIDGES

The Wheatstone bridge is used for the precise measurement of two-terminal resistances. The lower limit for accurate measurement is about 1 Ω, because contact resistance is likely to be several milliohms. For simple galvanometer detectors, the upper limit is about 1 MΩ, which can be extended to 10^12 Ω by using a high-impedance high-sensitivity detector and a guard terminal to substantially eliminate the effects of stray leakage resistance to ground. The Wheatstone bridge (Fig. 15.4.3), although historically older, may be considered a resistance version of the impedance bridge of Fig. 15.4.1, and therefore the sensitivity equations are applicable. At balance

Rx = R1R3/R2

Known fixed resistors, having values of 1, 10, 100, or 1000 Ω, are generally used for two arms of the bridge, for example, R2 and R3. These arms provide a ratio R3/R2, which can be selected from 10^-3 to 10^3. Resistor R1, typically adjustable to 10,000 Ω in 1- or 0.1-Ω steps, is adjusted to achieve balance. The ratio R3/R2 should be chosen so that R1 can be read to its full precision. The magnitudes of R2 and R3 should be chosen to maximize the sensitivity while taking care not to draw excessive current.


FIGURE 15.4.4 Kelvin double bridge. A + B is typically 1000 Ω and a + b is typically 1000 Ω.

FIGURE 15.4.3 Wheatstone bridge.

An alternate arrangement using R1 and R2 for the ratio resistors and adjusting R3 for balance will generally provide a different sensitivity. The battery key B should be depressed first to allow any reactive transients to decay before the galvanometer key is depressed. The low-galvanometer-sensitivity key L should be used until the bridge is close to balance. The high-sensitivity key H is then used to achieve the final balance. Resistance RD provides critical damping between the galvanometer measurements. The battery connections to the bridge may be reversed and two separate resistance determinations made to eliminate any thermoelectric errors.

The Kelvin double bridge (Fig. 15.4.4) is used for the precise measurement of low-value four-terminal resistors in the range 1 mΩ to 10 Ω. The resistance to be measured X and a standard resistance S are connected by means of their current terminals in a series loop containing a battery, an ammeter, an adjustable resistor, and a low-resistance link l. Ratio-arm resistances A and B and a and b are connected to the potential terminals of resistors X and S as shown. The equation for balance is

X = S(A/B) + [bl/(a + b + l)](A/B − a/b)

If the ratio a/b is made equal to the ratio A/B, the equation reduces to X = S(A/B). The equality of the ratios should be verified after the bridge is balanced by removing the link. If a/b = A/B, the bridge will remain balanced. Lead resistances r1, r2, r3, and r4 between the bridge and the potential terminals of the resistors may contribute to ratio imbalance unless they have the same ratio as the arms to which they are connected. Ratio imbalance caused by lead resistance can be compensated by shunting a or b with a high resistance until balance is obtained with the link removed. In some bridges a fixed standard resistor S having a value of the same order of magnitude as resistor X is used. Fixed resistors of 10, 100, or 1000 Ω are used for two arms, for example, B and b, with B and b having equal values. Bridge balance is obtained by adjusting tap switches to select equal resistances for the other two arms, for example, A and a, from values adjustable up to 1000 Ω in 0.1-Ω steps. In other bridges, only decimal ratio resistors are provided for A, B, a, and b, and balance is obtained by means of an adjustable standard having nine steps of 0.001 Ω each and a Manganin slide bar of 0.0011 Ω.
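The role of the link term can be seen numerically. The Python sketch below (illustrative only; all values hypothetical) evaluates the full balance equation and shows that the link resistance drops out when a/b = A/B.

def kelvin_unknown(s, a_arm, b_arm, a_low, b_low, link):
    """Kelvin double-bridge balance: X = S*(A/B) + (b*l/(a + b + l))*(A/B - a/b)."""
    return s * a_arm / b_arm + (b_low * link / (a_low + b_low + link)) * (
        a_arm / b_arm - a_low / b_low)

# Matched ratios (a/b = A/B): the link term vanishes and X = S*A/B = 5 mohm.
print(kelvin_unknown(0.01, 500, 1000, 500, 1000, 0.002))
# A small ratio mismatch leaves a residual error contributed by the link:
print(kelvin_unknown(0.01, 500, 1000, 500.5, 1000, 0.002))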


The battery connection should be reversed and two separate resistance determinations made to eliminate thermoelectric errors.

The dc-comparator ratio bridge (Fig. 15.4.5) is used for very precise measurement of four-terminal resistors. Its accuracy and stability depend mainly on the turns ratio of a precision transformer. The master current supply is set at a convenient fixed value Ix. The zero-flux detector maintains an ampere-turn balance, IxNx = IsNs, by automatically adjusting the current Is from the slave supply as Nx is manually adjusted. A null reading on the galvanometer is obtained when IsRs = IxRx. Since the current ratio is precisely related to the turns ratio, the unknown resistance Rx = NxRs/Ns. Fractional-turn resolution for Nx can be obtained by diverting a fraction of the current Ix, as obtained from a decade current divider, through an additional winding on the transformer. Turns ratios have been achieved with an accuracy of better than 1 part in 10^7. The zero-flux detector operates by superimposing a modulating mmf on the core using modulation and detector windings in a second-harmonic modulator configuration. The limit sensitivity of the bridge is set by noise and is about 3 mA·turns.

FIGURE 15.4.5 Comparator ratio bridge.

Murray and Varley bridge circuits are used for locating faults in wire lines and cables. The faulted line is connected to a good line at one end by means of a jumper to form a loop. The resistance r of the loop is measured using a Wheatstone bridge. The loop is then connected as shown in Fig. 15.4.6 to form a bridge in which one arm contains the resistance Rx between the test set and the fault and the adjacent arm contains the remainder of the loop resistance. The galvanometer detector is connected across the open terminals of the loop, while the voltage supply is connected between the fault and the junction of fixed resistor R2 and variable resistor R3. When balance is attained

Rx = rR3/(R2 + R3)

where r is the resistance of the loop. Resistance Rx is proportional to the distance to the fault. In the Varley loop of Fig. 15.4.7, variable resistor R1 is adjusted to achieve balance and

Rx = (rR3 − R1R2)/(R2 + R3)

where r is the resistance of the loop.
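For orientation, a short Python sketch (not from the handbook; the conductor resistance per meter is a hypothetical value) of the Murray-loop arithmetic:

def murray_fault_distance(r_loop, r2, r3, ohms_per_meter):
    """Murray loop: Rx = r*R3/(R2 + R3); distance = Rx / (resistance per meter)."""
    rx = r_loop * r3 / (r2 + r3)
    return rx / ohms_per_meter

# Hypothetical: 180-ohm loop, R2 = 820 ohms, R3 = 180 ohms, 0.02 ohm/m of conductor
print(murray_fault_distance(180, 820, 180, 0.02))  # 1620 m to the fault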

FIGURE 15.4.6 Murray loop-bridge circuits.


FIGURE 15.4.7 Varley loop circuit.

INDUCTANCE BRIDGES

General. Many bridge types are possible, since the impedance of each arm may be a combination of resistances, inductances, and capacitances. A number of popular inductance bridges are shown in Fig. 15.4.8. In the balance equations L and M are given in henrys, C in farads, and R in ohms; ω is 2π times the frequency in hertz. The Q of an inductance is equal to ωL/R, where R is the series resistance of the inductance.

The symmetrical inductance bridge (Fig. 15.4.8a) is useful for comparing the impedance of an unknown inductance with that of a known inductance. An adjustable resistance is connected in series with the inductance having the higher Q, and the resistance and inductance values of this resistance are added to those of the associated inductance to obtain the impedance of that arm. If this series resistance is adjusted along with the known inductance to obtain balance, the resistance and reactance balances are independent and balance convergence is rapid. If only a fixed inductance is available, the series resistance is adjusted along with the ratio R3/R2 until balance is obtained. These adjustments interact, and the rate of convergence will be proportional to the Q of the unknown inductance. Care must be taken to avoid inductive coupling between the known and unknown inductances, since it will cause a measurement error.

The Maxwell-Wien bridge (Fig. 15.4.8b) is widely used for accurate inductance measurements. It has the advantage of using a capacitance standard, which is more accurate and easier to shield and produces practically no external field. R2 and C2 are usually adjusted since they provide a noninteracting resistance and inductance balance. If C2 is fixed and R2 and R1 or R3 are adjusted, the balance adjustments interact and balancing may be tedious.

Anderson's bridge (Fig. 15.4.8c) is useful for measuring a wide range of inductances with reasonable values of fixed capacitance. The bridge is usually balanced by adjusting r and a resistance in series with the unknown inductance. Preferred values for good sensitivity are R1 = R2 = R3/2 = Rx/2 and L/C = 2Rx². This bridge is also used to measure the residuals of resistors, using a substitution method to eliminate the effects of residuals in the bridge elements.

Owen's bridge (Fig. 15.4.8d) is used to measure a wide range of inductance values in terms of resistance and capacitance. The inductance and resistance balances are independent if R3 and C3 are adjusted. The bridge can also be balanced by adjusting R1 and R3. This bridge is useful for finding the incremental inductance of iron-cored inductors to alternating current superimposed on a direct current. The direct current can be introduced by connecting a dc-voltage source with a large series inductance across the detector branch. Low-impedance blocking capacitors are placed in series with the detector and the ac source.

Hay's bridge (Fig. 15.4.8e) is similar to the Maxwell-Wien bridge and is used for measuring inductances having large values of Q. The series R2C2 arrangement permits the use of smaller resistance values than the parallel arrangement. The frequency-dependent 1/Qx² term in the inductance equation is inconvenient, since the dials cannot be calibrated to indicate inductance directly unless the term is neglected, which causes a 1 percent error for Qx = 10.
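As an illustration, the following sketch applies the general balance relation Zx = Z1Z3/Z2 to the Maxwell-Wien case, assuming the common textbook arm labeling in which the parallel R2, C2 combination is opposite the unknown (Fig. 15.4.8b may letter the arms differently); component values are hypothetical.

import math

def maxwell_wien(r1, r2, r3, c2, f):
    """With Z2 = R2 || C2 opposite the unknown, Zx = Z1*Z3/Z2 gives
    Lx = C2*R1*R3, Rx = R1*R3/R2, and Qx = 2*pi*f*Lx/Rx = 2*pi*f*C2*R2."""
    lx = c2 * r1 * r3
    rx = r1 * r3 / r2
    return lx, rx, 2 * math.pi * f * lx / rx

# R1 = R3 = 1 kohm, R2 = 10 kohm, C2 = 0.1 uF at 1 kHz -> 0.1 H, 100 ohms, Q ~ 6.3
print(maxwell_wien(1000, 10e3, 1000, 0.1e-6, 1000))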


FIGURE 15.4.8 Inductance bridges: (a) symmetrical inductance bridge; (b) Maxwell-Wien bridge; (c) Anderson's bridge; (d) Owen's bridge; (e) Hay's bridge; (f) Campbell's bridge.


This bridge is also used for determining the incremental inductance of iron-cored reactors, as discussed for Owen's bridge.

Campbell's bridge (Fig. 15.4.8f) for measuring mutual inductance makes possible the comparison of unknown and standard mutual inductances having different values. The resistances and self-inductances of the primaries are balanced with the detector switches thrown to the right by adjusting L1 and R1. The switches are then thrown to the left, and the mutual-inductance balance is made by adjusting Ms. Care must be taken to avoid coupling between the standard and unknown inductances.

CAPACITANCE BRIDGES

Capacitance bridges are used to make precise measurements of capacitance and the associated loss resistance in terms of known capacitance and resistance values. Several different bridge circuits are shown in Fig. 15.4.9. In the balance equations R is given in ohms and C in farads, and ω is 2π times the frequency in hertz. The loss angle δ of a capacitor may be expressed either in terms of its series loss resistance rs, which gives tan δ = ωCrs, or in terms of the parallel loss resistance rp, in which case tan δ = 1/ωCrp.

FIGURE 15.4.9 Capacitance bridges: (a) series-resistance-capacitance bridge; (b) Wien bridge; (c) Schering’s bridge; (d) transformer bridge.


The series RC bridge (Fig. 15.4.9a) is a resistance-ratio bridge used to compare a known capacitance with an unknown capacitance. The adjustable series resistance is added to the arm containing the capacitor having the smaller loss angle δ.

The Wien bridge (Fig. 15.4.9b) is useful for determining the equivalent capacitance Cx and parallel loss resistance Rx of an imperfect capacitor, e.g., a sample of insulation or a length of cable. An important application of the Wien bridge network is its use as the frequency-determining network in RC oscillators.

Schering's bridge (Fig. 15.4.9c) is widely used for measuring capacitance and dissipation factors. The unknown capacitance is directly proportional to known capacitance C1. The dissipation factor ωCxRx can be measured with good accuracy using this bridge. The bridge is also used for measuring the loss angles of high-voltage power cables and insulators. In this application, the bridge is grounded at the R2/R3 node, thereby keeping the adjustable elements R2, R3, and C2 at ground potential.

The transformer bridge is used for the precise comparison of capacitors, especially for three-terminal shielded capacitors. A three-winding toroidal transformer having low leakage reactance is used to provide a stable ratio, known to better than 1 part in 10^7. In Fig. 15.4.9d capacitors C1 and C2 are being compared, and a balance scheme using inductive-voltage dividers a and b is shown. It is assumed that C1 > C2 and loss angle δ2 > δ1. In-phase current to balance any inequality in magnitude between C1 and C2 is injected through C5, while quadrature current is supplied by means of resistor R and current divider C3/(C3 + C4). The current divider permits the value of R to be kept below 1 MΩ. Fine adjustments are provided by dividers a and b. Na is the fraction of the voltage E1 that is applied to R, while Nb is the fraction of the voltage E2 applied to C5. δ1 is the loss angle of capacitor C1, and tan δ1 = ωC1r1, where r1 is the series loss resistance associated with C1. The impedance of C3 and C4 in parallel must be small compared with the resistance of R.

The substitution-bridge method is particularly valuable for determining the value of capacitance at radio frequency. The shunt-substitution method is shown for the series RC bridge in Fig. 15.4.10. Calibrated adjustable standards Rs and Cs are connected as shown, and the bridge is balanced in the usual manner with the unknown capacitance disconnected. The unknown is then connected in parallel with Cs, and Cs and Rs are readjusted to obtain balance. The unknown capacitance Cx and its equivalent series resistance Rx are determined by the rebalancing changes ∆Cs and ∆Rs in Cs and Rs, respectively: Cx = ∆Cs and Rx = ∆Rs(Cs1/Cx)², where Cs1 is the value of Cs in the initial balance.

FIGURE 15.4.10 Substitution measurement.

In series substitution the bridge arm is first balanced with the standard elements alone, the standard elements having an impedance of Zs1, and then the unknown is inserted in series with the standard elements. The standard elements are readjusted to an impedance Zs2 to restore balance. The unknown impedance Zx is equal to the change in the standard impedance, that is, Zx = Zs1 − Zs2. Measurement accuracy depends on the accuracy with which the changes in the standard values are known. The effects of residuals, stray capacitance, stray coupling, and inaccuracies in the impedances of the other three bridge arms are minimal, since these effects are the same with and without the unknown impedance. The proper handling of the leads used to connect the unknown impedance can be important.
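The shunt-substitution arithmetic is easily mechanized. In the sketch below (values hypothetical; the Rs change is assumed positive on rebalance), Cs1, Rs1 are the initial standard settings and Cs2, Rs2 the rebalanced ones.

def shunt_substitution(cs1, rs1, cs2, rs2):
    """Cx = delta-Cs; Rx = delta-Rs * (Cs1/Cx)**2, per the relations in the text."""
    cx = cs1 - cs2                      # standard is reduced by the added unknown
    rx = (rs2 - rs1) * (cs1 / cx) ** 2
    return cx, rx

# Initial balance 1000 pF, 10.0 ohms; rebalance at 780 pF, 10.4 ohms:
print(shunt_substitution(1000e-12, 10.0, 780e-12, 10.4))  # 220 pF, ~8.3 ohms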

FACTORS AFFECTING ACCURACY

Stray Capacitance and Residuals. The bridge circuits of Figs. 15.4.8 and 15.4.9 are idealized, since the stray capacitances that are inevitably present and the residual inductances associated with resistances and connecting leads have been neglected. These spurious circuit elements can disturb the balance conditions and


FIGURE 15.4.11 Stray capacitances in unshielded and ungrounded bridge.

FIGURE 15.4.12 Bridge with shields and ground.

result in serious measurement errors. Detailed discussions of the residuals associated with the various bridges are given in Souders. Shielding and grounding can be used to control errors caused by stray capacitance. Stray capacitances in an ungrounded, unshielded series RC bridge are shown schematically by C1 through C12 in Fig. 15.4.11. The elements of the bridge may be enclosed in a grounded metal shield, as shown schematically in Fig. 15.4.12. Shielding and grounding eliminate some capacitances and make the others definite localized capacitances, which act in a known way, as illustrated in Fig. 15.4.13. The capacitances associated with terminal D shunt the oscillator and have no adverse effect. The possible adverse effects of the capacitance associated with the output diagonal EF are overcome by using a shielded output transformer. If the shields are adjusted so that C22/C21 = Ra/Rb, the ratio of the bridge is independent of frequency. Capacitance C24 can be taken into account in the calibration of Cs, and capacitance C23 can be measured and its shunting effect across the unknown impedance calculated. Shielding, which is used at audio frequencies, becomes more necessary as the frequency and impedance levels are increased.

Guard circuits (Fig. 15.4.14) are often used at critical circuit points to prevent leakage currents from causing measurement errors. In an unguarded circuit, surface leakage current may bypass the resistor R and flow through the detector G, thereby giving an erroneous reading. If a guard ring surrounds the positive terminal

FIGURE 15.4.13 Schematic circuit of shielded and grounded bridge.

FIGURE 15.4.14 Leakage current in guarded circuit. (Leeds and Northrup)


post (as in the circuit of Fig. 15.4.14), the surface leakage current flows through the guard ring and a noncritical return path to the voltage source. A true reading is obtained since only the resistor current flows through the detector. Coaxial leads and twisted-wire pairs may be used in connecting impedances to a bridge arm in order to minimize spurious-signal pickup from electrostatic and electromagnetic fields. It is important to keep lead lengths short, especially at high frequencies.

BRIDGE DETECTORS AND AMPLIFIERS

Galvanometers are used for null detection in dc bridges. The permanent-magnet moving-coil d'Arsonval galvanometer is widely used. The suspension provides a restoring torque so that the coil seeks a zero position for zero current. A mirror is used in the sensitive suspension-type galvanometer to reflect light from a fixed source to a scale. This type of galvanometer is capable of sensitivities on the order of 0.001 µA per millimeter scale division but is delicate and subject to mechanical disturbances. Galvanometers for portable instruments generally have indicating pointers and use taut suspensions, which are less sensitive but more rugged and less subject to disturbances. Sensitivities are typically in the range of 0.5 µA per millimeter scale division. Galvanometers exhibit a natural mechanical frequency, which depends on the suspension stiffness and the moment of inertia. Overshoot and oscillatory behavior can be avoided without an excessive increase in response time if an external resistance of the proper value to produce critical damping is connected across the galvanometer terminals.

Null-detector amplifiers incorporating choppers or modulators (see Chap. 15.3) are used to amplify the null output signal from dc bridges to provide higher sensitivity and permit the use of rugged, less-sensitive microammeter indicators. Null-detector systems such as the L&N 9838 are available with sensitivities of 10 nV per division for a 300-Ω input impedance. The Guildline 9460A nanovolt amplifier uses a light-beam-coupled amplifier that can provide 7.5-mm deflection per nanovolt when used with a sensitive galvanometer. The input signal polarity to this amplifier may be reversed without introducing errors caused by hysteresis or input offset currents. This reversal capability is useful for balancing out parasitic or thermal emfs in the measured circuit.

Frequency-selective amplifiers are extensively used to increase the sensitivity of ac bridges. An ac amplifier with a twin-T network in the feedback loop provides full amplification at the selected frequency, but the gain falls off rapidly as the frequency is changed. Rectifiers or phase-sensitive detectors are used to convert the amplified ac signal into a direct current to drive a dc microammeter indicator. The General Radio 1232-A tuned amplifier and null detector, which is tunable from 20 Hz to 20 kHz with fixed-tuned frequencies of 50 and 100 kHz, provides a sensitivity better than 0.1 µV. Cathode-ray-tube displays using Lissajous patterns are also used to indicate the deviation from null conditions. Amplifier and detector circuits are described in Chap. 15.3.

MISCELLANEOUS MEASUREMENT CIRCUITS

Multifrequency LCR meters incorporate microprocessor control of ranging and decimal-point positioning, which permits automated measurement of inductance, capacitance, and resistance in less than 1 s. Typical of the new generation of microprocessor instrumentation is the General Radio 1689 Digibridge, which automatically measures a wide range of L, C, R, D, and Q values from 12 Hz to 100 kHz with a basic accuracy of 0.02 percent. This instrument compares sequential measurements rather than obtaining a null condition and therefore is not actually a bridge. Similar performance is provided by the Hewlett-Packard microprocessor-based 4274A (100 Hz to 100 kHz) and 4275A (10 kHz to 10 MHz) multifrequency LCR meters, which measure the impedance of the device under test at a selected frequency and compute the value of L, C, R, D, and Q as well as the impedance, reactance, conductance, susceptance, and phase angle with a basic accuracy of 0.1 percent.


FIGURE 15.4.15 Q meter. (Hewlett-Packard)

The Q meter is used to measure the quality factor Q of coils and the dissipation factor of capacitors; the dissipation factor is the reciprocal of Q. The Q meter provides a convenient method of measuring the effective values of inductors and capacitors at the frequency of interest over a range of 22 kHz to 70 MHz. The simplified circuit of a Q meter is shown in Fig. 15.4.15, where an unknown impedance of effective inductance L′ and effective resistance r′ is being measured. A sinusoidal voltage e is injected by the transformer secondary in series with the circuit containing the unknown impedance and the tuning capacitor C. The transformer secondary has an output impedance of approximately 1 mΩ. Unknown capacitors can be measured by connecting them between the HI and GND terminals while using a known standard inductor for L′.

Either the oscillator frequency or the tuning-capacitor value is adjusted to bring the circuit to approximate resonance, as indicated by a maximum voltage across capacitor C. At resonance XL′ = XC, where XL′ = 2πfL′, XC = 1/(2πfC), L′ is the effective inductance in henrys, C is the capacitance in farads, and f is the frequency in hertz. The current at resonance is I = e/R, where R is the sum of the resistances of the unknown and the internal circuit. The voltage across the capacitor C is VC = IXC = eXC/R, and the indicated circuit Q is equal to VC/e. In practice, the injected voltage e is known for each Q-range attenuator setting, and the meter is calibrated to indicate the value of Q. Corrections for residual resistances and reactances in the internal circuit become increasingly important at higher frequencies (see Chap. 15.2). For low values of Q, neglecting the difference between the resonance and the approximate resonance achieved by maximizing the capacitor voltage may result in an unacceptable error. Exact equations are given in Chap. 15.2.

The rf impedance analyzer is a microprocessor-based instrument designed to measure impedance parameter values of devices and materials in the rf and UHF regions. The basic measurement circuit consists of a signal source, an rf directional bridge, and a vector voltage ratio detector, as shown in Fig. 15.4.16. The measurement source produces a selectable 1-MHz to 1-GHz sinusoidal signal using frequency-synthesizer techniques. The unknown impedance Zx is connected to a test port of an rf directional bridge having resistor values Z equal to the 50-Ω characteristic impedance of the measuring circuit. The test-channel and reference-channel signal frequencies are converted to 100 kHz by the sampling i.f. converters in order to improve the accuracy of vector ratio detection. The vector ratio of the test-channel and reference-channel i.f. signals is detected for both the real and imaginary component vectors. The vector ratio, which is equal to e2/e1, is proportional to the reflection coefficient Γ, where Γ = (Z − Zx)/(Z + Zx). The microprocessor computes values of L, C, D, Q, R, X, θ, G, and B from the real and imaginary components Γx and Γy of the reflection coefficient Γ. The basic accuracy of the magnitude of Γ is better than 1 percent, while that of other parameters is typically better than 2 percent.
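Using the bridge's sign convention Γ = (Z − Zx)/(Z + Zx) quoted above, the unknown impedance follows by inversion. The sketch below is an illustrative transcription (not the instrument's actual algorithm) that recovers Zx from a measured complex Γ in a 50-Ω system.

def impedance_from_gamma(gamma: complex, z0: float = 50.0) -> complex:
    """Invert gamma = (Z0 - Zx)/(Z0 + Zx) to get Zx = Z0*(1 - gamma)/(1 + gamma)."""
    return z0 * (1 - gamma) / (1 + gamma)

# A measured gamma of -0.2 + 0.1j corresponds to roughly 73 - 15j ohms:
print(impedance_from_gamma(-0.2 + 0.1j))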
The twin-T measuring circuit of Fig. 15.4.17 is used for admittance measurements at radio frequencies. This circuit operates on a null principle similar to a bridge circuit, but it has an advantage in that one side of the oscillator and detector are common and therefore can be grounded. The substitution method is used with this circuit, and therefore the effect of stray capacitances is minimized. The circuit is first balanced to a null condition


FIGURE 15.4.16 Rf impedance analyzer. (Hewlett-Packard)

with the unknown admittance Gx + jBx unconnected, for which

GL = ω²RC1C2(1 + Co/C3)    and    L = 1/[ω²(Cb + C1 + C2 + C1C2/C3)]

The unknown admittance is then connected to terminals a and b, and a null condition is obtained by readjusting the variable capacitors to values C′a and C′b. The conductance Gx and the susceptance Bx of the unknown are proportional to the changes in the capacitance settings:

Gx = ω²RC1C2(C′a − Ca)/C3    Bx = ω(Cb − C′b)
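A direct Python transcription of these two relations (all values hypothetical, with ω = 2πf):

import math

def twin_t_admittance(f, r, c1, c2, c3, ca, ca_new, cb, cb_new):
    """Gx = w^2*R*C1*C2*(Ca' - Ca)/C3 and Bx = w*(Cb - Cb'), per the text."""
    w = 2 * math.pi * f
    gx = w**2 * r * c1 * c2 * (ca_new - ca) / c3
    bx = w * (cb - cb_new)
    return gx, bx

# 1 MHz, R = 1 kohm, C1 = C2 = 100 pF, C3 = 10 pF; Ca grows 2 pF, Cb drops 5 pF:
print(twin_t_admittance(1e6, 1000, 100e-12, 100e-12, 10e-12,
                        50e-12, 52e-12, 100e-12, 95e-12))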

FIGURE 15.4.17 Twin-T measuring circuit. (General Radio Co.)

Measurement of Coefficient of Coupling. Two coils are inductively coupled when their relative positions are such that lines of flux from each coil link with turns of the other coil. The mutual inductance M in henrys can be measured in terms of the voltage e induced in one coil by a rate of change of current di/dt in the other coil; M = −e1/(di2/dt) = −e2/(di1/dt). The maximum coupling between two coils of self-inductance L1 and L2 exists when all the flux from each of the coils links all the turns of the other coil; this condition produces the maximum value of mutual inductance, Mmax = √(L1L2). The coefficient of coupling k is defined as the ratio of the actual mutual inductance to its maximum value, k = M/√(L1L2). The value of mutual inductance can be measured using Campbell's mutual-inductance bridge. Alternatively, the mutual inductance can be measured using a self-inductance bridge. When the coils are connected in series with the mutual-inductance emf aiding the self-inductance emf (Fig. 15.4.18a), the total inductance La = L1 + L2 + 2M is measured. With the coils connected with


FIGURE 15.4.18 Mutual inductance connected for self-inductance measurement: (a) aiding configuration; (b) opposing configuration.

the mutual-inductance emf opposing the self-inductance emf (Fig. 15.4.18b), inductance Lb = L1 + L2 − 2M is measured. The mutual inductance is M = (La − Lb)/4.
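A numerical sketch of this series-aiding/series-opposing procedure (inductance values hypothetical):

import math

def coupling_coefficient(la, lb, l1, l2):
    """M = (La - Lb)/4; k = M/sqrt(L1*L2), per the relations above."""
    m = (la - lb) / 4
    return m, m / math.sqrt(l1 * l2)

# L1 = 100 mH, L2 = 25 mH; series-aiding 145 mH, series-opposing 105 mH:
print(coupling_coefficient(0.145, 0.105, 0.100, 0.025))  # M = 10 mH, k = 0.2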

FIGURE 15.4.19 Burrows permeameter.


Permeameters are used to test magnetic materials. By simulating the conditions of an infinite solenoid, the magnetizing force H can be computed from the ampere-turns per unit length. When H is reversed, the change in flux linkages in a test coil induces an emf whose time integral can be measured by a ballistic galvanometer. The Burrows permeameter (Fig. 15.4.19) uses two magnetic specimen bars, S1 and S2, usually 1 cm in diameter and 30 cm long, joined by soft-iron yokes. High precision is obtainable for magnetizing forces up to 300 Oe. The currents in magnetizing windings M1 and M2 and in compensating windings A1, A2, A3, and A4 are adjusted independently to obtain uniform induction over the entire magnetic circuit. Windings A1, A2, A3, and A4 compensate for the reluctance of the joints. The reversing switches are mechanically coupled and operate simultaneously. Test coils a and c each have n turns, while each half of the test coil b has n/2 turns. Coils a and b are connected in opposing polarity to the galvanometer when the switch is in position b, while coils a and c are opposed across the galvanometer for switch position c. Potentiometer P1 is adjusted to obtain the desired magnetizing force, and potentiometers P2 and P3 are adjusted so that no galvanometer deflection is obtained on magnetizing-current reversal with the switches in either position b or c. This establishes uniform flux density at each coil. The switch is now set at position a, and the galvanometer deflection d is noted when the magnetizing current is reversed. The values of B in gauss and H in oersteds can be calculated from

H = 0.4πNI/l        B = 10^8 dkR/(2an) − [(A − a)/a]H

where

N = turns of coil M1
I = current in coil M1 (A)
l = length of coil M1 (cm)
d = galvanometer deflection
k = galvanometer constant
R = total resistance of test coil a circuit
a = area of specimen (cm²)
A = area of test coil (cm²)
n = turns in test coil a

The term (A − a)H/a is a small correction term for the flux in the space between the surface of the specimen and the test coil.
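The following sketch evaluates these two relations directly (all quantities hypothetical and in the cgs units of the text):

import math

def burrows_b_h(n_mag, i_mag, l_mag, d, k, r, a_spec, a_coil, n_test):
    """H = 0.4*pi*N*I/l (Oe); B = 1e8*d*k*R/(2*a*n) - ((A - a)/a)*H (G)."""
    h = 0.4 * math.pi * n_mag * i_mag / l_mag
    b = 1e8 * d * k * r / (2 * a_spec * n_test) - ((a_coil - a_spec) / a_spec) * h
    return h, b

# N = 300 turns, I = 0.5 A, l = 30 cm; d = 12 div, k = 1e-8, R = 500 ohms,
# specimen area 0.785 cm^2, test-coil area 1.0 cm^2, n = 100 turns:
print(burrows_b_h(300, 0.5, 30, 12, 1e-8, 500, 0.785, 1.0, 100))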

Other permeameters, such as the Fahy permeameter, which requires only a single specimen, the Sandford-Winter permeameter, which uses a single specimen of rectangular cross section, and Ewing's isthmus permeameter, which is useful for magnetizing forces as high as 24,000 G, are discussed in Harris.

The frequency standard of the National Institute of Standards and Technology (formerly the National Bureau of Standards) is based on atomic resonance of the cesium atom and is accurate to 1 part in 10^13. The second is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the atom of cesium 133. Reference frequency signals are transmitted by the NIST radio stations WWV and WWVH at 2.5, 5, 10, and 15 MHz. Pulses are transmitted to mark the seconds of each minute. In alternate minutes during most of each hour, 500- or 600-Hz audio tones are broadcast. A binary-coded-decimal time code is transmitted continuously on a 100-Hz subcarrier. The carrier and modulation frequencies are accurate to better than 1 part in 10^11. These frequencies are offset by a known and stable amount relative to the atomic-resonance frequency standard to provide "Coordinated Universal Time" (UTC), which is coordinated through international agreements by the International Time Bureau. UTC is maintained within ±0.9 s of the UT1 time scale used for astronomical measurements by adding leap seconds about once per year to UTC, depending on the behavior of the earth's rotation.

Quartz-crystal oscillators are used as secondary standards for frequency and time-interval measurement. They are periodically calibrated using the standard radio transmissions. Frequency measurements can be made by comparing the unknown frequency with a known frequency, by counting cycles over a known time interval, by balancing a frequency-sensitive bridge, or by using a calibrated resonant circuit. Frequency-comparison methods include using Lissajous patterns on an oscilloscope and heterodyne measurement methods. In Fig. 15.4.20, the frequency to be measured is compared with a harmonic of the 100-kHz reference oscillator. The difference frequency, lying between 0 and 50 kHz, is selected by the low-pass filter and


FIGURE 15.4.20 Heterodyne frequency-comparison method.

compared with the output of a calibrated audio oscillator using Lissajous patterns. Alternately, the difference frequency and the audio-oscillator frequency may be applied to another detector capable of providing a zero output frequency.

Digital frequency meters provide a convenient and accurate means for measuring frequency. The unknown frequency is counted for a known time interval, usually 1 or 10 s, and displayed in digital form. The time interval is derived by counting pulses from a quartz-crystal oscillator reference. Frequencies as high as 50 MHz can be measured by using scalers (frequency dividers). Frequencies as high as 110 GHz are measured using heterodyne frequency-conversion techniques. At low frequencies, for example, 60 Hz, better resolution is obtained by measuring the period T = 1/f. A counter with a built-in computer is available, which measures the period at low frequencies and automatically calculates and displays the frequency.

A frequency-sensitive bridge can be used to measure frequency to an accuracy of about 0.5 percent if the impedance elements are known. The Wien bridge of Fig. 15.4.21 is commonly used, R3 and R4 being identical slide-wire resistors mounted on a common shaft. The equations for balance are f = 1/[2π√(R3R4C3C4)] and R1/R2 = R4/R3 + C3/C4. In practice, the values are selected so that R3 = R4, C3 = C4, and R1 = 2R2. Slide wire r, which has a total resistance of R1/100, is used to correct any slight tracking errors in R3 and R4. Under these conditions f = 1/(2πR4C4). A filter is needed to reject harmonics if a null indicator is used, since the bridge is not balanced at harmonic frequencies.

Time intervals can be measured accurately and conveniently by gating a reference frequency derived from a quartz-crystal oscillator standard to a counter during the time interval to be measured. Reference frequencies of 10, 1, and 0.1 MHz, derived from a 10-MHz oscillator, are commonly used.
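Referring back to the Wien-bridge balance above, with the practical choices R3 = R4 and C3 = C4 the balance frequency reduces to f = 1/(2πR4C4); a one-line check (values hypothetical):

import math

def wien_frequency(r4, c4):
    """Wien-bridge balance frequency with R3 = R4 and C3 = C4."""
    return 1 / (2 * math.pi * r4 * c4)

print(wien_frequency(1592, 0.1e-6))  # ~1000 Hz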

FIGURE 15.4.21 Wien frequency bridge.

FIGURE 15.4.22 Frequency-to-voltage converter.


FIGURE 15.4.23 Real-time analyzer using 30 attenuators and filters. (General Radio Co.)

Analog frequency circuits that produce an analog output proportional to frequency are used in control systems and to drive frequency-indicating meters. In Fig. 15.4.22, a fixed amount of charge proportional to C1(E − 2d), where d is the diode-voltage drop, is withdrawn through diode D1 during each cycle of the input. The current through diode D1, which is proportional to frequency, is balanced by the current through resistor R, which is proportional to eout. Therefore, eout = fRC1(E − 2d). Temperature compensation is achieved by adjusting the voltage E with temperature so that the quantity E − 2d is constant.

Frequency analyzers are used for measuring the frequency components and analyzing the spectra of acoustic noise, mechanical vibrations, and complex electric signals. They permit harmonic and intermodulation distortion components to be separated and measured. A simple analyzer consists of a narrow-bandwidth filter, which can be adjusted in frequency or swept over the frequency range of interest. The output amplitude in decibels is generally plotted as a function of frequency using a logarithmic frequency scale. Desirable characteristics include wide dynamic range, low distortion, and high stop-band attenuation. Analog filters that operate at the frequency of interest exhibit a constant bandwidth, for example, 10 Hz. The signal must be averaged over a period inversely proportional to the filter bandwidth if the reading is to be within given confidence limits of the long-time average value.

Real-time frequency analyzers are available which perform 1/3-octave spectrum analysis on a continuous real-time basis. The analyzer of Fig. 15.4.23 uses 30 separate filters, each having a bandwidth of 1/3 octave, to achieve the required speed of response. The multiplexer sequentially samples the filter output of each channel at a high rate. These samples are converted into a binary number by the A/D converter. The true rms values for each channel are computed from these numbers during an integration period adjustable from 1/8 to 32 s and stored in the memory. The rms value for each channel is computed from 1,024 samples for integration periods of 1 to 32 s.

Real-time analyzers are also available for analyzing narrow-bandwidth frequency components in real time. The required rapid response time is obtained by sampling the input waveform at 3 times the highest frequency of interest using an A/D converter and storing the values of a large number of samples in a digital memory. The frequency components can be calculated in real time by a microprocessor using fast Fourier transforms.

Time-compression systems can be used to preprocess the input signal so that analog filters can be used to analyze narrow-bandwidth frequency components in real time. The time-compression system of Fig. 15.4.24 uses a recirculating digital memory and a D/A converter to provide an output signal having the same waveform as the input with a repetition rate that is k times faster. This multiplies the output-frequency spectrum by a factor of k and reduces the time required to analyze the signal by the same factor. The system operates as follows. A new sample is entered into the circulating memory through gate A during one of each k shifting periods. Information from the output of the memory recirculates through gate B during the remaining k − 1 periods. Since information experiences k shifts between the addition of new samples in a memory of length k − 1, each new sample p is entered directly behind the previous sample p − 1, and therefore the correct order is preserved. (k − 1)/n seconds is required to fill an empty memory, and thereafter the oldest sample is discarded when a new sample is entered.
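The 1/3-octave filters of the real-time analyzer described above have geometrically spaced band edges (f_hi/f_lo = 2^(1/3)); the minimal sketch below computes them for an arbitrary center frequency.

def third_octave_edges(fc):
    """Lower and upper band edges of a 1/3-octave filter centered at fc."""
    half = 2 ** (1 / 6)  # each edge lies 1/6 octave from the center
    return fc / half, fc * half

print(third_octave_edges(1000.0))  # (~890.9 Hz, ~1122.5 Hz)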


FIGURE 15.4.24 Time-compression system.

Frequency synthesizers/function generators provide sine-wave, square-wave, triangle, ramp, or pulse voltage outputs, which are selectable over a wide frequency range and yet have the frequency stability and accuracy of a crystal-oscillator reference. They are useful for providing accurate reference frequencies and for making measurements on filter networks, tuned circuits, and communications equipment. High-precision units feature up to 11-decade digital frequency selection and programmable linear or logarithmic frequency sweep. A variety of units are available, which cover frequencies from a fraction of a hertz to tens of gigahertz.

Many synthesizers use the indirect synthesis method shown in Fig. 15.4.25. The desired output frequency is obtained from a voltage-controlled oscillator, which is part of a phase-locked loop. The selectable frequency divider is set to provide the desired ratio between the output frequency and the crystal reference frequency. Fractional frequency division can be obtained by selecting division ratio R alternately equal to N and N + 1 for appropriate time intervals.

Time-domain reflectometry is used to identify and locate cable faults. The cable-testing equipment is connected to a line in the cable and sends an electrical pulse that is reflected back to the equipment by a fault in the cable. The original and reflected signals are displayed on an oscilloscope. The type of fault is identified by the shape of the reflected pulse, and the distance is determined by the interval between the original and reflected pulses. Accuracies of 2 percent are typical.

A low-frequency voltmeter using a microprocessor has been developed that is capable of measuring the true rms voltage of approximately sinusoidal inputs at voltages from 2 mV to 10 V and frequencies from 0.1 to 120 Hz. A combination of computer algorithms is used to implement the voltage- and harmonic-analysis functions. Harmonic distortion is calculated using a fast Fourier transform algorithm. The total autoranging, settling, and measurement time is only two signal periods for frequencies below 10 Hz.
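Harmonic analysis of the kind performed by such an instrument can be sketched with an FFT. The fragment below is a simplified illustration (assuming an integer number of cycles in the record, and not the instrument's actual algorithm) that estimates harmonic distortion relative to the fundamental.

import numpy as np

def harmonic_distortion(x, fs, f0, n_harmonics=5):
    """Ratio of the rss of harmonics 2..n to the fundamental, from an FFT."""
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    bins = [round(k * f0 * len(x) / fs) for k in range(1, n_harmonics + 1)]
    fundamental = spectrum[bins[0]]
    harmonics = np.sqrt(sum(spectrum[b] ** 2 for b in bins[1:]))
    return harmonics / fundamental

# 60-Hz test signal with 5 percent third harmonic, sampled at 6 kHz for 1 s:
t = np.arange(6000) / 6000.0
x = np.sin(2 * np.pi * 60 * t) + 0.05 * np.sin(2 * np.pi * 180 * t)
print(harmonic_distortion(x, 6000, 60))  # ~0.05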

FIGURE 15.4.25 Indirect frequency synthesis.


CHAPTER 15.5

AC IMPEDANCE MEASUREMENT

Ramon C. Lebron

The generation of impedance gain and phase versus frequency plots of an electrical circuit (passive or active) is extremely important for characterizing small-signal behavior and evaluating stability. In space dc power distribution systems, such as the International Space Station power system, each of the power system components and dc-to-dc converters has to meet strict input and output impedance requirements over a specific frequency range to ensure end-to-end system stability.

Figures 15.5.1 and 15.5.2 show the technique used to measure ac impedance. This test can be performed with the device under test (DUT) operating at rated voltage and rated load. The network analyzer output provides a sinusoidal signal with a frequency that is varied over the desired range. The signal is amplified and fed into the primary of an audio isolation transformer. Figure 15.5.1 shows the method of ac voltage injection, in which the transformer secondary is connected in series with the power source. The voltage and current amplifiers are ac-coupled, and the network analyzer is set to generate the magnitude and phase plot of Channel 2 (ac voltage) divided by Channel 1 (ac current). Therefore the impedance magnitude plot is |Z| = |Vac|/|Iac| and the impedance phase plot is θZ = θVac − θIac for the required frequency values.

Figure 15.5.2 shows the method of ac current injection, in which a capacitor and a resistor are connected in series with the secondary of the transformer. The transformer and RC series combination is connected in parallel with the terminals of the DUT to inject a small ac current into the DUT. The network analyzer performs the same computations for |Z| and θZ at the desired frequency range. The voltage injection method is used for high-impedance measurements, such as the input impedance of a dc-to-dc converter, and the current injection method is better suited for low-impedance measurements, such as the output impedance of a dc-to-dc converter.
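At each frequency point the analyzer's computation reduces to a complex division; the sketch below is an illustrative transcription (hypothetical channel readings), not the instrument's firmware.

import cmath

def impedance_point(v_ac: complex, i_ac: complex):
    """|Z| = |Vac|/|Iac|; thetaZ = thetaV - thetaI (radians)."""
    z = v_ac / i_ac
    return abs(z), cmath.phase(z)

# Channel 2 reads 0.1 V at +0.5 rad; Channel 1 reads 2 mA at 0 rad:
print(impedance_point(0.1 * cmath.exp(0.5j), 0.002))  # (50.0, 0.5)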


FIGURE 15.5.1 Impedance measurement with voltage injection method.

FIGURE 15.5.2 Impedance measurement with current injection method.


SECTION 16

ANTENNAS AND WAVE PROPAGATION

An antenna can be thought of as the control unit between a source and some medium that will propagate an electromagnetic wave. In the case of a wire antenna, it can be seen as a natural extension of the two-wire transmission line, and, in a similar way, a horn antenna can be considered a natural extension of the waveguide that feeds the horn. The radiation properties of antennas are covered in Chap. 16.1. They include gain and directivity, beam efficiency, and radiation impedance. Depending on the antenna application, certain parameters may be more important. For example, in the case of receiving antennas that measure noise signals from an extended source, beam efficiency is an important measure of their performance. Chapter 16.2 examines the various types of antennas, including simple wire antennas, waveguide antennas useful in aircraft and spacecraft applications, and low-profile microstrip antennas. Finally, Chap. 16.3 treats the propagation of electromagnetic waves through or along the surface of the earth, through the atmosphere, and by reflection or scattering from the ionosphere or troposphere. More details of propagation over the earth through the nonionized atmosphere and propagation via the ionosphere are covered on the accompanying CD-ROM. D.C.

In This Section:

CHAPTER 16.1 PROPERTIES OF ANTENNAS AND ARRAYS 16.3
ANTENNA PRINCIPLES 16.3
REFERENCES 16.16

CHAPTER 16.2 TYPES OF ANTENNAS 16.18
WIRE ANTENNAS 16.18
WAVEGUIDE ANTENNAS 16.22
HORN ANTENNAS 16.25
REFLECTOR ANTENNAS 16.32
LOG-PERIODIC ANTENNAS 16.35
SURFACE-WAVE ANTENNAS 16.37
MICROSTRIP ANTENNAS 16.39
REFERENCES 16.43

CHAPTER 16.3 FUNDAMENTALS OF WAVE PROPAGATION 16.47
INTRODUCTION: MECHANISMS, MEDIA, AND FREQUENCY BANDS 16.47
REFERENCES 16.64
ON THE CD-ROM 16.66


On the CD-ROM:

Kirby, R. C., and K. A. Hughes, Propagation over the Earth Through the Nonionized Atmosphere, reproduced from the 4th edition of this handbook.

Kirby, R. C., and K. A. Hughes, Propagation via the Ionosphere, reproduced from the 4th edition of this handbook.


CHAPTER 16.1

PROPERTIES OF ANTENNAS AND ARRAYS

William F. Croswell

ANTENNA PRINCIPLES

The radiation properties of antennas can be obtained from source currents or fields distributed along a line or about an area or volume, depending on the antenna type. The magnetic field H can be determined from the vector potential as

H = (1/µ) ∇ × A    (1)

To determine the form of A, first consider an infinitesimal dipole of length L and current I aligned with the z axis and placed at the center of the coordinate system given in Fig. 16.1.1.

A = ẑ [µIL exp(−jkr)]/(4πr)    (2)

where k = 2π/λ, and r is the radial distance away from the origin in Fig. 16.1.1. From Eqs. (1) and (2) and Maxwell's equations, the fields of a short current element are

Hφ = [jkIL sin θ/(4πr)][1 + 1/(jkr)] exp(−jkr)

Eθ = [jkILη sin θ/(4πr)][1 + 1/(jkr) − 1/(k²r²)] exp(−jkr)    (3)

Er = [ILη cos θ/(2πr²)][1 + 1/(jkr)] exp(−jkr)

where η = √(µ/ε) and ε = permittivity of the source medium. By superposition, these results can be generalized to the vector potential of an arbitrarily oriented volume-current density J given by

A(x, y, z) = (µ/4π) ∫v′ J(x′, y′, z′) [exp(−jkR)/R] dx′ dy′ dz′    (4)

For a surface current, the volume-current integral Eq. (4) reduces to a surface integral of Js[exp(−jkR)]/R, and for a line current it reduces to a line integral of I[exp(−jkR)]/R. The fields of all physical antennas can be


FIGURE 16.1.1 Spherical coordinate system with unit vectors.

FIGURE 16.1.2 Equivalent aperture plane for far-field calculations: M = 2Es′ × n̂, J = 2n̂ × Hs′; and J = n̂ × Hs′, M = Es′ × n̂.

obtained from the knowledge of J alone. However, in the synthesis of antenna fields the concept of a magnetic volume current M is useful, even though the magnetic current is physically unrealizable. In a homogeneous medium the electric field can be determined by

E = (1/ε) ∇ × F    F = (ε/4π) ∫v′ M(x′, y′, z′) [exp(−jkR)/R] dx′ dy′ dz′    (5)
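Equation (3) can be evaluated directly. The sketch below (not from the handbook; illustrative values, free-space constants assumed) computes the three components of a z-directed current element and verifies that Eθ/Hφ approaches η in the far field.

import numpy as np

ETA = 376.73  # intrinsic impedance of free space, ohms

def short_dipole_fields(i0, length, f, r, theta):
    """H_phi, E_theta, E_r of a short z-directed current element, per Eq. (3)."""
    k = 2 * np.pi * f / 2.998e8                     # k = 2*pi/lambda
    phase = np.exp(-1j * k * r)
    h_phi = (1j * k * i0 * length * np.sin(theta) / (4 * np.pi * r)) \
        * (1 + 1 / (1j * k * r)) * phase
    e_theta = (1j * k * i0 * length * ETA * np.sin(theta) / (4 * np.pi * r)) \
        * (1 + 1 / (1j * k * r) - 1 / (k * r) ** 2) * phase
    e_r = (i0 * length * ETA * np.cos(theta) / (2 * np.pi * r ** 2)) \
        * (1 + 1 / (1j * k * r)) * phase
    return h_phi, e_theta, e_r

h, e, _ = short_dipole_fields(1.0, 0.01, 300e6, 100.0, np.pi / 2)
print(abs(e / h))  # ~376.7 ohms in the far field (kr >> 1)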

Examples of antennas that have a dual property are the thin dipole in free space and the thin slot in an infinite ground plane. The fields of an electric source J can be determined using Eqs. (1) and (4) and Maxwell's equations. From the far-field conditions and the relationships between the unit vectors in the rectangular and spherical coordinate systems, the far fields of an electric source J are

EθJ = ηHφJ = [−jηk exp(−jkr)/(4πr)] ∫v′ (Jx′ cos θ cos φ + Jy′ cos θ sin φ − Jz′ sin θ) exp[jk(x′ sin θ cos φ + y′ sin θ sin φ + z′ cos θ)] dx′ dy′ dz′    (6)

EφJ = −ηHθJ = [jηk exp(−jkr)/(4πr)] ∫v′ (Jx′ sin φ − Jy′ cos φ) exp[jk(x′ sin θ cos φ + y′ sin θ sin φ + z′ cos θ)] dx′ dy′ dz′    (7)

In a similar manner the radiated far fields from a magnetic current M are

EθM = ηHφM = [−jk exp(−jkr)/(4πr)] ∫v′ (My′ cos φ − Mx′ sin φ) exp[jk(x′ sin θ cos φ + y′ sin θ sin φ + z′ cos θ)] dx′ dy′ dz′    (8)


EφM = −ηHθM = [−jk exp(−jkr)/(4πr)] ∫v′ (Mx′ cos θ cos φ + My′ cos θ sin φ − Mz′ sin θ) exp[jk(x′ sin θ cos φ + y′ sin θ sin φ + z′ cos θ)] dx′ dy′ dz′    (9)

Currents and Fields in an Aperture

For aperture antennas such as horns, slots, waveguides, and reflector antennas, it is sometimes more convenient or analytically simpler to calculate patterns by integrating the currents or fields over a fictitious plane parallel to the physical aperture than to integrate the source currents. Obviously, the fictitious plane can be chosen to be arbitrarily close to the aperture plane. If the integration is chosen to be an infinitesimal distance away from the aperture plane, the fields to the right of s′ in Fig. 16.1.2 can be found using either of the equivalent currents

Ms′ = 2Es′ × n̂    (10a)

Js′ = 2n̂ × Hs′    (10b)

or

Js′ = n̂ × Hs′  and  Ms′ = −n̂ × Es′    (10c)

The combined electric and magnetic current given in Eq. (10c) is the general Huygens' source and is generally useful for aperture problems where the electric and magnetic fields are small outside the aperture. In limited cases, the waveguide without a ground plane, a small horn, and a large tapered aperture can be approximated this way.

Far Fields of Particular Antennas

From the field equations stated previously or coordinate transformations of these equations, the far-field pattern of antennas can be determined when the near-field or source currents are known. Approximate forms of these fields or currents can often be estimated, giving good pattern predictions for practical purposes.

Electric Line Source. Consider an electric line source (current filament) of length L centered on the z′ axis of Fig. 16.1.1 with a time-harmonic current I(z′)e^(jωt). The fields of this antenna are, from Eq. (6),

Eθ = [jηk sin θ exp(−jkr)/(4πr)] ∫_{−L/2}^{L/2} I(z′) exp(jkz′ cos θ) dz′    Eφ = 0

For the short dipole where kL

TABLE 17.1.7 Error Performance Objectives*

Rate, Mb/s                       1.5 to 5        >5 to 15        >15 to 55         >55 to 160        >160 to 3500
Bits per block                   2000 to 8000    2000 to 8000    4000 to 20,000    6000 to 20,000    15,000 to 30,000
Errored second ratio             4%              5%              7.5%              16%               not specified
Severely errored second ratio    0.2%            0.2%            0.2%              0.2%              0.2%
Background block error ratio     0.03%           0.02%           0.02%             0.02%             0.01%

An errored block is a block in which 1 or more bits are in error. An errored second is a 1-s period with one or more errored blocks. A severely errored second is a 1-s period which contains 30 percent or more errored blocks, or four contiguous blocks each of which has more than 1 percent of its bits in error, or a period of loss of signal. A background errored block is an errored block not occurring as part of a severely errored second.

*An additional requirement based on a one-minute interval will likely be deleted as redundant in practice.


enough (perhaps 2 s), the channel bank causes the trunks it serves to be declared "busy," lights a red alarm light, and sounds an office alarm. It also transmits a special code that causes the other channel bank to take the trunks out of service, light a yellow alarm light, and sound an alarm. Maintenance personnel then clear the trouble, perhaps by patching in a spare T1 line, and proceed with fault location and repair. Channel banks can be checked by looping, i.e., connecting digital output and input. Repeatered-line fault-location techniques were discussed earlier.

In higher-speed systems, automatic line and multiplex protection switching is often provided. A typical line-protection switch monitors for violations of the redundancy rules of the line signal on the working line at the receiving (tail) end. When violations in excess of the threshold are detected, a spare line is bridged on at the transmitting (head) end, and if a violation-free signal is received on this line, the tail-end switch to spare is completed. If the spare line also has violations, there is probably an upstream failure and no switch is performed. A multiplex-protection switch is typically based on a time-shared monitor that evaluates each of several multiplexers and demultiplexers in turn by pulse-by-pulse comparison of the actual output with correct output based on current input. Multiplex monitors also usually check the incoming high- and low-speed signals using the line-code redundancy.

As the digital network has grown, there has been an increasing use of maintenance centers, to which all alarms in a geographic region are remoted, and which are responsible for dispatching and supervising maintenance personnel, directing restoration of failed facilities over other facilities or routes, and making rearrangements for other purposes as well. There is also increasing provision, in network design, of alternative routes, sometimes by routing lines to form a ring, so that there are two physically separate paths between any two offices.

Synchronous Digital Hierarchy and SONET. The existing plesiochronous digital network has grown piecemeal over decades, with the parameters of new systems reflecting the technology and needs at the time of their development. In the late 1980s, a worldwide effort brought forth a new hierarchy for higher-rate systems to provide capabilities not possible in the existing network. In this new hierarchy, multiplexing is by interleaving of 8-b bytes, as in the primary-rate multiplexes, as opposed to the bit interleaving used elsewhere in the existing network. Further, similar formats are used for multiplexing at all levels, and it is intended that new transmission systems will be at hierarchical levels. Another important feature of the new hierarchy is an overall plan for monitoring and controlling a complex network, and the inclusion of enough overhead in the formats to support it. In spite of the name, the new hierarchy allows nonsynchronous signals based on the existing hierarchy to enter, and multiplexing throughout includes enough justification capability to accommodate the small frequency deviations characteristic of reference clocks.

The new hierarchy starts at 51.84 Mb/s, and all higher rates are an integral multiple of this lowest rate. Multiples up to 255 have been envisioned, and structures for several rates have been standardized within the United States. The rates of most interest are shown in Table 17.1.8. A single frame of the STS-1 signal consists of 810 bytes, as shown in Fig.
17.1.10. The bytes appear on the transmission line read from left to right, starting with the first row. The transported signal occupies 774 bytes of the frame, with the remainder of the frame dedicated to overhead. The transported signal plus the path overhead, the payload, are intended to be transported across the network without alteration as the signal is multiplexed to, and recovered from, higher levels. In order to accommodate frequency and phase variations in the network, the 783 bytes of payload can start anywhere within the 783 byte locations allocated to the payload, and continue into the next frame. The starting point is signaled by a payload pointer included in the line overhead, so the proper alignment can be recovered at the receiving end. This pointer, as well

TABLE 17.1.8 Major Rates in the Synchronous Digital Hierarchy

Line rate, Mb/s    US (SONET) designation    CCITT designation    Comment
51.84              STS-1                     (none)               Used to carry one 44.736-Mb/s (DS-3) signal
155.52             STS-3                     STM-1                Used to carry one 139.264-Mb/s signal
622.08             STS-12                    STM-4                Used for fiber-optic systems
2488.32            STS-48                    STM-16               Used for fiber-optic systems


FIGURE 17.1.10 An STS-1 frame: (1) Section overhead (9 bytes); (2) line overhead (18 bytes); (3) path overhead (9 bytes); (4) transported signal (774 bytes). Each square represents 1 byte.

as the remainder of the line and section overhead bytes, is provided for the use of multiplex and line equipment and will normally be changed several times as the frame passes through the network.

The path overhead is placed on the signal at the path-terminating equipment, where the transported signal is assembled and embedded in the frame. It is not intentionally modified as the STS-1 frame passes through subsequent multiplexes and line systems, but it can be read and used at intermediate points. This overhead is, from the point of view of the hierarchy, end-to-end information. It contains signals identifying the structure of the frame to aid in retrieving the embedded signal; status and maintenance indications, such as loss of signal for the opposite direction of transmission; a parity check on the previous frame; and provision for a message channel for use by the path-terminating equipment.

Line overhead information may be inserted or modified when the STS-1 signal is multiplexed to a higher rate or transferred between higher-rate signals. Normally the capability to switch the higher-rate signal to a standby facility in case of failure is provided at such points, so the line overhead includes signaling for coordinating the operation of this protection switching, as well as functionality similar to that of the path overhead but for use over the shorter "line." The section overhead includes the framing alignment pattern by which the frame is located, and functionality similar to that of the path overhead but for use and modification within individual sections, which end at regenerators or multiplexes.
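The frame numbers quoted above can be checked with simple arithmetic; the sketch below assumes the standard 9-row-by-90-column STS-1 layout and an 8000-frame/s (one frame per 125 μs) frame rate:

    # Arithmetic behind the STS-1 figures quoted above.
    ROWS, COLS = 9, 90                 # standard STS-1 frame: 9 rows x 90 columns of bytes
    FRAME_BYTES = ROWS * COLS          # 810 bytes per frame
    FRAMES_PER_S = 8000                # one frame every 125 microseconds

    print(FRAME_BYTES * 8 * FRAMES_PER_S / 1e6)   # 51.84 (Mb/s line rate)

    section_oh, line_oh, path_oh = 9, 18, 9       # overhead bytes per frame (Fig. 17.1.10)
    payload = FRAME_BYTES - section_oh - line_oh  # 783 bytes available to the payload
    print(payload, payload - path_oh)             # 783 774 (payload, transported signal)

    # All higher SONET rates are integer multiples of the STS-1 rate (Table 17.1.8)
    for n in (3, 12, 48):
        print(f"STS-{n}: {n * 51.84:.2f} Mb/s")   # 155.52, 622.08, 2488.32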


TABLE 17.1.9 Representative North American Digital Systems for Paired Cable

1.544 Mb/s (24 voice channels at 64 kb/s): T1, T1OS (T1-outstate). Line format: bipolar with 1-in-8 1s density, 15 0s maximum. Usual medium: wire pairs in a single cable (but the two directions in different units). Typical repeater spacing: 1 mi (on 22 gauge); typical section loss: 32 dB.

3.152 Mb/s (48 voice channels): T1C, T1D, T148. Line formats: bipolar; 4B3T; duobinary; modified duobinary. Usual medium: as for T1, but also with shielded (screened) units. Typical repeater spacing: 1 mi (on 22 gauge); typical section loss: 48 dB.

6.443 Mb/s (96 voice channels): T1G. Line format: quaternary. Usual medium: as for T1C. Typical section loss: 48 dB.

Frames for higher-rate signals are generally similar. An STM-N frame has nine rows and N × 270 columns, of which N × 9 are for section overhead. The rather complex structure is based on virtual containers, tributary units, and administrative units, which are combinations of the user signal with the overheads defined above, appropriate to the administration of various types of paths.

Line Systems on Wire Cable. Systems providing trunks on wire pairs are generally designed to operate on the same cable types used for voice trunks and to share such cables with voice trunks. Large numbers of such systems, with the characteristics indicated in Table 17.1.9, are in service in North America and Japan, although fiber systems are increasingly being used in new installations. Wire-pair systems at 2.048 Mb/s are common in Europe. All the above are four-wire systems, using one pair for each direction of transmission. Two-wire systems providing 144 kb/s in both directions on a single pair have been specified and developed for use as ISDN loops, but are little deployed as yet. These two-wire systems use echo cancelers, or time-compression multiplexing in which the pair is used alternately in each direction (at about twice the average bit rate) with buffering at the ends.

Systems on Fiber Optic Cable. Fiber-optic systems, operating digitally and using one fiber for each direction of transmission, have developed extremely rapidly since their introduction in 1977, with steadily increasing capacity and, correspondingly, decreased per-channel cost. Systems for trunks and loops have been installed at many of the hierarchical rates (Table 17.1.1), but systems at even higher rates are most prevalent. The characteristics of such systems are summarized in Table 17.1.10, and some specific systems are shown in Table 17.1.11. A branching unit, including 296-Mb/s regenerative repeaters, used in TAT-8 (transatlantic cable no. 8) is shown in Fig. 17.1.11. Similar branching units in TAT-9 operate at 591 Mb/s and include some multiplexing functions as well.

All terrestrial and submarine systems have customarily used intermediate regenerators when the system length requires gain between the terminals. Systems using optical amplifiers instead of regenerators have recently appeared, and the characteristics of one of these are also included in Table 17.1.11. In such systems, erbium-doped optical amplifiers are used at intermediate points to overcome the loss of the dispersion-shifted fiber, with regeneration only at the ends. Figure 17.1.12 shows an amplifier designed for the system shown in the table. Typical output power of such amplifiers is +1 to +3 dBm. Even these systems do not come close to exploiting the theoretical capacity of the fibers, and further developments are to be expected. Wavelength-division multiplexing, in which two or more transmitter-receiver pairs operate over a single fiber but at different wavelengths, is one way of tapping this capacity and has seen limited use. The use of solitons is being explored, and the record as of early 1993 for simulated long-distance transmission in the laboratory, 20 Gb/s over 13,000 km, used this technology.
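The regenerator spacings in Tables 17.1.10 and 17.1.11 reflect a simple optical power budget. A minimal sketch, assuming illustrative launch power, receiver sensitivity, margin, and fiber loss (not values taken from the tables):

    def max_span_km(tx_dbm, rx_sens_dbm, fiber_db_per_km, margin_db=3.0):
        """Longest unrepeatered span a loss-limited optical link can support."""
        budget_db = tx_dbm - rx_sens_dbm - margin_db
        return budget_db / fiber_db_per_km

    # Illustrative numbers only: a 1550-nm system with 0 dBm launch power,
    # a -30 dBm receiver, and 0.25 dB/km installed fiber loss.
    print(max_span_km(0.0, -30.0, 0.25))   # 108.0 km, comparable to Table 17.1.10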


TABLE 17.1.10 Parameters of Fiber-Optic Systems

Wavelength, nm    Fiber type      Bit rate, Mb/s    Maximum regenerator spacing, km*
850               Graded index    2–140             15–20
1300              Graded index    2–140             45–60
1300              Single mode     140–1700          25–60
1550              Single mode     1.5–2500          50–150

*Lower spacings generally correspond to higher bit rates.

TABLE 17.1.11 Some Fiber-Optic Systems

TAT-10: long undersea routes; 591.2 Mb/s per fiber; 1550 nm; repeater spacing 110 km.
FT-2000: short and long terrestrial routes; 2488.32 Mb/s per fiber; 1310 or 1550 nm; repeater spacing 60 km at 1310 nm, 84 km at 1550 nm; a variety of terminals is available.
TAT-12: long undersea routes; 5 Gb/s per fiber; 1550 nm; repeater spacing 33–45 km; uses optical amplifiers.

FIGURE 17.1.11 TAT-8 (transatlantic telephone cable no. 8) branching repeater with cable-laying ship in background. (From AT&T. Used with permission)


FIGURE 17.1.12 Amplifier pair for TAT-12 (AT&T section).

Terrestrial Radio. Frequencies allocated to telecommunications in the United States are shown in Table 17.1.2. Typical systems for analog signals modulate a carrier using low-index FM with a signal consisting of one or more multiplexed mastergroups, and occupy a bandwidth of two or more times 4 kHz for each voice channel. Systems with very linear amplifiers have also used single-sideband AM, with a resulting bandwidth of closer to 4 kHz per voice channel. While analog microwave radio once carried the bulk of long-haul telecommunications in the United States and many other countries, it has been mostly displaced in long-haul applications by optical fiber, particularly in the United States, and in short-haul applications by digital radio, owing to the need to interconnect with digital switches and other digital transmission systems.

A block diagram of a digital radio regenerator is shown in Fig. 17.1.8. The regenerator operates at i.f., and a complete station involves frequency conversion to and from rf, as well as receiving and transmitting antennas. An end station uses the transmitting and receiving portions of the regenerator separately to create and demodulate the transmitted and received signals.

Communications Satellite Systems. While the first experimental communications satellites were in low earth orbit, commercial satellites have almost uniformly been in geostationary orbit, 22,300 mi above the equator. (The exceptions were in polar orbits, for better visibility from the northern polar regions.) In such an orbit the satellite appears stationary from the earth, so a permanent communication link can be established using a single satellite and earth stations with very directive, stationary (or almost stationary) antennas. The disadvantages of this orbit are the high loss and delay resulting from the long distance the signal must travel. Table 17.1.12 lists representative applications of communication satellites, including current proposals for new low-earth-orbit systems.

Communications satellites receive signals from an earth station and include transponders, which amplify the signal, translate it in frequency, and retransmit it to the receiving earth station, thus making effective use of the line-of-sight microwave bands without requiring the erection of relay towers. The transponders are powered from solar cells, with batteries for periods of eclipse. Spin-stabilized satellites are roughly cylindrical and spin at about 60 r/min, except for a "despun" portion, including the antennas, that is pointed at the earth. Three-axis-stabilized satellites have internal high-speed rotating wheels for stability, and solar cells on appendages that unfold after the satellite is in orbit. Adjustments in the position and orientation of a satellite in orbit are accomplished under control of the telemetry, tracking, and control (TTC) station on the earth, and the exhaustion of fuel for this purpose is the normal cause of end of life of the satellite, typically 10 to 12 years. (At end of life, the TTC moves the satellite to an unused portion of the orbit, where it remains, an archeological resource for future generations.) High reliability is necessary, and on-board spares for the electronics, switchable from the TTC at a ratio of 50 to 100 percent, are typically provided. Table 17.1.13 gives the characteristics of two current satellites. Most civilian communication satellites have used the common-carrier bands of 5925 to 6425 MHz in the uplinks and 3700 to 4200 MHz in the downlinks.
Now, direct broadcast satellites (DBS) use the 11- and 14-GHz bands for down- and uplinks, respectively. Since these bands are not so widely used in terrestrial


TABLE 17.1.12 Representative Applications of Communication Satellites

Intercontinental telephone trunking. Point-to-point, 2-way. Earth-station antennas to 30 m; FDMA, TDMA, and FM with a single channel per transponder are used. Status: the Intelsat system; widely used where fiber-optic cables are not available, for handling peak loads on cables, and during repair of failed cables.

Intercontinental TV transmission. Point-to-point, 1-way. Analog TV signals using FM, with a single channel per transponder. Status: carried along with voice on the Intelsat satellites; the primary way of providing this service.

National telephone trunks. Point-to-point, 2-way. A wide variety of antenna sizes and access methods have been used. Status: no longer used in the United States, primarily because voice users dislike the delay; still used in countries with difficult terrain, long distances between population centers, or sparse networks.

Distribution of TV signals to local broadcast stations or CATV distribution centers. Point-to-multipoint, 1-way. Smaller receiving antennas; analog TV signals using FM, with a single channel per transponder. Status: major provider of this service; economics generally favorable compared with cable and microwave radio.

Business and educational TV distribution, typically directly to the viewing site. Point-to-multipoint, 1-way. Originally analog TV using FM, but increasingly digital, using coders that remove redundancy to encode the signal into 6 Mb/s or less, allowing multiple channels per transponder. Status: major provider of this comparatively new service.

Data links, international and domestic. Point-to-point, 2-way. Low-rate data channels can be multiplexed to a high rate to fill a transponder, or FDMA or TDMA can be used. Status: has seen considerable use; with proper protocols, delay is not a problem in most applications, but fiber-optic cables are eroding the market.

Maritime mobile telephone. Fixed point to mobile, 2-way. Operates at 1.5 GHz with a geosynchronous satellite. Status: via the INMARSAT system, the major modality for ship-to-shore telephony.

Paging, short message. Fixed point to mobile, 1-way. Would operate at 150 MHz with a total bandwidth of 1 MHz, using low-earth-orbit satellites. Status: proposal; the intent is to provide paging and limited message capability to personal receivers.

Terrestrial mobile. Fixed point to mobile, 2-way. Would operate at about 1.5 GHz with a total bandwidth of about 20 MHz; would use from 12 to 30 satellites in low earth orbit, with circular polarization and low-directivity antennas on the mobile stations. Status: proposal; the intent is to provide mobile service roughly comparable to cellular, but available without the need for local terrestrial construction and network access.


TABLE 17.1.13 Representative Communications Satellites

microwave relay, interference problems are less, although rain attenuation is much higher. All frequencies are subject to sun-transit outage when the satellite is directly between the sun and the receiving earth station, so that the receiving antenna is pointing directly at the noisy sun. This occurs for periods of up to 1/2 h per day for several days around the equinoxes. The effect can be avoided by switching to another, distant satellite during the sun-transit period. The propagation delay between earth stations is about 0.25 s in each direction for geostationary


FIGURE 17.1.13 Satellite transponder.

satellites. This delay is of no consequence for one-way television transmission but is disturbing to telephone users. Some satellites use each frequency band twice, once in each polarization. Earth antenna directivity permits reuse of the same frequencies by different satellites as long as the satellites are not too close in orbit (2° is the limit for domestic U.S. satellites in the 4- to 6-GHz band). Further frequency reuse is possible in a single satellite by using more directive satellite antennas, which direct separate beams to different earth areas. Although Intelsat has made use of spot beams, most present satellite antennas have beamwidths covering upward of 1000 mi on earth. A simplified transponder block diagram is given in Fig. 17.1.13.

Transponder utilization and multiple access. A transponder can be used for a single signal (single-carrier operation), which may be either frequency- or time-division multiplexed. Such signals have included a single TV signal; two or three 600-channel analog mastergroups multiplexed together and used to frequency modulate a carrier; thirteen 600-channel mastergroups using compandors and single-sideband amplitude modulation; and a digital signal with rates up to 14.0 Mb/s used to modulate a carrier using quaternary phase-shift keying (QPSK) or coded octal phase-shift keying. Single-carrier operation can be either point-to-point (as for normal telecommunication) or broadcast (as for distributing TV programs).

Transponders can also be used in either of two multiple-access modes, in which the same transponder carries (simultaneously) signals from several different earth stations. In frequency-division multiple access (FDMA) the frequency band of each transponder is subdivided and portions assigned to different earth stations. Each station can then transmit continuously in its assigned frequency band without interfering with the other signals. All earth stations receive all signals but demodulate only the signals directed to them. In the limit of subdivision, one voice channel can be placed on a single carrier (single channel per carrier, or SCPC). As the high-power amplifiers (HPA) in the earth station and the satellite are highly nonlinear, power levels must be reduced considerably ("backed off") below the saturation level to reduce intermodulation distortion between the several carriers. It is also possible to use demand assignment, in which a given frequency slot can be reassigned among several earth stations as traffic demands change.

In TDMA (time-division multiple access) each earth station uses the entire bandwidth of a transponder for a portion of the time, as illustrated in Fig. 17.1.14. This arrangement implies digital transmission (such as QPSK) with buffer memories at the earth stations to form the bursts. A synchronization arrangement that controls the time of transmission of each station is also required. As at any given time only a single carrier is involved, less backoff is required than with FDMA, allowing an improved signal-to-noise ratio. Demand assignment can be realized by reassigning burst times among the stations in the network. Satellite-switched TDMA (SSTDMA), in which a switch in the satellite routes bursts among spot beams covering different terrestrial areas, is a feature of Intelsat VI.

Transmission considerations. The free-space loss between a geostationary satellite and the earth is about 200 dB.
To overcome this large loss, earth stations for telecommunications trunks have traditionally used large parabolic antennas (10 to 30 m in diameter), high output power (up to several kilowatts), and low-noise receiving amplifiers (cryogenically cooled in some cases). Transponder output power is limited to the power available from the solar cells; therefore, downlink thermal noise often accounts for most of the system noise, with intermodulation in the transponder power amplifier a significant limiting factor. Consequently, the capacity of


FIGURE 17.1.14 Satellite time-division multiple access (TDMA). (From Digital Communications Corporation; used by permission)

a satellite channel is often limited by the received signal-to-noise ratio (power-limited) rather than by the bandwidth of the channel. For applications other than high-capacity trunking, the cost of large antennas at the earth stations is often prohibitive, so lower capacity is accepted, and received power may be increased by dedicating more power to the transponder or by the use of spot beams, as the economics of the application dictate. Smaller antennas are less directive, possibly causing interference to adjacent satellites unless the station is receive-only. Therefore VSATs (very small aperture terminals), which may have antennas as small as 1 m, typically operate in the higher frequency bands, where the directivity of smaller antennas may be adequate. Some applications, including the proposals included in Table 17.1.12, use much lower frequencies and accept, or even exploit, the lesser directivity.
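The two round numbers above (roughly 200 dB of free-space loss and 0.25 s of one-way delay) can be verified from first principles; the 38,000-km slant range below is an assumed representative value:

    import math

    C = 3.0e8  # speed of light, m/s

    def free_space_loss_db(freq_hz, dist_m):
        # Free-space path loss: 20 log10(4*pi*d / wavelength)
        wavelength = C / freq_hz
        return 20 * math.log10(4 * math.pi * dist_m / wavelength)

    SLANT_RANGE_M = 38.0e6   # assumed slant range; geostationary altitude is 35,786 km

    print(round(free_space_loss_db(4e9, SLANT_RANGE_M), 1))   # ~196 dB (4-GHz downlink)
    print(round(free_space_loss_db(6e9, SLANT_RANGE_M), 1))   # ~200 dB (6-GHz uplink)

    # One-way earth-station-to-earth-station delay (up plus down)
    print(round(2 * SLANT_RANGE_M / C, 3))                    # ~0.253 s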


Source: STANDARD HANDBOOK OF ELECTRONIC ENGINEERING

CHAPTER 17.2

SWITCHING SYSTEMS

Amos E. Joel, Jr.

A telecommunication service that includes directing a message from any input to one or more selected outputs requires a switching system. The terminals are connected to the switching system by loops, which together with the terminals are known as lines. The switching systems at nodes of a network are connected to each other by channels called trunks. This section deals primarily with systems that provide circuit switching, i.e., provision of a channel that is assigned for the duration of a call. Other forms of switching are noted later. Switching systems find application throughout a communication network. They range from small and simple manual key telephone systems or PBXs to the largest automatic local and toll switching systems.

SWITCHING FUNCTIONS

Introduction

A switching system performs certain basic functions plus others that depend on the type of services being rendered. Generally switching systems are designed to act on each message or call, although there are some switches that perform less often, e.g., to switch spare or alternate facilities. Each function is described briefly here and in greater detail in specific paragraphs devoted to each function. A basic function of a circuit telecommunication switching system is connection by the switching fabric,* the transfer of communication from a source to a selected destination. Vital to this basic function are the additional functions of signaling and control (call processing) (Fig. 17.2.1). Other functions are required to operate, administer, and maintain the system.

Signaling

Automatic switching is remote-controlled switching. Transfer of control information from the user to the switching office and between offices requires electrical technology and a format. This is known as signaling, and it is usually a special form of data communication. Voice recognition is also used.

*The term switching fabric will be used in these paragraphs to identify the implementation of the connection function within a switching system. The term communications or switched network will refer to the collection of switching systems and transmission systems that constitute a communications system.



FIGURE 17.2.1 Basic switching functions in circuit switching.

Originally, signaling was developed to accommodate the type of switching technology used for local switching. Most of these systems used dc electric signals. Later, as transmission distances grew, signaling using single- and multiple-frequency tones in the voice band was developed. Most recently, signaling between offices using digital signals has been introduced over dedicated networks, distinct from the talking channels. As dialing between more distant countries became feasible, specific international signaling standards were set. These standards were necessarily different from national signaling standards, since it was necessary to provide for differences in calling devices (dials) and in call handling, such as requests for language assistance or restrictions in routing.

Control

Control of switching systems and their application is called system control: the overall technique by which a system receives and interprets signals to take the required actions and to direct the switching fabric to carry them out. In the past, the control of switching systems was accomplished by logic circuits. Virtually all systems now employ stored-program control (SPC). By changing and adding to a program, one can modify the behavior of a switching system faster and more efficiently than with wired logic control.

Switching Fabrics

The switching fabric provides the function of connecting channels within a circuit-switching system. Store-and-forward or packet-switching systems do not need complex switching fabrics but do require connecting networks such as a bus structure. Switching systems have generally derived their names or titles from the type of switching technology used in the switching fabric, e.g., step-by-step, panel, and crossbar. These devices constitute the principal connective elements of switching fabrics.

Two-state devices that change in electrical impedance are known as crosspoints. Typically, electromechanical crosspoints are metallic and go from almost infinite to zero impedance; electronic crosspoints change impedance by several orders of magnitude. The off-to-on impedance ratio must be great enough to keep intelligible signals from passing into other paths in the network (crosstalk). A plurality of crosspoints accessible to or from a common path or link is known as a switch or, for a rectangular array, a switch matrix. A crosspoint may contain more than one gate or contact; the number depends on the information switched and the technology.

Generally a number of stages of switches are used to provide a network in order to conserve the total number of crosspoints required. For connecting 100 inputs to 100 outputs, a single switch matrix requires 100 × 100 = 10,000 crosspoints. A two-stage fabric requires only 2000 crosspoints when formed with twenty 10 × 10 matrices. In a two-stage fabric, an output of each first-stage switch is connected to an input of a second-stage switch via a link. There is a connectable path for each and every input to each and every output. Since each input has access to every output, the network is characterized as having full access. However, two paths may not simultaneously exist between two inputs on the same first-stage switch and two outputs of a single output-stage switch (there is only one link between any first- and second-stage switch). A second call cannot be placed, and the network is said to be a blocking network.

By making the switches larger and adding links to provide parallel paths, the chance of incurring a blocking condition is reduced or eliminated. A three-stage Clos nonblocking fabric can be designed requiring only 5700 crosspoints. Even fewer crosspoints are needed if existing internal paths can be rearranged to accommodate a new connection that would otherwise encounter blocking. The design of most practical switching fabrics includes a modest degree of blocking in order to provide an economical design. Large central-office switching networks may have more than 100,000 lines and trunks to be interconnected and provide tens of thousands of simultaneous connections. Such networks typically require six to eight stages of switches and are built to carry loads that result in less than 2 percent of call attempts in the peak traffic period being blocked.
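The crosspoint counts quoted above can be reproduced directly. A minimal sketch, assuming the usual symmetric three-stage Clos arrangement with 10-input first-stage switches, so that the nonblocking condition m = 2n − 1 gives 19 middle switches:

    def clos_crosspoints(N, n, m):
        # Symmetric three-stage Clos fabric: N/n input switches of size n x m,
        # m middle switches of size (N/n) x (N/n), and a mirror-image output
        # stage. Strictly nonblocking when m >= 2n - 1 (Clos, 1953).
        r = N // n
        return 2 * r * n * m + m * r * r

    print(100 * 100)                      # single-stage matrix: 10,000 crosspoints
    print(20 * 10 * 10)                   # two-stage, twenty 10 x 10 matrices: 2000 (blocking)
    print(clos_crosspoints(100, 10, 19))  # nonblocking Clos, n = 10, m = 2n - 1: 5700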
Downloaded from Digital Engineering Library @ McGraw-Hill (www.digitalengineeringlibrary.com) Copyright © 2004 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use as given at the website.

Christiansen_Sec_17.qxd

10/28/04

11:09 AM

Page 17.37

SWITCHING SYSTEMS SWITCHING SYSTEMS

17.37

Network Control

While the switching system as a whole requires a control, the control required for a switching fabric may be separated in part or in its entirety from the system control function. The most general network control accepts the address of the input(s) and output(s) for which an interconnection is required and performs all the logic and decision functions associated with the process of establishing (and later releasing) connections. The control for some networks may be common to many switches or individual to each switch. Self-routing is also used in fabrics where terminal addresses are transmitted through and acted on by the switches.

Some form of memory is involved with all networks. It may be intimately associated with the crosspoint device employed, e.g., to hold it operated, or it may be separated in a bulk memory. The memory keeps a record of the devices in use and of the associated switch paths. (In some electronic switching systems it may also designate a path reserved for future use.)

Operation, Administration, and Maintenance (OAM)

When switching systems are to be used by the public, high-quality continuous service, day in and day out over every 24-h period, is required. A system providing such reliable service requires additional functions and features. Examples are continuity of service in the presence of device or component failure and capability for growth while the system is in service. Separate maintenance and administrative functions are introduced into systems to monitor, test, and record, and to provide human control of the service-affecting conditions of the system. These functions, together with a human input/output (I/O) interface, constitute the basic maintenance functions needed to detect, locate, and repair system and component faults.

In addition to specific maintenance functions, redundancy in the switching system is usually necessary to provide the desired quality of service. Complete duplication of an active system with a standby system will protect against one or more failures in one system but presents severe recovery problems in the event of a simultaneous failure of both systems. Judicious subdivision of the system into parts that can be reconfigured (e.g., either of a pair of central processors may work with either of a pair of program memories) can greatly increase the ability of the system to continue operation in the presence of multiple faults.

Where there are many switching entities in a telecommunications network, and as systems have become more reliable and training more expensive, the centralization of maintenance has become a more efficient technique. It ensures better and more continuous use of training and can also provide access to more extensive automated data bases that benefit from more numerous experiences. For public operation, a basic subset of administration and operation features has become accepted as required. These include the collection of traffic data, service-evaluation data, and data for call billing.

SWITCHING FABRICS

Three different aspects will be considered in the design of switching fabrics: (1) the types of switching fabrics, (2) the technology of the devices, and (3) the topology of their interconnection.

Types of Switching Fabrics

The three types of switching fabrics are known by the manner in which the message passes through the network. In space-division fabrics, analog or digital signals representing messages pass through a succession of operated crosspoints that are assigned to the call for all or most of its duration. In virtual circuit-switching systems, previously assigned crosspoints are reoperated and released during successive message segments. In time-division fabrics, analog or digital signals representing periodically sampled message segments from a plurality of time-multiplexed inputs are switched to the same number of outputs. The use of equal-length segments assigned in time to time slots identifies them for address purposes in the system control.

There are two kinds of time-division switching elements, referred to as space switches and time switches (or time-slot interchanges, TSI). The space switch (also known as a time-multiplexed switch, TMS), shown in


FIGURE 17.2.2 Time-multiplex switch (TMS): space switch.

Fig. 17.2.2, operates like the normal space-switch matrix, but with each new time slot the electronic gates are reconfigured to provide a new set of input and output connections. The two-dimensional space switch thus has an added third dimension of time. The time-slot interchange uses a buffer memory into which one frame of input information is stored. Under direction of the contents of the control memory, transfer logic reorders the sequence of information stored in the buffer, as shown in Fig. 17.2.3. To ensure the timely progression of signals through the TSI, two memories are used, one being loaded while the other is being read. The TSI necessarily creates delays in handling the information stream. Also, with storage of message (voice) samples in T stages, a delay of at least one frame (e.g., 125 μs) is introduced into transmission by each switch through which the message passes.

Channels arriving at the switch in time-multiplexed form can be further multiplexed (and demultiplexed) into frames of greater (or lesser) capacity, i.e., at a higher rate and with more time slots. This function is generally applied before the TSI so that channels from different input multiplexes can be interchanged. Time-division switch fabrics are designated by the sequence of time and space stages through which the samples pass, e.g., TSST. The most popular general form of fabric is TST. The choice of others, e.g., STS, is dependent on the size of the fabric and growth patterns.

Analog samples can be switched in both directions through bilateral gates. An efficient and accurate transfer of the pulse is effected by a technique known as resonant transfer. For most analog and all digital time-division networks, however, the two directions of signals to be switched are separated. Therefore two reciprocal connections, or what is known as four-wire connections (the equivalent of two wires in each direction), must be established in the network. When connections transmit in only one direction, amplification and other forms of signal processing can more readily be switched into the network.
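A toy model of the time-slot interchange operation just described may make the mechanism concrete; the frame size and the connection map in the control memory are invented for illustration:

    # Toy model of a time-slot interchange (TSI): one frame of input samples is
    # written into a buffer memory, then read out in the order dictated by a
    # control memory. Double buffering lets one frame load while the previous
    # one is read, at the cost of one frame (125 us) of delay.

    FRAME_SLOTS = 8  # illustrative; real fabrics use 32 to 1024 slots

    # control_memory[k] = input slot whose sample is sent in output slot k
    control_memory = [3, 0, 7, 1, 5, 2, 6, 4]   # made-up connection map

    def tsi(frame, control):
        buffer = list(frame)                    # load the buffer memory
        return [buffer[s] for s in control]     # read out in control-memory order

    frame = [f"ch{n}" for n in range(FRAME_SLOTS)]
    print(tsi(frame, control_memory))
    # ['ch3', 'ch0', 'ch7', 'ch1', 'ch5', 'ch2', 'ch6', 'ch4']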

FIGURE 17.2.3 Time-slot interchange (TSI): time switch.


If multiplexing is performed in such a way that samples from an incoming circuit can be assigned arbitrarily to any of a number of time slots on an outgoing circuit, time-slot interchange and multiplexing are effectively achieved in a single operation. With the application of digital facilities throughout telecommunications, and in particular with the digitalization of speech, digital time-division fabrics are currently the most popular form of switching found in public networks. Digital voice communication exists throughout the public network. The ISDN (see Chap. 17.4) becomes a reality with digital access-line interfaces in the local central offices, completing the end-to-end digital capability. As a result, switched 64,000-b/s clear digital channels are now available not only for voice but also for data.

The number of time slots provided for in a time-division fabric depends on the speed employed. Typically, in a voice-band fabric there may be from 32 to 1024 time slots. The coded information in digitized samples may be sent serially (typically 8 b per sample for voice signals), in parallel, or in combinations of the two. Extra bits are sometimes added as the samples pass through the switch for checking parity, for other signals, or to allow for timing adjustments. For digital transmission, the crosspoints used in S stages of a switching fabric need not be linear.

Figure 17.2.4 shows the block diagram of the switching fabric for a no. 4 ESS, a large digital time-division switching system deployed mainly in North America. Incoming digital T-carrier streams (five T1 lines with 24 channels each) are further multiplexed to frames of 120 channels (DS120). The information is buffered in registers to permit synchronization of all inputs. The TSIs on the right side of the figure reverse the order of selecting and buffering; selected input sequences, driven by a control memory (not shown) and sequentially gated out of the buffer, attain the desired interchange in time. Note that the fabric shown is unilateral (left to right); the complete fabric includes a second unilateral fabric to carry the right-to-left portion of the conversation. This fabric has a maximum of 107,000 input channels, which can accommodate over 47,000 simultaneous conversations with essentially no blocking.

When digital time-division fabrics are designed to work with digital carrier systems (T carrier in the United States), either in the loop as pair-gain systems or line concentrators, or as interoffice trunks, the carrier-multiplexed bit streams can be synchronized and applied directly to the switch fabrics, requiring no demultiplexing. This represents a cost-advantage synergy between switching and transmission.

Frequency Division. Since frequency-multiplex carrier has been used successfully for transmission, its use for switching has been proposed. Connections are established by assigning the same carrier frequency to the two terminals to be connected. Generally, achieving this requires a tunable modulator and a tunable demodulator associated with each terminal, and therefore frequency-division switching has had little practical application. Wavelength-division switching is a version of frequency division used in optical or photonic transmission and switching. Other forms of photonic switching use true space division to switch optical paths en masse in free space (see Hinton and Miller, 1992).

Switching Fabric Technology

Broadly speaking, three types of technology have been used to implement switching fabrics. (1) From the distant past comes the manually operated switch, where wires, generally with plug ends, can be moved within the reach of the operator. (2) Electromechanical switches can be remotely controlled. They may be electromagnetically operated or power-driven. Another classification is by the contact movement distance: gross motion and fine motion. Gross-motion switches inherently have limitations in their operating speeds and tend to provide noisy transmission paths; consequently, they have seen little recent development. (3) The electronic switch is prevalent in modern design.

Electronic Crosspoints. Gross- and fine-motion switches can be used only in space-division systems. Electronic crosspoints achieve much higher operating speeds. Although they can be used in space-, time-, and frequency-division systems, they have the disadvantage of not having as high an open-to-closed impedance ratio as metallic contacts. Steps must therefore be taken to ensure that excessive transmission loss or crosstalk is not introduced into connections. The crosspoint devices are either externally triggered or are self-latching diodes of the four-layer pnpn type. The external trigger may be an electric or optical pulse. The devices have a negative-resistance characteristic


FIGURE 17.2.4 Block diagram of no. 4 ESS digital time-division fabric.


and are operated in a linear region if they are to pass analog voice or wideband signals. For fixed-amplitude pulse transmission, as in PCM, the devices need not be operated over a linear region. Electronic crosspoints are generally designed to pass low-level signals at high speed. Recently a new class of high-energy integrated-circuit crosspoints has been developed that can pass the signals used in telephone circuit switching, such as ringing and coin control.

Switching Fabric Topology

Of all the switching functions, the topology and traffic aspects of fabrics have been the most amenable to analytical treatment, although many less precise engineering and technology considerations are also involved. The simplest fabric is one provided by a single-stage rectangular switch (or, equivalently, a TSI), so that any idle input can reach any idle output. If some of the contacts are omitted, grading has been introduced, and not every input can reach every output. With the advent of electronic crosspoints and time division, grading has become less important and will not be pursued further here. When the inputs to a rectangular switch exceed the outputs, concentration is achieved; the converse is expansion.

A switching fabric is usually arranged in stages. Input lines connect to a concentration stage, several stages of distribution follow, and a last expansion stage connects to trunks or other lines. Within the design of a switching system, provision is usually made for installation of switches in only the quantity required by the traffic and the number of inputs and outputs of each particular application. To achieve this, the size of each stage, and sometimes the number of stages, is made adjustable. Consideration of control, wiring expense, transition methods (for rearranging the system during growth without stopping service), and technology leads to the configurations selected for each system. In order to achieve acceptable blocking in networks that are smaller than their maximum designed size, more parallel paths are provided from one stage to the next. In this case, because the distribution need is also reduced, the connections between stages are rewired so that those switch inputs and outputs that are required for distribution in a large network are used for additional parallel paths instead.

It is convenient to divide the fabric into groups of stages or subfabrics according to the direction of the connection. (Calls are considered as flowing from an originating circuit, associated with the request for a connection, to a terminating circuit.) Local interoffice telephone trunks, for example, are usually designed to carry traffic in only one direction. The trunk circuit appearances at a tandem office are then either originating (incoming) or terminating (outgoing). Figure 17.2.5a illustrates such an arrangement, where the whole network is unidirectional. Telephone lines are usually bidirectional: they can originate or terminate calls. For control or other design purposes, however, they can be served by unidirectional stages, as shown in Fig. 17.2.5b. Concentration and expansion are normally used with line switching to increase the internal network occupancy above that of the lines. In smaller systems a bidirectional network can serve all terminal needs: lines, trunks, service circuits, and so forth (Fig. 17.2.5c). When interconnection between trunks is required, as in a combined local-tandem office, line stages can be kept bidirectional while trunk stages are unidirectional (Fig. 17.2.5d).
When the majority of trunks are bidirectional, as may occur in a toll office, a bidirectional switching fabric is used (Fig. 17.2.5e). Many other configurations are possible.
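Fabrics and interstage link groups are sized against a blocking objective, such as the 2 percent peak-period figure cited earlier. The standard sizing tool is the Erlang B formula; a minimal sketch, with an invented offered load and link count:

    def erlang_b(servers, erlangs):
        # Blocking probability via the numerically stable Erlang B recursion.
        b = 1.0
        for k in range(1, servers + 1):
            b = erlangs * b / (k + erlangs * b)
        return b

    # Illustrative: 100 lines at 0.1 erlang each (10 erlangs offered)
    # concentrated onto 17 interstage links.
    print(round(erlang_b(17, 10.0), 4))   # 0.0129, i.e., about 1.3% blocking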

SYSTEM CONTROLS

Stored Program Control

As discussed earlier, most modern systems use some form of general-purpose stored-program control (SPC). Full SPC implies a flexibility of features, within the capability of the existing hardware, achieved by changes in the program. SPC system controls generally include two memory sets: one for the program and other semipermanent memory requirements, and one for information that changes on a real-time basis, such as the progress of telephone calls or the busy-idle status of lines, trunks, or paths in the switching network. These latter writable memories are called call stores or scratch-pad memories. The two memories may be in the same storage medium, in which case there is a need for a nonvolatile backup store such as disc or tape. Sometimes the less frequently used programs are also retrieved from this type of bulk storage when needed.
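As an illustration of the kind of record a call store holds, here is a minimal sketch of a busy-idle map, one bit per line; the structure and sizes are invented for illustration and do not depict any particular system:

    # Illustrative busy-idle map: one bit per line in writable (call-store) memory.
    class BusyIdleMap:
        def __init__(self, lines):
            self.bits = bytearray((lines + 7) // 8)       # one bit per line

        def seize(self, line):
            self.bits[line // 8] |= 1 << (line % 8)       # mark busy

        def release(self, line):
            self.bits[line // 8] &= ~(1 << (line % 8))    # mark idle

        def busy(self, line):
            return bool(self.bits[line // 8] & (1 << (line % 8)))

    m = BusyIdleMap(10_000)
    m.seize(4242)
    print(m.busy(4242), m.busy(4243))   # True False
    m.release(4242)
    print(m.busy(4242))                 # False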


FIGURE 17.2.5 Switching fabrics: (a) unidirectional network (tandem); (b) unidirectional network (local); (c) bidirectional network (local); (d) combined network (local-toll); (e) bidirectional network; line stages: C = concentration, E = expansion, D = distribution; trunk stages: O = outgoing, I = incoming. Arrows indicate direction of progress of setup of call.

Nonprogram semipermanent memory is required for such data as parameters and translations. A switching system is generally designed to cover a range of applications; memory, fabric, and other equipment modules are provided in the quantities needed for each particular switching office. Parameters define for the program the actual number of these modules in a particular installation. The translation data base provides relations between signal addresses and physical addresses, as well as other service and feature class identification information.

Central Control

Single Active. The simplest system control in concept is the common or centralized control. Before the advent of electronics, the control of a large telephone switching system generally required 10 or more central controls. The application of electronics has made it possible for a system to be fully serviced by a single control. This has the advantage of greatly simplifying the access circuits between the control and the remainder of the switching system. It also presents a single point in the system for introducing additional service capabilities. It has the disadvantage that a complete control must be provided regardless of system size, and complete redundancy is often required so that the system can continue to operate in the presence of a single trouble or while changes are being made in the control.


Redundancy is usually provided by a duplicate central control that is idle and available as a standby to replace the active unit (the unit actually in control) if it has a hardware failure. The duplicate may carry out each program step in synchronism with the active unit; matching circuits observe both units and almost instantaneously detect the occurrence of a fault in either unit. Otherwise, central-control faults can be detected by additional redundant self-checking logic built into each central-control unit or by software checks. In these latter modes of operation the central controls may be designed to operate independently and share the workload.

Load sharing allows the two (or more) central controls to handle more calls per unit time than would be possible with a single unit. However, in the event of a failure of one unit, the remaining unit(s) must carry on with reduced system capacity. Load sharing represents independent multiprocessing, where two or more processors may have full capability of handling calls and do not depend on each other. At least part of the call-store memory (containing, for example, the busy-idle indications of lines) must be accessible, either directly or through another processor, to more than one processor in order to avoid conflicting actions among processors. A small office would require less than the maximum number of processors, so that the control cost is lower for that office; as the office grows, more processors can be added. Increasing the number of processors results in decreasing added capacity per processor. Conflicts on processor access to memory and other equipment modules, with accompanying delays, grow with the number of processors, so independent multiprocessing or load sharing rapidly reaches its practical limit.

Functional Multiprocessing. Another way to allocate central-control workload is to assign different functions to different processors. Each carries out its task; together they are responsible for the total capability of the switching system. This functional, or dependent, multiprocessing arrangement can also evolve from a single central control. A small office can start with the entire program in one processor. When one or more functional processing units are added, the software is modified and apportioned on a functional basis. As in load sharing, the mutually dependent processors must communicate with each other directly or through common memory stores. In handling calls, each processor may process a portion and hand the next step to a succeeding processor, as in a factory assembly line. This sequential multiprocessing has been used in wired-logic switching systems.

Virtually all SPC-dependent multiprocessing arrangements are hierarchical. A master processor assigns the more routine tasks to subsidiary processors and maintains control of the system. The one or more subsidiary processors may be centralized or distributed. If the subsidiary processors are centralized, they have full access to network and other peripheral equipment. Distributed controls are dedicated to segments of the switching network and associated signaling circuits. As network and associated signaling equipment modules are added, the control capability is correspondingly enlarged. Most newer switching systems use distributed controls.

TYPES OF SWITCHING SYSTEMS

In the preceding paragraphs the various switching functions were described. A variety of switching systems can be assembled using these functions. The choice of system type depends on the environment and the quantity of the services the system is required to provide. Combining the various types of systems within one embodiment is also possible.

Circuit Switching

Circuit switching is generally used where visual, data, or voice messages must be delivered with imperceptible delay.

The Friis relation of Eq. (1) holds provided that r > 2d²/λ, where d is the largest dimension of either antenna. Thus, the Friis equation applies only when the two antennas are in the far field of each other. It also shows that the received power falls off as the square of the separation distance r. This 1/r² power decay in a wireless link, as exhibited in Eq. (1), is more gradual than the exponential power decay in a wired link. In actual practice, the value of the received power given in Eq. (1) should be taken as the maximum possible, because several factors can serve to reduce the received power in a real wireless system. From Eq. (1) we notice that the received power depends on the product PtGt. This product is defined as the effective isotropic radiated power (EIRP), i.e.,

EIRP = PtGt

(2)
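A numeric sketch of the Friis relation of Eq. (1) and the EIRP of Eq. (2); the transmit power, antenna gains, frequency, and range below are illustrative assumptions, not values from the text:

    import math

    def friis_received_power_w(pt_w, gt, gr, freq_hz, r_m):
        # Friis free-space relation: Pr = Pt*Gt*Gr*(lambda/(4*pi*r))**2
        lam = 3.0e8 / freq_hz
        return pt_w * gt * gr * (lam / (4 * math.pi * r_m)) ** 2

    # Illustrative link: 10-W transmitter, 12-dB gain at each end, 900 MHz, 5 km
    gt = gr = 10 ** (12 / 10)                # 12 dB expressed as a power ratio
    pr = friis_received_power_w(10.0, gt, gr, 900e6, 5000.0)
    print(pr)                                # ~7.1e-8 W received

    eirp_w = 10.0 * gt                       # EIRP = Pt * Gt, Eq. (2)
    print(10 * math.log10(eirp_w * 1000))    # ~52 dBm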

The EIRP represents the maximum radiated power available from a transmitter in the direction of maximum antenna gain, relative to an isotropic antenna.

Empirical Path Loss Formula

In addition to the theoretical model presented in the preceding section, there are empirical models for finding path loss. Of the several models in the literature, Okumura et al.'s model is the most popular choice for analyzing mobile-radio propagation because of its simplicity and accuracy. The model is based on extensive measurements in and around Tokyo between 200 MHz and 2 GHz, compiled into charts that can be applied to VHF and UHF mobile-radio propagation. The median path loss is given by

Lp = A + B log10(r)          for urban areas
Lp = A + B log10(r) − C      for suburban areas                    (3)
Lp = A + B log10(r) − D      for open areas


FIGURE 17.5.2 Radio propagation over a flat surface.

where r (in kilometers) is the distance between the base and mobile stations, as illustrated in Fig. 17.5.2. The values of A, B, C, and D are given in terms of the carrier frequency f (in MHz), the base-station antenna height hb (in meters), and the mobile-station antenna height hm (in meters) as

A = 69.55 + 26.16 log10(f) − 13.82 log10(hb) − a(hm)               (4a)
B = 44.9 − 6.55 log10(hb)                                          (4b)
C = 5.4 + 2[log10(f/28)]²                                          (4c)
D = 40.94 − 19.33 log10(f) + 4.78[log10(f)]²                       (4d)

where

a(hm) = 0.8 − 1.56 log10(f) + [1.1 log10(f) − 0.7]hm               for medium/small cities
a(hm) = 8.28[log10(1.54hm)]² − 1.1                                 for large cities, f ≤ 200 MHz      (5)
a(hm) = 3.2[log10(11.75hm)]² − 4.97                                for large cities, f ≥ 400 MHz

The following conditions must be satisfied before Eq. (3) is used: 150 < f < 1500 MHz; 1 < r < 80 km; 30 < hb < 400 m; 1 < hm < 10 m. Okumura's model has been found to be fairly good in urban and suburban areas, but not as good in rural areas.
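Equations (3) to (5) transcribe directly into code. The sketch below follows the constants as printed above (the large-city frequency breakpoints are as reconstructed in Eq. (5)); the example values are illustrative:

    import math

    def a_hm(f_mhz, hm_m, large_city=False):
        # Mobile-antenna correction factor a(hm) of Eq. (5)
        if not large_city:
            return (0.8 - 1.56 * math.log10(f_mhz)
                    + (1.1 * math.log10(f_mhz) - 0.7) * hm_m)
        if f_mhz <= 200:
            return 8.28 * math.log10(1.54 * hm_m) ** 2 - 1.1
        return 3.2 * math.log10(11.75 * hm_m) ** 2 - 4.97

    def path_loss_db(f_mhz, r_km, hb_m, hm_m, area="urban", large_city=False):
        # Median path loss of Eq. (3); valid for 150 < f < 1500 MHz,
        # 1 < r < 80 km, 30 < hb < 400 m, 1 < hm < 10 m.
        A = (69.55 + 26.16 * math.log10(f_mhz)
             - 13.82 * math.log10(hb_m) - a_hm(f_mhz, hm_m, large_city))
        B = 44.9 - 6.55 * math.log10(hb_m)
        Lp = A + B * math.log10(r_km)
        if area == "suburban":
            Lp -= 5.4 + 2 * math.log10(f_mhz / 28) ** 2      # Eq. (4c)
        elif area == "open":
            Lp -= (40.94 - 19.33 * math.log10(f_mhz)
                   + 4.78 * math.log10(f_mhz) ** 2)          # Eq. (4d)
        return Lp

    # Illustrative: 900 MHz, 5 km, 50-m base antenna, 1.5-m mobile antenna
    print(round(path_loss_db(900, 5, 50, 1.5), 1))   # 146.9 dB (urban, medium/small city)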

CORDLESS TELEPHONY

Cordless telephones first became widespread in the mid-1980s as products became available at an affordable price. The earliest cordless telephones used narrowband technology, with separate frequency channels for transmission to and from the base station. They had limited range, poor sound quality, and poor security: people could easily intercept signals from another cordless phone because of the limited number of channels. The Federal Communications Commission (FCC) granted the frequency range of 47 to 49 MHz for cordless phones in 1986 and the 900-MHz range in 1990. This eased the interference problem, reduced the power needed to run the phones, and allowed cordless phones to be clearer, broadcast over a longer distance, and choose from more channels. However, cordless phones were still quite expensive.


The use of digital technology transformed the cordless phone. Digital technology represents the voice as a series of 0s and 1s, just as a CD stores music. Digital cordless phones in the 900-MHz frequency range were introduced in 1994. Digital signals allowed the phones to be more secure and decreased eavesdropping. With the introduction of digital spread spectrum (DSS) in 1995, eavesdropping on cordless conversations was made practically impossible. The opening up of the 2.4-GHz range by the FCC in 1998 increased the distance over which a cordless phone can operate and further increased security. With cordless phone components getting smaller, more and more features and functions can be placed in phones without making them any bigger. Such functions may include voice mail, call screening, and placing outside calls. With many appealing features, there continues to be strong market interest in cordless telephones for residential and private office use. As shown in Fig. 17.5.3, the cordless telephone has gone through an evolution. This started with the 46/49-MHz telephones. Although earlier cordless telephones existed, the 46/49-MHz cordless telephones were the first to be produced in substantial quantities. The second generation used the 900-MHz frequency range, resulting in longer range. The third generation introduced spread spectrum telephones in the 900-MHz band. The fourth generation changed from the 900-MHz to the 2.4-GHz band, which is accepted worldwide. The fifth generation of cordless telephones is emerging now and employs time division multiple access (TDMA).

FIGURE 17.5.3 Evolution of cordless phone.


FIGURE 17.5.4 Cordless telephone system configuration.

Basic Features

A cordless phone basically combines the features of a telephone and a radio transmitter/receiver. As shown in Fig. 17.5.4, it consists of two major units: base and handset. The base unit interfaces with the public telephone network through the phone jack. It receives the incoming call through the phone line, converts it to an FM radio signal, and then broadcasts that signal. The handset receives the radio signal from the base, converts it to an electrical signal, and sends that signal to the speaker, where it is converted into sound waves. When someone talks, the handset broadcasts the voice through a second FM radio signal back to the base. The base receives the voice signal, converts it to an electrical signal, and sends that signal through the phone line to the other party. The base and handset operate on a frequency pair (duplex frequency) that allows one to talk and listen simultaneously.

Types of Cordless Telephone

Over the years, several types of cordless telephones have been developed. These include:

• CT1: This first-generation cordless telephone was introduced in 1983. It provides a maximum range of about 200 m between handset and base station. It is an analog phone that is primarily designed for domestic use. It employs analog radio, with eight RF channels and a frequency division multiple access (FDMA) scheme.


TABLE 17.5.1 CT1 Cordless Telephone Duplex Frequencies

Channel number    Base unit transmission frequency (kHz)    Handset transmission frequency (MHz)
1                 1642.00                                   47.45625
2                 1662.00                                   47.46875
3                 1682.00                                   47.48125
4                 1702.00                                   47.49375
5                 1722.00                                   47.50625
6                 1742.00                                   47.51875
7                 1762.00                                   47.53125 or 47.44375
8                 1782.00                                   47.54375

Operation has to be on not more than one of the pairs of frequencies shown in Table 17.5.1 at any one time (a small lookup sketch based on this table follows the list below). As the number of users grew, so did the co-channel interference levels, while the quality of the service (customer satisfaction) deteriorated.

• CT2: This second-generation cordless telephone uses digitized speech and digital transmission, thereby offering a clearer voice signal than analog CT1. Another advantage is that CT2 does not suffer from the inherent interference problems associated with CT1.

• DECT: DECT stands for digital enhanced cordless telecommunications. The DECT specification was developed by the European Telecommunications Standards Institute (ETSI) and operates throughout Europe in the frequency band 1880 to 1900 MHz. DECT provides cordless telephones with greater range, up to several hundred meters, allows encryption, provides for a greater number of handsets, and even allows data communication. It uses high-frequency signals (1.88 to 1.9 GHz) and also employs time division multiple access (TDMA), which allows several conversations to share the same frequency. Although CT1, CT2, and DECT are European standards, the US PCS standards have followed these models too. DECT is being adopted increasingly worldwide.

• PHS: The personal hand-phone system (PHS) was introduced in Japan in 1995 for private use as well as for PCS. Unlike conventional cellular telephone systems, the PHS system employs ISDN technology. With PHS, a subscriber can have two separate telephone numbers: one for the home and the other for outside the home. The PHS system uses the TDMA format because of its flexibility for call control and economy—characteristics common to the cellular system. To allow for two-way communication, forward and reverse channels are located on the same frequency by employing time-division duplex (TDD). It employs carriers spaced 300 kHz apart over a 23-MHz band from 1895 to 1918 MHz. Each carrier supports four channels—one control channel is broadcast on one carrier, while three speech channels are broadcast on other carrier waves. PHS is attracting attention around the world, particularly in Asian nations.

• ISM: The 900-MHz digital spread spectrum (DSS) cordless telephone operates in the 902 to 928 MHz industrial-scientific-medical (ISM) band. The spread spectrum systems have the additional advantage of enhanced security. The channel spacing is 1.2 MHz and there are 21 nonoverlapping channels in the band. The system is operated using TDD at a frame rate of 250 Hz. It provides clear sound, superb range, and security. It has a greater output power than other cordless phones; this increased power dramatically boosts range. The 2.4-GHz DSS cordless telephone is an upgrade of this.
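To make the duplex-pair idea concrete, the sketch below encodes Table 17.5.1 as a small lookup. The dictionary and function names are illustrative and are not part of any CT1 specification.

```python
# CT1 duplex pairs from Table 17.5.1: the base transmits near 1.7 MHz
# (values in kHz) and the handset replies near 47.5 MHz.
CT1_DUPLEX = {                      # channel: (base kHz, handset MHz)
    1: (1642.00, 47.45625),
    2: (1662.00, 47.46875),
    3: (1682.00, 47.48125),
    4: (1702.00, 47.49375),
    5: (1722.00, 47.50625),
    6: (1742.00, 47.51875),
    7: (1762.00, 47.53125),         # alternate handset frequency: 47.44375
    8: (1782.00, 47.54375),
}

def duplex_pair(channel):
    """Return the (base_kHz, handset_MHz) pair for a CT1 channel (1-8)."""
    return CT1_DUPLEX[channel]

print(duplex_pair(3))   # (1682.0, 47.48125)
```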

Cordless telephones are categorized by the radio frequency used and by whether transmission between the handset and base unit is in the form of analog or digital signals. Generally speaking, the clarity of a cordless telephone improves with the use of higher frequencies and digital technology. Regulatory authorities in each country specify and allocate the frequencies that may be used by cordless telephones, and all telephones intended for use in a country must receive that country's approval; cordless telephones are accordingly approved in the respective markets in which they are sold. Table 17.5.2 shows the common cordless telephone standards and their respective frequency ranges. In common with all areas of communications, the trend is away from analog systems toward digital systems.


TABLE 17.5.2 Comparison of Cordless Telephone Standards

                     Analog cordless telephone          Digital cordless telephone
Standard             CT1       JCT       900 MHz        DECT         PHS          ISM
Region               Europe    Japan     worldwide      Europe       Japan        USA
Frequency (MHz)      914/960   254/380   900            1880–1900    1895–1918    2400–2485
Range (km)           Up to 7   0.3       0.25           0.4          0.2          0.5

JCT—Japanese cordless telephone; PHS—Personal Hand-phone System.

PAGING

Paging started as early as 1921, when the concept of one-way information broadcasting was introduced. The 1930s saw the widespread use of radio paging by government agencies, police departments, and the armed forces in the United States. Paging systems have since undergone dramatic development. Radio transmission technology has advanced, and so have the computer hardware and firmware (computer programs) used in radio-paging systems.

One-Way Pagers

A paging system is a one-way wireless messaging system that allows continuous accessibility to someone away from the wired communications network. In its most basic form, the person on the move carries a palm-sized device (the pager) that has an identification number. The calling party inputs this number, usually through the public telephone network, to the paging system, which then signals the pager to alert the called party. Early paging systems were nonselective and operator assisted. Not only did they waste airtime, but they were also inconvenient, labor-intensive, and offered no privacy. With automatic paging, a telephone number is assigned to each pager and the paging terminal can automatically signal for voice input from the calling party. The basic paging system consists of the following components:

• Input Source: A caller enters a page from a phone or through an operator. Once it is entered, the page is sent through the public switched telephone network (PSTN) to the paging terminal for encoding and transmission through the paging system.
• Encoder: The encoder typically accepts the incoming page, checks the validity of the pager number, looks up the subscriber's pager address in the database, and converts it into the appropriate paging signaling protocol. The encoded paging signal is then sent to the transmitters (base stations).
• Base Station: This transmits page codes on an assigned radio frequency. Most base stations are designed specifically for paging, but some of those designed for two-way voice can also be used.
• Page Receivers: These are the pagers, which are basically FM receivers tuned to the same RF frequency as the paging base station in the system. A decoder in each pager recognizes the unique code assigned to the pager and rejects all other codes for selective alerting.

The most basic function of the pager is alerting. On receiving its own paging code, the receiver sets off an alert that can be audible (tone), visual (flashing indicator), or silent (vibrating). Messaging functions can also include voice and/or display (numeric/alphanumeric) messaging. Today's paging systems offer much more than the basic system described above. A paging subscriber can be alerted anytime and at almost any place, as coverage can be easily extended, even across national borders. Paging systems are increasingly migrating from tone and numeric paging to alphanumeric paging. Alphanumeric pagers display alphabetic or numeric messages entered by the calling party. The introduction of alphanumeric pagers also enables important information/data (e.g., business and financial news) to be constantly updated and monitored. Pagers that can display different ideographic languages, e.g., Chinese and Japanese, are now available in the market. The specific language supported is determined by the computer program installed in the pager.


Two-Way Pagers

The conventional paging systems are designed for one-way communication—from the network toward the pagers. Such systems provide the users one or more of the following services: beep, voice messaging, numeric messaging, and alphanumeric messaging. With recent developments in paging systems, it is possible to supply a reverse link and thus allow two-way paging services. Two-way paging offers some significant capabilities with distinct advantages. Two-way paging is essentially alphanumeric paging that lets the pager send messages, either to respond to received messages or to originate its own messages. The pagers come in various shapes and sizes. Some look almost like today's alphanumeric pagers, while some add small keyboards and larger displays. Two-way paging networks employ the 900-MHz band, a small fraction of the spectrum originally meant for PCS. Networks are built around protocols. The two-way messaging network is based on Reflex, which is basically an extension of Motorola's Flex protocol for one-way paging. Reflex is a new generation of paging protocols, but it is a proprietary protocol. In view of the fact that two-way paging is the dominant application for wide area wireless networks, and that a set of open, nonproprietary protocols is required to enable the convergence of two-way paging and the Internet, efficient mail submission and delivery (EMSD) has been designed. EMSD is an open, efficient Internet messaging protocol that is highly optimized for short messages. Devices that provide two-way paging capabilities should use this protocol (e.g., dedicated pagers, cell phones, palm PCs, handheld PCs, laptops, and desktops). The majority of pagers are used for the purpose of contacting someone on the move. The most popular type of pager used for this application is the numeric pager, which displays the telephone number to call back after alerting the paging subscriber. Although they are not yet common or cheap, the trend toward alphanumeric paging is inevitable with improved speed and better pagers. There will be more varied applications of paging, such as the sending of e-mail, voice mail, faxes, or other useful information to a pager, which will also take on more attractive and innovative forms. Future pagers may compete more aggressively with other two-way technologies, such as cellular and PCS. Although paging does not provide real-time interactive communications between the caller and the called party, it has some advantages over other forms of PCS, such as the cellular telephone. These include a smaller bandwidth requirement, larger coverage area, lower cost, and lighter weight. Owing to these advantages, paging service is bound to be in a strong competitive position among PCS services in years to come. With more than 51 million paging subscribers worldwide, major paging markets in the world, especially in Asia, continue to expand rapidly. With the use of pagers getting more and more integrated into our daily lives, we will be seeing a host of new and exciting applications. There will emerge satellite pagers, which will send and receive messages through satellite systems such as Iridium, ICO, and Globalstar. With the help of such pagers, it is possible to supply paging services on a global scale.

CELLULAR NETWORKS

The conventional approach to mobile radio involved setting up a high-power transmitter on top of the highest point in the coverage area. The mobile telephone must have a line-of-sight path to the base station for proper coverage. Line-of-sight transmission is limited by the horizon to roughly 40 to 50 miles. Also, if a mobile travels too far from its base station, the quality of the communications link becomes unacceptable. These and other limitations of conventional mobile telephone systems are overcome by cellular technology. Areas of coverage are divided into small hexagonal radio coverage units known as cells. A cell is the basic geographic unit of a cellular system. A cellular communications system employs a large number of low-power wireless transmitters to create the cells. These cells overlap at the outer boundaries, as shown in Fig. 17.5.5. Cells are base stations transmitting over small geographic areas that are represented as hexagons. Each cell's size varies depending on the landscape and tele-density. Those stick towers one sees on hilltops with triangular structures at the top are cellular telephone sites. Each site typically covers an area about 15 miles across, depending on the local terrain. The cell sites are spaced over the area to provide a slightly overlapping blanket of coverage. Like the early mobile systems, the base station communicates with mobiles via a channel. The channel is made of two frequencies—one frequency (the forward link) for transmitting information from the base station to the mobile, and the other (the reverse link) for the base station to receive information from the mobile.


FIGURE 17.5.5 A typical wireless seven-cell pattern; cells overlap to provide greater coverage.

Fundamental Features

Besides the idea of cells, the essential principles of cellular systems include cell splitting, frequency reuse, handoff, capacity, spectral efficiency, mobility, and roaming.

• Cell Splitting: As a service area becomes full of users, the single area is split into smaller ones. This way, urban regions with heavy traffic can be split into as many areas as necessary to provide acceptable service, while large cells can be used to cover remote rural regions. Cell splitting increases the capacity of the system.

• Frequency Reuse: This is the core concept that defines the cellular system. The cellular-telephone industry is faced with a dilemma: services are growing rapidly and users are demanding more sophisticated call-handling features, but the amount of EM spectrum allocated for cellular service is fixed. This dilemma is overcome by the ability to reuse the same frequency (channel) many times. Several frequency-reuse patterns are in use in the cellular industry, each with its advantages and disadvantages. A typical example is shown in Fig. 17.5.6, where all the available channels are divided into 21 frequency groups numbered 1 to 21. Each cell is assigned three frequency groups. For example, the same frequencies are reused in the cells designated 1, while adjacent cells do not reuse the same frequencies. A cluster is a group of cells; frequencies are not reused within a cluster.

• Handoff: This is another fundamental feature of cellular technology. When a call is in progress and a switch from one cell to another becomes necessary, a handoff takes place. Handoff is needed because, as a mobile user travels from one cell to another during a call, adjacent cells do not use the same radio channels, so the call must be either dropped or transferred from one channel to another. Dropping the call is not acceptable; handoff was created to solve the problem. A number of algorithms are used to generate and process a handoff request and the eventual handoff order. Handing off from cell to cell is the process of transferring the mobile unit that has a call on a voice channel to another voice channel, all done without interfering with the call. The need for handoff is determined by the quality of the signal, whether it is weak or strong. A handoff threshold is predefined. When the received signal level is weak and reaches the threshold, the system provides


FIGURE 17.5.6 Frequency reuse in a seven-cell pattern cellular system.

a stronger channel from an adjacent cell. This handoff process continues as the mobile moves from one cell to another, as long as the mobile is in the coverage area.

• Mobility and Roaming: Mobility implies that a mobile user while in motion will be able to maintain the same call without service interruption. This is made possible by the built-in handoff mechanism that assigns a new frequency when the mobile moves to another cell. Because several cellular operators within the same region use different equipment, and a subscriber is registered with only one operator, some form of agreement is necessary to provide services to subscribers. Roaming is the process whereby a mobile moves out of its own territory and establishes a call from another territory. If we consider a cell (an area) with a perimeter L in which ρ mobile units per unit area are located, the average number of users M crossing the cell boundary per unit time is

$$M = \frac{\rho V L}{\pi} \qquad (6)$$

where V is the average velocity of the mobile units.


• Capacity: This is the number of subscribers that can use the cellular system. For an FDMA system, the capacity is determined by the loading (number of calls and the average time per call) and the system layout (size of cells and amount of frequency reuse utilized). Capacity expansion is required because cellular systems must serve more subscribers. It takes place through frequency reuse, cell splitting, planning, and redesigning of the system.

• Spectral Efficiency: This is a performance measure of the efficient use of the frequency spectrum. It is the most desirable feature of a mobile communication system. It produces a measure of how efficiently space, frequency, and time are used. Expressed in channels/MHz/km², the channel efficiency is given by

$$\eta = \frac{\text{total no. of channels available in the system}}{\text{bandwidth} \times \text{total coverage area}} = \frac{\dfrac{B_w}{B_c \times N} \times N_c}{B_w \times N_c \times A_c} = \frac{1}{B_c \times N \times A_c} \qquad (7)$$

where Bw = bandwidth of the system in MHz
      Bc = channel spacing in MHz
      Nc = number of cells in a cluster
      N = frequency reuse factor of the system
      Ac = area covered by a cell in km²
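As a quick numerical check of Eqs. (6) and (7), the sketch below evaluates the boundary-crossing rate and the spectral efficiency. All of the numbers are illustrative assumptions rather than handbook data.

```python
import math

# Eq. (6): average number of users crossing a cell boundary per unit time.
rho = 100.0               # mobile units per km^2 (assumed)
V = 40.0                  # average mobile speed, km/h (assumed)
L = 12.0                  # cell perimeter, km (e.g., a hexagon with 2-km sides)
M = rho * V * L / math.pi
print(f"M = {M:.0f} boundary crossings per hour")

# Eq. (7): spectral efficiency in channels/MHz/km^2.
Bc = 0.030                # channel spacing, MHz (30-kHz AMPS-style channels)
N = 7                     # frequency reuse factor (seven-cell pattern)
Ac = 10.0                 # area covered by one cell, km^2 (assumed)
eta = 1.0 / (Bc * N * Ac)
print(f"eta = {eta:.3f} channels/MHz/km^2")
```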

Cellular System

A typical cellular network is shown in Fig. 17.5.7. It consists of the following three major hardware components [3]:

• Cell Site (Base Stations): The cell site acts as the user-to-MTSO interface, as shown in Fig. 17.5.7. It consists of a transmitter and two receivers per channel, an antenna, a controller, and data links to the cellular office. Up to 12 channels can operate within a cell, depending on the coverage area.

• Mobile Telephone Switching Office (MTSO): This is the physical provider of connections between the base stations and the local exchange carrier. The MTSO is also known as the mobile switching center (MSC) or digital multiplex switch-mobile telephone exchange (DMS-MTX), depending on the manufacturer. It manages and controls cell-site equipment and connections. It supports multiple-access technologies such as AMPS, TDMA, CDMA, and CDPD. As a mobile moves from one cell to another, it must continually send messages to the MTSO to verify its location.

• Cellular (Mobile) Handset: This provides the interface between the user and the cellular system. It is essentially a transceiver with an antenna and is capable of tuning to all channels (666 frequencies) within a service area. It also has a handset and a number assignment module (NAM), which is a unique address given to each cellular phone.

Cellular Standards

Because of the rapid development of cellular technology, different standards have resulted. These include:

• Advanced Mobile Phone System (AMPS): This standard was introduced in 1979. Although it was developed and used in North America, it has also been used in over 72 countries. It operates in the 800-MHz frequency band and is based on FDMA. The mobile transmit channels are in the 825- to 845-MHz range, while the mobile receive channels are in the 870- to 890-MHz range (a channel-numbering sketch based on these bands appears after this list). There is also digital AMPS, which is also known as TDMA (or IS-54). FDMA systems allow only a single mobile telephone call on a radio channel;


FIGURE 17.5.7 A typical cellular network.

each voice channel can communicate with only one mobile telephone at a time. TDMA systems allow several mobile telephones to communicate at the same time on a single radio carrier frequency. This is achieved by dividing their signals into time slots.

• IS-54 and IS-95: IS-54 is a North American standard developed by the Electronic Industries Association (EIA) and the Telecommunications Industry Association (TIA) to meet the growing demand for cellular capacity in high-density areas. It is based on TDMA, and it retains the 30-kHz channel spacing of AMPS to facilitate evolution from analog to digital systems. The IS-95 standard was also adopted by EIA/TIA. It is based on CDMA, a spread-spectrum technique that allows many users to access the same band by assigning a unique orthogonal code to each user.

• Global System for Mobile Communications (GSM): This is a digital cellular standard developed in Europe and designed to operate in the 900-MHz band. It is a globally accepted standard for digital cellular communication. It uses a 200-kHz channel divided into eight time slots, with frequency division multiplexing (FDM) across carriers. The technology allows international roaming and provides integrated cellular systems across different national borders. GSM is the most successful digital cellular system in the world. It is estimated that many countries outside Europe will join the GSM partnership.

• Personal Digital Cellular (PDC): This is a digital cellular standard developed in Japan. It was designed to operate in the 800-MHz and 1.5-GHz bands.


• Future Public Land Mobile Telecommunication Systems (FPLMTS): This is a new standard being developed in the ITU to form the basis for third-generation wireless systems. It will consolidate today's increasingly diverse and incompatible mobile environments into a seamless infrastructure that will offer a diverse portfolio of telecommunication services to an exponentially growing number of mobile users on a global scale. It is a digital system based on the 1.8- to 2.2-GHz band and is being tested to gain valuable user and operator experience. In many European countries, the use of GSM has allowed cross-country roaming. However, global roaming has not been realized because there are too many incompatible standards.
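The AMPS band plan quoted above (mobile transmit 825 to 845 MHz, mobile receive 870 to 890 MHz, 30-kHz channels, 666 frequencies) implies a fixed 45-MHz duplex offset between the two directions. The sketch below derives a channel's frequency pair from those figures; the channel-numbering convention used here is an illustrative assumption.

```python
def amps_pair(n):
    """Return (mobile_tx_MHz, mobile_rx_MHz) for channel n = 1..666."""
    if not 1 <= n <= 666:
        raise ValueError("this band plan defines 666 channels")
    tx_khz = 825_000 + n * 30          # 30-kHz steps; integer kHz avoids float drift
    return tx_khz / 1000, (tx_khz + 45_000) / 1000   # 45-MHz duplex offset

print(amps_pair(1))     # (825.03, 870.03)
print(amps_pair(666))   # (844.98, 889.98) -- just inside the 845/890 MHz edges
```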

PERSONAL COMMUNICATION SYSTEMS

The GSM digital network has pervaded Europe and Asia. A comparable technology known as PCS is beginning to make inroads in the United States. According to the FCC, "PCS is the system by which every user can exchange information with anyone, at anytime, in any place, through any type of device, using a single personal telecommunication number (PTN)." PCS is an advanced phone service that combines the freedom and convenience of wireless communications with the reliability of the legacy telephone service. Both GSM and PCS promise clear transmissions, digital capabilities, and sophisticated encryption algorithms to prevent eavesdropping. PCS is a new concept that will expand the horizon of wireless communications beyond the limitations of current cellular systems to provide users with the means to communicate with anyone, anywhere, anytime. It is called PCS by the FCC and personal communications networks (PCN) by the rest of the world. Its goal is to provide integrated communications (such as voice, data, and video) between nomadic subscribers irrespective of time, location, and mobility patterns. It promises near-universal access to mobile telephony, messaging, paging, and data transfer. PCS/PCN networks and the existing cellular networks should be regarded as complementary rather than competitive. One may view PCS as an extension of the cellular network to the 1900-MHz band, using identical standards. Major factors that separate cellular networks from PCS networks are speech quality, complexity, flexibility of radio-link architecture, economics of serving high-user-density or low-user-density areas, and power consumption of the handsets. Table 17.5.3 summarizes the differences between the two technologies and services. PCS offers a number of advantages over traditional cellular communications:

• A truly personal service, combining lightweight phones with advanced features such as paging and voice mail that can be tailored to each individual customer
• Less background noise and fewer dropped calls
• Affordable, fully integrated voice and text messaging that works just about anywhere, anytime
• A more secure all-digital network that minimizes chances of eavesdropping or number cloning
• An advanced radio network that uses smaller cell sites
• A state-of-the-art billing and operational support system

TABLE 17.5.3 Comparison of Cellular and PCS Technologies

Cellular                                         PCS
Fewer sites required to provide coverage         More sites required to provide coverage (e.g., a 20:1 ratio)
More expensive equipment                         Less expensive cells
Higher costs for airtime                         Airtime costs dropping rapidly
High antenna and more space needed for site      Smaller space for the microcell
Higher power output                              Lower power output


FIGURE 17.5.8 Various cell sizes.

Basic Features

PCS refers to digital wireless communications and services operating at broadband (1900 MHz) or narrowband (900 MHz) frequencies. Thus there are three categories of PCS: broadband, narrowband, and unlicensed. Broadband PCS addresses both cellular and cordless handset services, while narrowband PCS focuses on enhanced paging functions. Unlicensed service is allocated from 1910 to 1930 MHz and is designed to allow unlicensed short-distance operation. The salient features that enable PCS to provide communications with anyone, anywhere, anytime include:

• Roaming Capability: The roaming service should be greatly expanded to provide universal accessibility. PCS will have the capability to support global roaming.
• Diverse Environment: Users must be able to use the PCS in all types of environments, e.g., urban, rural, commercial, residential, mountainous, and recreational areas.
• Various Cell Sizes: With PCS, there will be a mix of broad types of cell sizes: the picocell for low-power indoor applications, the microcell for lower-power outdoor pedestrian applications, the macrocell for high-power vehicular applications, and the supermacrocell with satellites, as shown in Fig. 17.5.8. For example, a picocell of a PCS will be in the 10 to 30 m range; a microcell may have a radius of 50 to 150 m; and a macrocell may have a radius of 1 km.
• Portable Handset: PCS provides a low-power radio, switched-access connection to the PSTN. The user should be able to carry a single, small, universal handset outside without having to recharge its battery.
• Single PTN: The user can be reached through a single PTN regardless of the location and the type of service used.

The FCC frequency allocation for PCS usage is significant. The FCC allocated 120 MHz for licensed operation and another 20 MHz for unlicensed operation, for a total of 140 MHz for PCS, which is three times the spectrum currently allocated to cellular networks. The FCC's frequency allocations for PCS are shown in Tables 17.5.4 and 17.5.5 for licensed and unlicensed operation. To use the PCS licensed frequency band, a company must obtain a license from the FCC. To use the unlicensed (or unregulated) PCS spectrum, a company must use equipment that conforms to the FCC unlicensed requirements, which include low-power transmission to prevent interference with other users in the same frequency band.

PCS Architecture

A PCS network is a wireless network that provides communication services to PCS subscribers. The service area of the PCS network is populated with base stations, which are connected to a fixed wireline network


TABLE 17.5.4 The PCS Frequency Bands for Licensed Operation

Block    Spectrum low side (MHz)    Spectrum high side (MHz)    Bandwidth (MHz)
A        1850–1865                  1930–1945                   30
D        1865–1870                  1945–1950                   10
B        1870–1885                  1950–1965                   30
E        1885–1890                  1965–1970                   10
F        1890–1895                  1970–1975                   10
C        1895–1910                  1975–1990                   30
Total                                                           120

through mobile switching centers (MSCs). As in a cellular network, the radio coverage area of a base station is called a cell. The base station locates a subscriber or mobile unit and delivers calls to and from the mobile unit by means of paging within the cell it serves. PCS architecture resembles that of a cellular network, with some differences. The structure of the local portion of a PCS network is shown in Fig. 17.5.9. It basically consists of five major components:

• Terminals installed in the mobile unit or carried by pedestrians
• Cellular base stations to relay signals
• Wireless switching offices that handle switching and routing of calls
• Connections to the PSTN central office
• A database of customers and other network-related information

Since the goal of PCS is to provide anytime-anywhere communication, the end device must be portable, and both real-time interactive communication (e.g., voice) and data services must be available. PCS should be able to integrate or accommodate the current PSTN, ISDN, the paging system, the cordless system, the wireless PBX, the terrestrial mobile system, and the satellite system. The range of applications associated with PCS is depicted in Fig. 17.5.10.

PCS Standards

The Joint Technical Committee (JTC) has been responsible for developing standards for PCS in the United States. The JTC worked cooperatively with the TIA committee working on the TR-46 reference model and the ATIS committee working on the T1P1 reference model. Unlike GSM, PCS is unfortunately not a single standard but a mosaic consisting of several incompatible versions coexisting rather uneasily with one another. One major obstacle to PCS adoption in the United States has been the industry's failure to sufficiently convince customers of the advantages of PCS over AMPS, which already offers a single standard. This places the onus on manufacturers to pack phones with features that attract market attention without compromising the benefits inherent in cellular phones. However, digital cellular technology enjoys distinct advantages. Perhaps the most significant advantage involves security, because AMPS signals cannot be adequately encrypted.

TABLE 17.5.5 The PCS Frequency Bands for Unlicensed Operation

Block          Spectrum (MHz)    Bandwidth (MHz)
Isochronous    1910–1920         10
Asynchronous   1920–1930         10
Total                            20


FIGURE 17.5.9 Structure of PCS network.

Satellites are instrumental in achieving global coverage and providing PCS services. Mobile satellite communications for commercial users is evolving rapidly toward PCS systems to provide basic telephone, fax, and data services virtually anywhere on the globe. Satellite orbits are being moved closer to the earth, improving communication speed and enabling PCS services. Global satellite systems are being built for personal communications. In the United States, the FCC licensed five such systems: Iridium, Globalstar,

FIGURE 17.5.10 Range of applications associated with PCS.


Odyssey, Ellipso, and Aries. In Europe, ICO-Global is building ICO. Japan, Australia, Mexico, and India are making similar efforts. Future growth and success of PCS services cannot be taken for granted. Like any new technology, the success of the PCS system will depend on a number of factors. These include the initial overall system cost, the quality and convenience of the service provided, and the cost to subscribers.

WIRELESS DATA NETWORKS Wireless data networks are designed for low speed data communications. The proliferation of portable computers coupled with the increasing usage of the Internet and the mobile user’s need for communication is the major driving force behind these networks. Examples of such networks include CDPD, wireless LAN, and wireless ATM.

Cellular Digital Packet Data

Cellular digital packet data (CDPD) is the latest in wireless data communication. CDPD systems offer one of the most advanced means of wireless data transmission technology. CDPD is a cellular standard aimed at providing Internet protocol (IP) data service over the existing cellular voice networks and circuit-switched telephone networks. The technology solves the problem of business individuals on the move who must communicate data between their work base and remote locations. The idea of CDPD was formed in 1992 by a development consortium of key industry leaders including IBM, six of the seven regional Bell operating companies, and McCaw Cellular. The goal was to create a uniform standard for sending data over existing cellular telephone channels. The Wireless Data Forum (www.wirelessdata.org), formerly known as the CDPD Forum, has emerged as a trade association for wireless data service providers and currently has over 90 members. CDPD has been defined by the CDPD Forum CDPD Specification R1.1 and operates over AMPS. By building CDPD as an overlay to the existing cellular infrastructure and using the same frequencies as cellular voice, carriers are able to minimize capital expenditures. It costs approximately $1 million to implement a new cellular cell site and only about $50,000 to build the CDPD overlay to an existing site. CDPD is designed to exploit the capabilities of the advanced mobile phone service (AMPS) infrastructure throughout North America. One weakness of cellular telephone channels is that there are moments when the channels are idle (roughly 30 percent of the air time is unused). CDPD exploits this by detecting the otherwise wasted moments and sending packets during the idle time. As a result, data are transmitted without affecting voice system capability. CDPD transmits digital packet data at 19.2 kbps, using idle times between cellular voice calls on the cellular telephone network. CDPD has the following features:

• It is an advanced form of radio communication operating in the 800- and 900-MHz bands.
• It shares the use of the AMPS radio equipment on the cell site.
• It supports multiple, connectionless sessions.
• It uses the Internet protocol (IP) and the open systems interconnection (OSI) connectionless network protocol (CLNP).
• It is fairly painless for users to adopt. To gain access to the CDPD infrastructure, one only requires a special CDPD modem.
• It supports the TCP/IP protocols as well as the international set of equivalent standards.
• It was designed with security in mind, unlike other wireless services. It provides for encryption of the user's data and conceals the user's identity over the air link.

CDPD provides the following services:

• Data rate of 19.2 kbps.


FIGURE 17.5.11 Major components of a CDPD network.

• Connectionless as the basic service; a user may build a connection-oriented service on top of that if desired.
• All three modes of point-to-point, multicast, and broadcast are available.
• Security that involves authentication of users and data encryption.

CDPD is a packet-switched data transfer technology that employs radio frequency (RF) spectrum in an existing analog mobile phone system such as AMPS. The CDPD overlay network is made of some major components that operate together to provision the overall service. The key components that define the CDPD infrastructure are illustrated in Fig. 17.5.11. They are as follows:

• Mobile End System (MES): This is the subscriber's device for gaining access to the wireless communication services offered by a CDPD service. It is any mobile computing device equipped with a CDPD modem. Examples of an MES are laptop computers, palmtop computers, personal digital assistants (PDAs), or any portable computing devices.
• Fixed End System (FES): This is a stationary computing device (e.g., a host computer, a UNIX workstation, and so forth) connected to landline networks. The FES is the final destination of the message sent from an MES.
• Intermediate System (IS): This is made up of routers that are CDPD compatible. It is responsible for routing data packets into and out of the CDPD service provider network. It may also perform gateway and protocol conversion functions to aid network interconnection.
• Mobile Data Base Station (MDBS): CDPD uses a packet-switched system that splits data into small packets and sends them across the voice channel. This involves detecting idle time on the voice channel and sending the packets on the appropriate unoccupied voice frequencies. This detection of unoccupied frequencies and sending of packets is done by the MDBS. Thus, the MDBS is responsible for relaying data between the mobile units and the telephone network. In other words, it relays data packets from the MES to the mobile data intermediate system (MDIS) and vice versa.


• Mobile Data Intermediate System (MDIS): MDBSs that service a particular cell can be grouped together and connected to a backbone router, known as the MDIS. The MDIS units form the backbone of the CDPD network. All mobility management functions are taken care of by the MDIS. In other words, the MDIS is responsible for keeping track of the MES's location and routing data packets to and from the CDPD network and the MES appropriately.

Very little new equipment is needed for CDPD service, since existing cellular networks are used. Only the MDBSs are to be added to each cell. One can purchase CDPD cellular communication systems for Windows or MS-DOS computers. The hardware can be a handheld AMPS telephone or a small modem that can be attached to a notebook computer. One would need to put up the antenna on the modem.

In order to effectively integrate voice and data traffic on the same cellular network without degrading the service provided for the voice customer, the CDPD network employs a technique known as channel hopping. When a mobile unit wants to transmit, it checks for an available cellular channel. Once a channel is found, the data link is established and the mobile unit can use the assigned channel to transmit as long as the channel is not needed for voice communication. Because voice is king, data packets are sent only after giving priority to voice traffic. Therefore, if a cellular voice customer needs the channel, the voice call takes priority over the data transmission. In that case, the mobile unit is advised by the MDBS to "hop" to another available channel. If there are no other available channels, then extra frequencies purposely set aside for CDPD can be used. This is a rare situation, because each cell typically has 57 channels and each channel has an average idle time of 25 to 30 percent. The process of establishing and releasing channel links is called channel hopping, and it is completely transparent to the mobile data unit. It ensures that the data transmission does not interfere with the voice transmission. It usually occurs within the call setup phase of the voice call. The major disadvantage of channel hopping is the potential interference to the cellular system.

CDPD has been referred to as an "open" technology because it is based on the OSI reference model, as shown in Fig. 17.5.12. The CDPD network comprises many layers: layer 1 is the physical layer; layer 2 is the data link layer; layer 3 is the network layer; and so forth. For example, the physical layer corresponds to a functional entity that accepts a sequence of bits from the medium access control (MAC) layer and transforms them into a modulated waveform for transmission onto a physical 30-kHz RF channel.
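The following toy simulation illustrates the channel-hopping behavior just described: data defers to voice and hops to an idle channel. The 57-channel pool comes from the text; the traffic pattern, probabilities, and function names are invented for illustration.

```python
import random

CHANNELS = set(range(1, 58))   # a cell with 57 voice channels (see text)

def idle_channels(busy):
    """Channels not currently claimed by voice calls."""
    return CHANNELS - busy

def transmit(packets, p_voice=0.1, seed=1):
    """Send data packets, hopping whenever a voice call claims our channel."""
    rng = random.Random(seed)
    busy = set(rng.sample(sorted(CHANNELS), 40))   # channels in voice use
    ch = None
    for pkt in packets:
        # A voice call may arrive on our channel; voice always has priority.
        if ch is not None and rng.random() < p_voice:
            busy.add(ch)
        if ch is None or ch in busy:
            free = idle_channels(busy)
            if not free:
                print(f"packet {pkt}: all channels busy, deferred")
                continue
            ch = rng.choice(sorted(free))          # hop to an idle channel
            print(f"hopped to channel {ch}")
        print(f"packet {pkt} sent on channel {ch}")

transmit(range(5))
```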

FIGURE 17.5.12 OSI reference model.


The network can use either the ISO connectionless network protocol (CLNP) or the transmission control protocol/Internet protocol (TCP/IP). For now, CDPD can coexist with PCS and CDMA-based infrastructure.

Wireless LAN

A wireless local area network (WLAN) is a new form of communication system. It is basically a local area network, confined to a geographically small area such as a single building, office, store, or campus, that provides high data connectivity to mobile stations. Using electromagnetic airwaves (radio frequency or infrared), WLANs transmit and receive data over the air. A WLAN promises less expensive, fast, and simple network installation and reconfiguration. WLAN does not compete with wired LAN. Rather, WLANs are used to extend wired LANs for convenience and mobility. Wireless links essentially fill in for wired links using electromagnetic radiation at radio or light frequencies between transceivers. A typical WLAN consists of an access point and a WLAN adapter installed on a portable notebook. The access point is a transmitter/receiver (transceiver) device; it is essentially the wireless equivalent of a regular LAN hub. An access point is typically connected to the wired backbone network at a fixed location through a standard Ethernet cable and communicates with wireless devices by means of an antenna. WLANs operate within the prescribed 900-MHz, 2.4-GHz, and 5.8-GHz frequency bands. Most WLANs use the 2.4-GHz band because it is the most widely accepted. A wireless link can provide services in several ways. One way is to act as a stand-alone WLAN for a group of wireless nodes. This can be achieved using topologies similar to a wired LAN: a star topology can be formed with a central hub controlling the wireless nodes; a ring topology with each wireless node receiving or passing information sent to it; or a bus topology with each wireless node capable of hearing everything said by all the other nodes. A typical WLAN configuration is shown in Fig. 17.5.13.

FIGURE 17.5.13 Connection of a wired LAN to wireless nodes.


When designing WLANs, manufacturers have to choose between the two main technologies that are used for wireless communications today: radio frequency (RF) and infrared (IR). Each technology has its own merits and demerits. RF is used for applications where communications are over long distances and are not line-of-sight. In order to operate in the license-free portion of the frequency spectrum known as the ISM band (industrial, scientific, and medical), the RF system must use a modulation technique called spread spectrum (SS). The second technology used in WLANs is infrared, where the communication is carried by light in the invisible part of the spectrum. It is primarily used for very short distance communications (less than 1 m), where there is a line-of-sight connection. Since IR light does not penetrate solid materials (it is even attenuated greatly by window glass), it is not really useful in comparison to RF in WLAN systems. However, IR is used in applications where the power is extremely limited, such as in a pager.

Wireless ATM

Asynchronous transfer mode (ATM) technology is the result of efforts to devise a transmission and networking technology to provide high-speed broadband integrated services: a single infrastructure for data, voice, and video. Until recently, the integration of wireless access and mobility with ATM had received little attention. The concept of wireless ATM (WATM) was first proposed in 1992. It is now regarded as the potential framework for next-generation wireless broadband communications that will support integrated quality-of-service (QoS) multimedia services. WATM technology is currently migrating from the research stage to standardization and early commercialization.

FIGURE 17.5.14 A typical wireless ATM network.


A wireless ATM network is basically the wireless extension of a fixed ATM network. The 53-byte ATM cell is too big for the wireless link; therefore, WATM networks may use 16- or 24-byte payloads. In a wireless ATM network, information is transmitted across a large number of small coverage areas called picocells. Each picocell is served by a base station, while all the base stations in the network are connected via the wired ATM network. The ATM header is compressed over the air and expanded to a standard ATM cell at the base station. Base stations are simple cell relays that translate the header formats from the wireless ATM network to the wired ATM network. ATM cells are transmitted via radio frames between a central station (B-CS) and user radio modules (B-RM), as shown in Fig. 17.5.14. All base stations operate on the same frequency, so that there is no hard boundary between picocells. Reducing the size of the picocells helps in mitigating some of the major problems related to wireless LANs. The main difficulties encountered are the delay spread because of multipath effects and the lack of a line-of-sight path, which results in high attenuation. Also, small cells have some drawbacks compared to large cells. From Fig. 17.5.14, we notice that a wireless ATM network typically consists of three major components: (1) ATM switches with standard UNI/NNI capabilities, (2) ATM base stations, and (3) wireless ATM terminals with a radio network interface card (NIC). There are two new hardware components: the ATM base station and the WATM NIC. The new software components are the mobile ATM protocol extension and the WATM UNI driver. In conventional mobile networks, transmission cells are "colored" using frequency-division multiplexing or code-division multiplexing to prevent interference between cells. Coloring is considered a waste of bandwidth because, for it to be successful, there must be areas between reuses of a color in which that color sits idle. These inactive areas are wasted rather than being used for transmission. The wireless ATM architecture is based on the integration of radio access and mobility features. The idea is to fully integrate a new wireless physical layer (PHY), medium access control (MAC), data link control (DLC), and wireless control and mobility signaling functions into the ATM protocol stack. Wireless ATM is not as mature as wireless LAN. No standards have been defined by either the ITU-T or the ATM Forum. However, the ATM Forum's WATM Working Group (started in June 1996) is developing specifications that will facilitate deployment of WATM.
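As a rough illustration of the header compression and expansion described above, the sketch below shrinks a standard 53-byte ATM cell into reduced radio cells and rebuilds it at the base station. The 5-byte header, 48-byte payload, and reduced payload size come from the text; the 2-byte compressed header and field layout are invented for illustration and do not follow any WATM specification.

```python
ATM_HEADER, ATM_PAYLOAD = 5, 48      # standard 53-byte ATM cell
RADIO_HEADER, RADIO_PAYLOAD = 2, 24  # one of the reduced sizes in the text

def to_radio_cells(atm_cell):
    """Split one ATM cell into radio cells with a compressed header."""
    assert len(atm_cell) == ATM_HEADER + ATM_PAYLOAD
    vc_id = atm_cell[:RADIO_HEADER]           # pretend 2 bytes identify the VC
    payload = atm_cell[ATM_HEADER:]
    return [vc_id + payload[i:i + RADIO_PAYLOAD]
            for i in range(0, ATM_PAYLOAD, RADIO_PAYLOAD)]

def to_atm_cell(radio_cells, full_header):
    """Base station reassembles the payload and restores the 5-byte header."""
    payload = b"".join(cell[RADIO_HEADER:] for cell in radio_cells)
    return full_header + payload

cell = bytes(5) + bytes(range(48))
assert to_atm_cell(to_radio_cells(cell), cell[:5]) == cell
```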


CHAPTER 17.6

DATA NETWORKS AND INTERNET

Matthew N. O. Sadiku

The coming of the information age has brought about unprecedented growth in telecommunications-based services, driven primarily by the Internet, the information superhighway. Within a short period of time, the volume of data traffic transported across communications networks has grown rapidly and now exceeds the volume of voice traffic. While voice networks, such as the ubiquitous telephone network, have been in use for over a century, computer data networks are a recent phenomenon. A computer communications network is an interconnection of different devices that enables them to communicate among themselves. Computer networks are generally classified into three groups on the basis of their geographical scope: local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs). These networks differ in geographic scope, type of organization using them, types of services provided, and transmission techniques. LANs and WANs are well-established communication networks; MANs are relatively new. On the one hand, a LAN is used to connect equipment owned by the same organization over relatively short distances. Its performance degrades as the area of coverage becomes large; thus LANs have limitations of geography, speed, traffic capacity, and the number of stations they are able to connect. On the other hand, a WAN provides long-haul communication services to various points within a large geographical area, e.g., a nation or continent. With some of the characteristics of LANs and some reflecting WANs, the MAN embraces the best features of both. We begin this chapter by looking at the open systems interconnection (OSI) reference model, which is commonly used to describe the functions involved in data communication networks. We then examine different LANs, MANs, and WANs, including the Internet.

OSI REFERENCE MODEL

There are at least two reasons for needing a standard protocol architecture such as the OSI reference model. First, the uphill task of understanding, designing, and constructing a computer network is made more manageable by dividing it into structured smaller subtasks. Second, the proliferation of computer systems has created heterogeneous networks—different vendors, different models from the same vendor, different data formats, different network management protocols, different operating systems, and so on. A way to resolve this heterogeneity is for vendors to abide by the same set of rules. Attempts to formulate these rules have preoccupied standards bodies such as the International Standards Organization (ISO), the Consultative Committee for International Telephone and Telegraph (CCITT) [now known as the International Telecommunication Union (ITU)], the Institute of Electrical and Electronics Engineers (IEEE), the American National Standards Institute (ANSI), the British Standards Institution (BSI), and the European Computer Manufacturers Association (ECMA). Here we consider the more universal standard protocol architecture developed by ISO. The ISO divides the task of networking computers into seven layers so that manufacturers can develop their own applications and implementations within the guidelines of each layer. In 1978, the ISO set up a committee



to develop a seven-layer model of network architecture (initially for WANs), known as the OSI model. The model serves as a means of comparing different layers of communication networks. Also, the open model is standards-based rather than proprietary: one system can communicate with another system using interfaces and protocols that both systems understand. Network users and vendors thus have "open systems" in which any standards-conforming computer device is able to interoperate with others. The seven layers of the OSI model are shown in Fig. 17.6.1 and briefly explained as follows. We begin with the application layer (layer 7) and work our way down.

FIGURE 17.6.1 OSI reference model.

• Application Layer: This layer (layer 7) allows information to be transferred between application processes. It is implemented with host software, is composed of specific application programs, and its content varies with individual users. By application we mean a set of information-processing tasks desired by the user. Typical applications (or user programs) include login, password checking, word processing, spreadsheets, graphics programs, document transfer, electronic mail, virtual terminal emulation, remote database access, network management, bank balances, stock prices, credit checks, inventory checks, and airline reservations. Examples of application-layer protocols are Telnet (remote terminal protocol), file transfer protocol (FTP), simple mail transfer protocol (SMTP), remote login service (rlogin), and remote copy protocol (rcp).

• Presentation Layer: This layer (layer 6) presents information in a way that is meaningful to the network user. It performs functions such as translation of character sets, interpretation of graphics commands, data compression/decompression, data reformatting, and data encryption/decryption. Popular character sets include the American Standard Code for Information Interchange (ASCII), the Extended Binary Coded Decimal Interchange Code (EBCDIC), and International Alphabet No. 5.

• Session Layer: A session is a connection between users. The session layer (layer 5) establishes the appropriate connection between users and manages the dialog between them, i.e., it controls the starting, stopping, and synchronization of the dialog. It decides the type of communication, such as two-way simultaneous (full duplex), two-way alternate (half duplex), one-way, or broadcast. It is also responsible for checking user authenticity and providing billing; for example, login and logout are the responsibilities of this layer. IBM's network basic input/output system (NetBIOS), NetWare's sequenced packet exchange (SPX), manufacturing automation protocol (MAP), and technical and office protocol (TOP) operate at this layer.

• Transport Layer: This layer (layer 4) uses the lower layers to establish reliable end-to-end transport connections for the higher layers. Its other function is to provide the functions and protocols needed to satisfy the quality of service (QoS), expressed in terms of time delay, throughput, priority, cost, and security, required by the session layer. It creates several logical connections over the same network by multiplexing end-to-end user addresses onto the network. It fragments messages from the session layer into smaller units (packets or frames) and reassembles the packets into messages at the receiving end. It also controls the end-to-end flow of packets, performs error control and sequence checking, acknowledges successful transmission of packets, and requests retransmission of corrupted packets. For example, the transmission control protocol (TCP) of TCP/IP and the Internet transport protocol (ITP) of Xerox operate at this level.

• Network Layer: This layer (layer 3) handles routing procedures and flow control. It establishes routes (virtual circuits) for packets to travel, routes the packets from their source to their destination, and controls congestion. (Routing is of greater importance on MANs and WANs than on LANs.) It carries addressing information that identifies the source and ultimate destination, counts transmitted bits for billing information, and ensures that packets arrive at their destination in a reasonable amount of time. Examples of protocols designed for layer 3 are the X.25 packet-switching protocol and the X.75 gateway protocol, both by CCITT. Also, the Internet protocol (IP) of TCP/IP and NetWare's Internetwork Packet Exchange (IPX) operate at this layer.

• Data Link Layer: This layer (layer 2) specifies how a device gains access to the medium specified in the physical layer. It converts the bit pipe provided by the physical layer into a packet link, which is a facility for transmitting packets, and deals with procedures and services related to node-to-node data transfer. A major difference between the data link layer and the transport layer is that the domain of the data link layer is between adjacent nodes, whereas that of the transport layer is end-to-end. In addition, the data link layer ensures error-free delivery of data; hence it is concerned with error detection, error correction, and retransmission. Error control is usually implemented by computing a cyclic redundancy check (CRC) over all bits of a packet, so that transmission errors can be detected. The layer is implemented in hardware and is highly dependent on the physical medium. Typical examples of data link protocols are binary synchronous communications (BSC), synchronous data link control (SDLC), and high-level data link control (HDLC). For LANs and MANs, the data link layer is decomposed into the media access control (MAC) and logical link control (LLC) sublayers. (A minimal software sketch of the CRC follows this list.)

• Physical Layer: This layer (layer 1) consists of a set of rules that specifies the electrical and physical connection between devices. It is implemented in hardware. It is responsible for converting raw bits into electrical signals and physically transmitting them over a physical medium, such as coaxial cable or optical fiber, between adjacent nodes. It provides standards for the electrical, mechanical, and procedural characteristics required to transmit the bit stream properly. It handles frequency specifications, data encoding, voltage and current levels, cable requirements, connector sizes, shapes, and pin numbers, and so on. RS-232, RS-449, X.21, X.25, V.24, IEEE 802.3, IEEE 802.4, and IEEE 802.5 are examples of physical-layer standards.
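The CRC-based error detection mentioned under the data link layer is easy to demonstrate in software. The following is a minimal Python sketch, not anything prescribed by the OSI documents; the function name is ours, and Python's standard zlib.crc32 is used only as a cross-check. It computes the IEEE 802.3 CRC-32 bit by bit, mirroring the arithmetic a LAN controller performs in a hardware shift register.

    import zlib

    def crc32_bitwise(data: bytes) -> int:
        """Bit-by-bit CRC-32 (IEEE 802.3 polynomial, reflected form 0xEDB88320)."""
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    frame = b"node-to-node payload"
    fcs = crc32_bitwise(frame)        # sender appends this check sequence to the frame
    assert fcs == zlib.crc32(frame)   # cross-check against the library routine
    # The receiver recomputes the CRC over the received frame; any mismatch
    # with the appended check sequence signals a transmission error.

A real data link layer appends the 32-bit result to each outgoing frame and recomputes it on receipt.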



TABLE 17.6.1 Summary of the Functions of OSI Layers

Layer   Name                 Function
7       Application layer    Transfers information between application processes
6       Presentation layer   Syntax conversion, data compression, and encryption
5       Session layer        Establishes connection and manages a dialog
4       Transport layer      Provides end-to-end transfer of data
3       Network layer        End-to-end routing and flow control
2       Data link layer      Medium access, framing, and error control
1       Physical layer       Electrical/mechanical interface

A summary of the functions of the seven layers is presented in Table 17.6.1. The seven layers are often subdivided into two groups. The first consists of the lower three layers (physical, data link, and network) and is known as the communications subnetwork. The upper three layers (session, presentation, and application) are termed the host process; they are usually implemented by networking software on the node. The transport layer is the middle layer, separating the data-communication functions of the lower three layers from the data-processing functions of the upper layers. It is sometimes grouped with the upper layers as part of the host process or with the lower layers as part of data transport.
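The mechanics of layering can be made concrete with a toy sketch in Python; the byte tags below are invented for illustration and correspond to no real protocol format. Data handed down the stack gain one header per layer, and the receiving stack strips the headers in reverse order.

    def encapsulate(data: bytes) -> bytes:
        """Wrap user data in toy transport, network, and data link headers."""
        segment = b"TCP|" + data    # transport layer adds its header
        packet = b"IP|" + segment   # network layer adds addressing
        frame = b"ETH|" + packet    # data link layer frames the packet
        return frame

    def decapsulate(frame: bytes) -> bytes:
        """Strip the toy headers in reverse order, as the receiving stack does."""
        for tag in (b"ETH|", b"IP|", b"TCP|"):
            assert frame.startswith(tag), "malformed frame"
            frame = frame[len(tag):]
        return frame

    assert decapsulate(encapsulate(b"hello")) == b"hello"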

LOCAL AREA NETWORKS

A LAN is a computer network that spans a geographically small area. It consists of two or more computers that are connected together to share expensive resources such as printers, to exchange files, or to allow electronic communications. Most LANs are confined to a single building or campus. They connect workstations, personal computers, printers, and other computer peripherals, and users connected to the LAN can use it to communicate with each other. LANs are capable of transmitting data at very fast rates, much faster than data can be transmitted over a telephone line, but the distances are limited. Also, since all the devices are located within a single establishment, LANs are usually owned and maintained by one organization. A key motivation for using LANs is to increase the productivity and efficiency of workers. LANs differ from MANs and WANs in geographic coverage, data transmission and error rates, topology and data routing techniques, ownership, and sometimes the type of traffic. Unique characteristics that differentiate LANs include:

• LANs generally operate within a few kilometers, spanning only a small geographical area.
• LANs usually have very high bit rates, ranging from 1 Mbps to 10 Gbps.
• LANs have a very low error rate, on the order of 1 in 10^8.
• A LAN is often owned and maintained by a single private company, institution, or organization using the facility.

There are different kinds of LANs. The following features differentiate one LAN from another:

• Topology: The geometric arrangement of devices on the LAN. As shown in Fig. 17.6.2, this can be bus, ring, star, or tree. • Protocols: These are procedures or rules that govern the transfer of information between devices connected to a LAN. Protocols are to computer networks what languages are to humans.


FIGURE 17.6.2 Typical LAN topologies.


• Media: The transmission medium connecting the devices can be twisted-pair wire, coaxial cable, or fiber-optic cable. Wireless LANs use radio waves as the medium. Of all these media, optical fiber is the fastest but the most expensive.

Common LANs include Ethernet, token ring, token bus, and star LAN. For bus or tree LANs, the most common transmission medium is coaxial cable. The two common transmission methods used on coaxial cable are baseband and broadband. A baseband LAN is characterized by the use of digital technology; binary data are inserted onto the cable as a sequence of pulses using a Manchester or differential Manchester encoding scheme. A broadband LAN employs analog signaling and a modem; the frequency spectrum of the cable can be divided into channels using frequency-division multiplexing (FDM). One of the best-known applications of broadband transmission is community antenna television (CATV). However, baseband LANs are more prevalent.

The Institute of Electrical and Electronics Engineers (IEEE) has established the following eight committees to provide standards for LANs:

• IEEE 802.1: standard for LAN/MAN bridging and management
• IEEE 802.2: standard for logical link control protocol
• IEEE 802.3: standard for CSMA/CD protocol
• IEEE 802.4: standard for token bus MAC protocol
• IEEE 802.5: standard for token ring MAC protocol
• IEEE 802.7: standard for broadband LAN
• IEEE 802.10: standard for LAN/MAN security
• IEEE 802.11: standard for wireless LAN

Token ring is a network architecture that uses token-passing technology and a ring-type network structure. Although token ring is standardized in the IEEE 802.5 standard, its use has largely faded to a few organizations.

Ethernet (IEEE 802.3) is the most popular and the least expensive high-speed LAN. Ethernet is a LAN architecture developed by Xerox Corp. in cooperation with DEC and Intel in 1976. The IEEE 802.3 standard refined the Ethernet and made it globally accepted, and Ethernet has since become the most widely deployed LAN in the world. Conventional Ethernet uses a bus or star topology and supports data transfer rates of 10 Mbps. It uses a protocol known as carrier sense multiple access with collision detection (CSMA/CD) as an access method to handle simultaneous demands. Each station or node attached to the Ethernet must sense the medium before transmitting data, to see whether any other station is already sending something. If the medium appears to be idle,


then the station can begin to send data. If two stations sense the medium idle and transmit at the same time, a collision may take place. When such a collision occurs, the two stations stop transmitting, wait, and try again later after a randomly chosen delay period. The delay period is determined using truncated binary exponential backoff, as sketched below.
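The backoff rule itself fits in a few lines. The sketch below is a simplified Python rendering of IEEE 802.3's truncated binary exponential backoff; the constants are the classic 10-Mbps values, and the code is illustrative rather than a normative implementation.

    import random

    SLOT_TIME = 51.2e-6   # one slot = 512 bit times at 10 Mbps, in seconds
    MAX_ATTEMPTS = 16     # the frame is dropped after 16 failed attempts

    def backoff_delay(attempt: int) -> float:
        """Delay before retransmission attempt number `attempt` (1-based)."""
        if attempt > MAX_ATTEMPTS:
            raise RuntimeError("excessive collisions; frame dropped")
        k = min(attempt, 10)              # the exponent is capped at 10
        r = random.randint(0, 2**k - 1)   # choose r uniformly from 0 .. 2^k - 1
        return r * SLOT_TIME              # wait r slot times before retrying

Each successive collision doubles the range from which the random wait is drawn, which is what spreads contending stations apart in time.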

A newer version of Ethernet, called Fast Ethernet (or 100Base-T), supports data transfer rates of 100 Mbps, and Gigabit Ethernet (or 1000Base-T) delivers 1-Gbps speed. A 10-Gbps version of Ethernet was expected to be ready by 2002.

Security is an important issue with LANs, since they are designed to provide access to many users. Network security comprises measures designed to protect LAN users against attacks that originate from the network and from other networks, such as the Internet, connected to it. When individuals send private communications through a LAN, they desire secure communications, yet there are currently no systems in wide use that will keep data secure as they transit a public network. Several methods are used to reduce the exposure. One approach is to encrypt data as they leave one machine and decrypt them at the destination; encryption is the fundamental tool for ensuring security in data networks. Another approach is to regulate which packets can pass between two sites. For example, firewalls are placed between an organization's LAN and the Internet. A firewall is simply a group of components that collectively form a barrier between two networks.

METROPOLITAN AREA NETWORKS

Metropolitan area networks are basically an outgrowth of LANs. A variety of users and applications drive the requirements for MANs, including cost, scalability, security, reliability, compatibility with existing and future networks, and manageability. To meet these requirements, several proposals have been made for MAN protocols and architectures. Of these proposed MANs, the fiber distributed data interface (FDDI) and the distributed queue dual bus (DQDB) have emerged as standards that compete for use as backbones.

FDDI

In the mid-1970s, it was recognized that the existing copper technology would be unsuitable for future communication networks. Optical fibers offer benefits over copper in that they are essentially immune to electromagnetic interference (EMI), have low weight, do not radiate, and reduce electrical safety concerns. FDDI was proposed by the American National Standards Institute (ANSI) as a dual token ring that supports data rates of 100 Mbps and uses optical fiber media. An optical fiber is a thin, flexible glass or plastic structure (or waveguide) through which light is transmitted. The FDDI specification recommends an optical fiber with a core diameter of 62.5 µm and a cladding diameter of 125 µm. There are two types of optical-fiber mode: single mode and multimode. A mode is a discrete optical wave or signal that propagates down the fiber. In a single-mode fiber, only the fundamental mode can propagate; in a multimode fiber, a large number of modes are coupled into the cable, making it suitable for the less costly light-emitting diode (LED) light source. The advantages of fiber optics over electrical media and the inherent advantages of a ring design contribute to the widespread acceptance of FDDI as a standard.

FDDI is a collection of standards formed by the ANSI X3T9.5 task group over a period of 10 years. The standards produced by the task group cover physical hardware, physical and data link protocol layers, and a conformance testing standard. The original standard, known as FDDI-I, provides basic data-only operation; an extended standard, FDDI-II, supports hybrid data and real-time applications. FDDI is a follow-on to IEEE 802.5 (token ring) in that FDDI is based on token-ring mechanics, although the FDDI MAC protocol is similar (but not identical) to token ring. Unlike token ring, FDDI performs all network monitoring and control algorithms in a distributed way among active stations and does not need an active monitor. (Hence the term "distributed" in FDDI.) Whenever any device is down, the other devices reorganize and continue to function; this distributed control covers token initialization, fault recovery, clock synchronization, and topology control.


The key highlights of FDDI are summarized as follows:

• ANSI standard through the X3T9.5 committee
• Dual counter-rotating ring topology for fault tolerance
• Data rate of 100 Mbps
• Total ring loop of up to 100 km
• Maximum of 500 directly attached stations or devices
• 2-km maximum distance between stations
• Variable packet size (4500 bytes maximum)
• 4B/5B data encoding scheme to ensure data integrity
• Shared medium using a timed-token protocol
• Variety of physical media, including fiber and twisted pair
• 62.5/125-µm multimode fiber-optic-based network
• Low bit error rate of 10^-9 (one in one billion)
• Compatibility with IEEE 802 LANs by use of IEEE 802.2 LLC
• Distributed clocking to support a large number of stations
• Support for both synchronous and asynchronous services

FDDI has two types of nodes: stations and concentrators. Stations transmit information to other stations on the ring and receive from them. Concentrators are nodes that provide additional ports for the attachment of stations to the network; a concentrator receives data from the ring and forwards them to each of the connected ports sequentially at 100 Mbps. While a station may have one or more MACs, a concentrator may or may not have a MAC.

As shown in Fig. 17.6.3, each FDDI station is connected to two rings, primary and secondary, simultaneously. Stations have active taps on the ring and operate as repeaters; this allows an FDDI network to span large distances without signal degradation. The network uses its primary ring for data transmission, while the secondary ring can be used either to ensure fault tolerance or for data. When a station or link fails, the primary and secondary rings form a single one-way ring, isolating the fault while maintaining a logical path among users, as shown in Fig. 17.6.4. Thus, FDDI's dual-ring topology and connection management functions establish a fault-tolerance mechanism.

FDDI was developed to conform with the OSI reference model. FDDI divides the physical layer of the OSI reference model into two sublayers, physical medium dependent (PMD) and physical layer protocol (PHY), while the data link layer is split into the media access control (MAC) and IEEE 802.2 LLC sublayers. A comparison of the FDDI architectural model to the lower two layers of the OSI model, along with a summary of the functions of the FDDI standards, is illustrated in Fig. 17.6.5.

The FDDI MAC uses a timed-token rotation (TTR) protocol for controlling access to the medium. With this protocol, the MAC in each station measures the time that has elapsed since the station last received a token, and each station on the FDDI ring uses three timers to regulate its operation; a simplified sketch of the timed-token rule is given below. The station management (SMT) function controls the other three layers (PMD, PHY, and MAC) and ensures proper operation of the station. It handles such functions as initial FDDI ring initialization, station insertion and removal, ring stability, activation, connection management, address administration, scheduling policies, collection of statistics, bandwidth allocation, performance and reliability monitoring, bit-error monitoring, fault detection and isolation, and ring reconfiguration.

Though the original FDDI, described above, provides a bounded delay for synchronous services, the delay can vary, and FDDI was initially envisioned as a data-only LAN. Full integration of isochronous and bursty data traffic is obtained with the enhanced version of the protocol, known as FDDI-II. FDDI-II is described by the hybrid ring control (HRC) standard, which specifies an upward-compatible extension of FDDI; it adds one document, HRC, to the four existing documents that specify the FDDI standard. FDDI-II builds on the original FDDI capabilities and supports integrated voice, video, and data while maintaining the same transmission rate of 100 Mbps, thereby expanding the range of applications of FDDI. FDDI-II supports both packet-switched (synchronous and asynchronous) and circuit-switched (isochronous) traffic. It can connect high-performance workstations, processors, and mass storage systems with bridges, routers, and gateways to other LANs, MANs, and WANs.
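The timed-token rule can be summarized in a short sketch. The Python fragment below is a simplification under the usual textbook assumptions (a single negotiated target token rotation time, TTRT, and a per-station synchronous allocation); the names are our own, and the timers in the actual standard are more involved.

    TTRT = 8.0e-3   # negotiated target token rotation time, seconds (illustrative)

    def transmission_budget(time_since_last_token, sync_allocation):
        """Return (synchronous, asynchronous) transmit budgets on token arrival."""
        if time_since_last_token >= TTRT:
            return sync_allocation, 0.0   # token is late: synchronous traffic only
        # Token is early: the unused portion of the rotation may carry
        # asynchronous (bursty) traffic.
        return sync_allocation, TTRT - time_since_last_token

Stations that see the token return early may send bursty traffic for the unused time; a late token restricts the ring to its guaranteed synchronous allocations, which is how FDDI bounds access delay.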

FIGURE 17.6.3 FDDI rings.

FIGURE 17.6.4 FDDI isolates fault without impairing the network.

FIGURE 17.6.5 Summary of the functions of the FDDI standards.

DQDB and SMDS

The IEEE 802 committee perceived the need for high-speed services over wide areas and formed the IEEE 802.6 MAN committee in 1982. The committee reached a consensus to use DQDB as the standard MAC protocol; a by-product of DQDB is the switched multimegabit data service (SMDS). The DQDB standard is both a protocol and a subnetwork: it is a subnetwork in that it is a component in a collection of networks that together provide a service. The term "distributed-queue dual-bus" refers to the use of a dual-bus topology and a MAC technique based on the maintenance of distributed queues. In other words, each station connected to the subnetwork maintains queues of outstanding requests that determine access to the MAN medium. The DQDB subnetwork provides all stations on the dual bus with knowledge of the frames queued at all other stations, thereby eliminating the possibility of collision and improving data throughput. The DQDB subnetwork has many features, some of which make it attractive for high-speed data services (a sketch of the distributed-queue counters follows the list):

• Shared media: It extends the capabilities of shared-media systems over large geographical areas.
• Dual bus: Its use of two separate buses carrying data simultaneously and independently makes it distinct from IEEE 802 LANs.
• High speed: It operates at a variety of data rates, ranging from 34 to 155 Mbps.
• Compatibility with legacy LANs: It is compatible with IEEE 802.X LAN standards. A DQDB station should recognize the 16-bit and 48-bit addresses used by IEEE 802.X LAN standards, and DQDB is designed to support data traffic under connectionless IEEE 802.2 LLC.
• Fault tolerance: It is tolerant of transmission faults when the system is configured in a loop.
• Congestion control: It is based on a distributed queuing algorithm as a way of resolving congestion.
• Segmentation: Its use of the ATM technique allows long variable-length packets to be segmented into short fixed-length segments. This provides efficient and effective support for small and large packets and for isochronous data.
• Flexibility: It uses a variety of media, including coaxial cable and optical fiber, and it can simultaneously support both circuit-switched and packet-switched services.
• Compatibility: It is compatible with current IEEE 802 LANs and future networks such as BISDN.
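The distributed queue reduces to two counters per station. The Python sketch below models one station's access logic for one bus under the usual textbook simplification (one outstanding segment, a single priority level); the class and method names are our own illustration, not part of the IEEE 802.6 text.

    class DqdbStation:
        """Request (RQ) and countdown (CD) counters for access to bus A."""

        def __init__(self):
            self.rq = 0      # requests outstanding from downstream stations
            self.cd = None   # countdown for our queued segment; None = idle

        def request_seen_on_bus_b(self):
            self.rq += 1     # a downstream station has asked for a slot

        def queue_segment(self):
            self.cd, self.rq = self.rq, 0   # let earlier requests go first

        def empty_slot_on_bus_a(self):
            """Called per passing empty slot; True means we use this slot."""
            if self.cd is None:
                self.rq = max(0, self.rq - 1)   # slot serves a downstream request
                return False
            if self.cd == 0:
                self.cd = None                  # our turn: transmit in this slot
                return True
            self.cd -= 1                        # let the slot pass downstream
            return False

Because every station lets exactly as many empty slots pass as there were earlier requests, the dual bus behaves like a single first-come, first-served queue with no collisions.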

The DQDB network span of about 50 km, transmission rate of about 150 Mbps, and slot size of 53 bytes allow many slots to be in transit between the nodes. DQDB supports different types of traffic, which may be classified into two categories, isochronous and nonisochronous (asynchronous).

The DQDB dual-bus topology is shown in Fig. 17.6.6. As both buses are operational at all times, the capacity of the subnetwork is twice the capacity of each bus. In this network, nodes are connected to two unidirectional buses, which operate independently and propagate in opposite directions; every node is able to send information on one bus and receive on the other. The head station (frame generator) generates a frame every 125 µs to suit digitized-voice requirements. The frames are continuously generated on each bus so that there is never any period of silence on the bus. Each frame is subdivided into equal-sized slots, and the empty slots generated can be written into by other nodes. The end station (slave frame generator) terminates the forward bus, removes all incoming slots, and generates the same slot pattern at the same transmission rate on the opposite bus. The slots are 53 octets long, the same as ATM cells, to make DQDB MANs compatible with BISDN.

FIGURE 17.6.6 Open bus topology of DQDB network.

SMDS represents the first broadband service to make use of the DQDB MAN standard and technologies. The need for a connectionless data service that provides high transmission speed, low delay, and

a simple, efficient protocol adaptation for LAN interconnection led to a connectionless data service known as switched multimegabit data service (SMDS) in the United States and as connectionless broadband data service (CBDS) in Europe. SMDS is the first service offering of DQDB. It is a cell-based, connectionless, packet-switched service that focuses on transmitting data and data only. SMDS was developed by Bell Communications Research (Bellcore), the research arm of the seven Bell regional holding companies, and popularized by the SMDS Interest Group (SIG). SMDS is not a technology but a service. Although a DQDB can be configured as either a loop bus or an open bus, SMDS uses the open bus topology.

SMDS is a connectionless, public, cell-switched data service. The service is connectionless because there is no need to set up a physical or virtual path between two sites. SMDS offers services characteristically equivalent to a LAN MAC; it operates much the same way as a LAN, but over a greater geographical area and with a larger number of users. Compared with competing high-speed technologies, SMDS has no theoretical distance limitation, whereas FDDI's use of tokens limits the perimeter of the FDDI ring to about 60 mi. Moreover, the 100-Mbps data rate of FDDI does not match any of the standardized public transmission rates, whereas SMDS is based on standard public network speeds. FDDI will probably be used for high-speed LANs and complement SMDS rather than compete with it.

WIDE AREA NETWORKS

A WAN is an interconnected network of LANs and MANs. A WAN connects remote LANs and ties remote computers together over long distances. Computers connected to a WAN are often connected through public networks, such as the telephone system; they can also be connected through leased lines or satellites. WANs are, by default, heterogeneous networks that consist of a variety of computers, operating systems, topologies, and protocols. The largest WAN in existence is the Internet. Because of the long distances involved, WANs are usually developed and maintained by a nation's public telecommunication companies (such as AT&T in the United States), which offer various communication services to the public. Today's WANs are designed in the most cost-effective way using optical fiber. Fiber-based WANs are capable of transporting voice, video, and data with no known restriction on bandwidth, and such WANs will remain cutting-edge for years to come. There is also the possibility of connecting networks using wireless technologies.

Circuit and Packet Switching

In a WAN, communication is achieved by transmitting data from the source node to the destination node through a network of intermediate switching nodes. Thus, unlike a LAN, a WAN is a switched network. There are many types of switched networks, but the most common methods of communication are circuit switching and packet switching. Circuit switching is a much older technology than packet switching. Circuit-switching systems are ideal for communications that require data to be transmitted in real time; packet-switching networks are more efficient if some amount of delay is acceptable.

Circuit switching is a communication method in which a dedicated path (channel or circuit) is established for the duration of a transmission. This is a type of point-to-point network connection. A switched circuit is maintained only while the sender and recipient are communicating, as opposed to a dedicated circuit, which is held open whether or not data are being sent. The most common circuit-switching network is the telephone system.

Packet switching is a technique whereby the network routes individual packets of data between different destinations based on addressing within each packet. A packet is a segment of information sent over a network; any message exceeding a network-defined maximum length is broken up into shorter units, known as packets, by the source data terminal equipment (DTE) before it is sent. Each packet is switched and transmitted individually through the network, can follow a different route to its destination, and may arrive out of order. Most modern WAN protocols, such as TCP/IP, X.25, and frame relay, are based on packet-switching technologies. Besides data networks such as the Internet, wireless services such as cellular digital packet data (CDPD) employ packet switching.



X.25

For roughly 20 years, X.25 was the dominant WAN packet-switching technology, until frame relay, SMDS, and ATM appeared. X.25 has been around since the mid-1970s, so it is well debugged and stable; it was originally approved in 1976 and subsequently revised in 1977, 1980, 1984, 1988, 1992, and 1996. It is currently one of the most widely used interfaces for data communication networks, and there are virtually no data errors on modern X.25 networks.

X.25 is a communications packet-switching protocol designed for the exchange of data over a WAN; it is variously regarded as a standard, a network, or an interface protocol. It is a popular standard for packet-switching networks, approved in 1976 by the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) for WAN communications, that defines how connections between user devices and network devices are established and maintained. X.25 uses a connection-oriented service that ensures that packets are transmitted in order. Through statistical multiplexing, X.25 enables multiple users to share bandwidth as it becomes available, ensuring flexible use of network resources among all users. X.25 is also an interface protocol in that it spells out the interface protocols that enable a DTE to communicate with data circuit-terminating equipment (DCE), which provides access to the network. The DTE-DCE link provides full-duplex multiplexing, allowing a virtual circuit to transmit in either direction.

X.25 network devices fall into three general categories: DTE, DCE, and packet-switching exchange (PSE). DTE devices are user end systems that communicate across the X.25 network; they are usually terminals, personal computers, or network hosts, and are located on the premises of individual subscribers. DCE devices are carrier equipment, such as modems and packet switches, that provide the interface between DTE devices and a PSE; they are generally located in the carrier's facilities. PSEs are switches that compose the bulk of the carrier's network; they transfer data from one DTE device to another. Figure 17.6.7 illustrates the relationships between the three types of X.25 network devices.

FIGURE 17.6.7 DTEs, DCEs, and PSEs make up an X.25 network.

The packet assembler/disassembler (PAD) is a device commonly found in X.25 networks. PADs are used when a DTE device is too simple to implement the full X.25 functionality. The PAD is located between a DTE device and a DCE device, and it performs three primary functions: buffering, packet assembly, and packet disassembly. The PAD buffers data sent to or from the DTE device. It also assembles outgoing data into packets and forwards them to the DCE device, which includes adding an X.25 header. Finally, the PAD disassembles incoming packets before forwarding the data to the DTE, which includes removing the X.25 header.


A virtual circuit is a logical connection created to ensure reliable communication between two network devices. Two types of X.25 virtual circuits exist:

• Switched Virtual Circuits (SVCs): SVCs are very much like telephone lines; a connection is established, data are transferred, and then the connection is released. They are temporary connections used for sporadic data transfers.

• Permanent Virtual Circuits (PVCs): A PVC is similar to a leased line in that the connection is always present. PVCs are permanently established connections used for frequent and consistent data transfers, so data may be sent at any time without call setup.

Maximum packet sizes vary from 64 to 4096 bytes, with 128 bytes being the default on most networks. X.25 users are typically large organizations with widely dispersed and communications-intensive operations in sectors such as finance, insurance, transportation, utilities, and retail. For example, X.25 is often chosen for zero-error-tolerance applications by banks involved in large-scale transfers of funds, or by government users that manage electrical power networks.

Frame Relay

Frame relay is a simplified form of packet switching (similar in principle to X.25) in which synchronous frames of data are routed to different destinations depending on header information. It is basically an interface used for WANs, a method of multiplexing traffic to be submitted to a WAN, and it is used to reduce the cost of connecting remote sites in any application that would typically use expensive leased circuits. Carriers build frame relay networks using switches. The physical layout of a sample frame relay network is depicted in Fig. 17.6.8. The CSU/DSU is the channel service unit/data service unit; it provides a "translation" between the telephone company's equipment and the router. The router delivers information to the CSU/DSU over a serial connection, much as a computer uses a modem, only at a much higher speed.

All major carrier networks implement PVCs. These circuits are established via contract with the carrier and typically are billed on a flat-rate basis. Although SVCs have standards support and are provided by the major frame relay backbone switch vendors, they have not been widely implemented in customer equipment or carrier networks.

Two major frame relay devices are frame relay access devices (FRADs), also expanded as frame relay assembler/disassemblers, and routers. Stand-alone FRADs typically connect small remote sites to a limited number of locations. Frame relay routers offer more sophisticated protocol handling than most FRADs; they may be packaged specifically for frame relay use, or they may be general-purpose routers with frame relay software.

FIGURE 17.6.8 Physical layout of a typical frame relay network.


Frame relay is the fastest-growing WAN technology in the United States; in North America it is fast taking on the role that X.25 has had in Europe. It is used by large corporations, government agencies, small businesses, and even Internet service providers (ISPs). The demand for frame relay services is exploding, for two very good reasons: speed and economics. Frame relay is consistently less expensive than equivalent leased services and provides the bandwidth needed for other services such as LAN routing, voice, and fax.

INTERNET

The Internet is a global network of computer networks (or WAN) that exchange information via telephone, cable television, wireless networks, and satellite communication technologies. It is being used by an increasing number of people worldwide, and it has been growing exponentially, with the number of machines connected to the network and the amount of network traffic roughly doubling each year. The Internet today is fundamentally changing our social, political, and economic structures, and in many ways obviating geographic boundaries.

Internet Protocol Suite

The Internet is a combination of networks, including the Arpanet, NSFnet, regional networks such as NYSERNet, local networks at a number of universities and research institutions, and a number of military networks. Each network on the Internet contains anywhere from two to thousands of addressable devices or nodes (computers) connected by communication channels. All computers do not speak the same language, but if they are going to be networked they must share a common set of rules known as protocols. That is where the two most critical protocols, the transmission control protocol and the Internet protocol (TCP/IP), come in. Perhaps the most accurate name for the set of protocols is the Internet protocol suite; TCP and IP are two of the protocols in this suite. TCP/IP is an agreed-upon standard for computer communication over the Internet, implemented in software that runs on each node. TCP/IP is a layered set of protocols developed to allow computers to share resources across a network. Figure 17.6.9 shows the Internet protocol architecture; the figure is by no means exhaustive, but it shows the major protocols and application components common to most commercial TCP/IP software packages and their relationships. As a layered set of protocols, Internet applications generally use four layers:

• Application Layer: This is where application programs that use the Internet reside; it is the layer with which end users normally interact. Application-level protocols in most TCP/IP implementations include FTP, TELNET, and SMTP. For example, FTP (file transfer protocol) allows a user to transfer files to and from computers that are connected to the Internet.

• Transport Layer: It controls the movement of data between nodes. TCP is a connection-based service that provides services needed by many applications, while the user datagram protocol (UDP) provides connectionless service. (A minimal TCP client sketch follows this list.)

• Internet Layer: It handles the addressing and routing of the data and is also responsible for breaking up large messages and reassembling them at the destination. IP provides the basic service of getting datagrams to their destination. The address resolution protocol (ARP) figures out the unique hardware address of a device on the local network from its IP address.

• Network Layer: It manages access to a specific physical medium, such as Ethernet or a point-to-point line, and handles the associated addressing, framing, and congestion control.

FIGURE 17.6.9 Abbreviated Internet protocol suite.
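As a concrete illustration of the application and transport layers, the short Python sketch below sends a message over a TCP connection; the host name is hypothetical, and the exchange assumes a server that echoes the data and then closes. TCP supplies the reliable, ordered byte stream, while IP beneath it routes the individual datagrams.

    import socket

    def send_message(host: str, port: int, payload: bytes) -> bytes:
        """Application layer: write a message, then read the reply over TCP."""
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(payload)            # TCP segments and sequences the bytes
            s.shutdown(socket.SHUT_WR)    # tell the peer we are done sending
            chunks = []
            while True:
                data = s.recv(4096)       # read until the peer closes
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    # reply = send_message("echo.example.com", 7, b"hello")  # hypothetical echo host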


TCP/IP is built on connectionless technology: IP provides a connectionless, unreliable, best-effort packet delivery service in which information is transferred as a sequence of datagrams that the network treats as completely separate. TCP sends datagrams to IP with the Internet address of the computer at the other end; the job of IP is simply to find a route for the datagram and get it to the other end. To allow gateways or other intermediate systems to forward the datagram, IP adds its own header, as shown in Fig. 17.6.10. The main things in this header are the source and destination Internet addresses (32-bit addresses, such as 128.6.4.194), the protocol number, and another checksum. The source Internet address is simply the address of your machine, and the destination Internet address is the address of the other machine. The protocol number tells IP at the other end to send the datagram to TCP; although most IP traffic uses TCP, there are other protocols that can use IP, so one has to tell IP which protocol to send the datagram to. Finally, the checksum allows IP at the other end to verify that the header was not damaged in transit; if the header were corrupted, a datagram could be sent to the wrong place. After IP has tacked on its header, the message looks like what is shown in Fig. 17.6.10.

FIGURE 17.6.10 IP header format (20 bytes).
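The 20-byte header and its checksum can be reproduced directly from the description above. The Python sketch below packs a header (the destination address and most field values are made-up examples) and computes the Internet checksum, the one's-complement sum of 16-bit words defined in RFC 1071; a receiver summing the whole header, checksum included, gets zero if nothing was damaged.

    import struct

    def internet_checksum(header: bytes) -> int:
        """One's-complement sum of 16-bit words (RFC 1071)."""
        words = struct.unpack("!%dH" % (len(header) // 2), header)
        total = sum(words)
        while total >> 16:                        # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    header = struct.pack("!BBHHHBBH4s4s",
                         (4 << 4) | 5,              # version 4, header length 5 words
                         0, 40, 1, 0,               # TOS, total length, ID, flags/offset
                         64, 6, 0,                  # TTL, protocol 6 = TCP, checksum 0
                         bytes([128, 6, 4, 194]),   # source (the address in the text)
                         bytes([10, 0, 0, 1]))      # destination (made-up example)
    header = header[:10] + struct.pack("!H", internet_checksum(header)) + header[12:]
    assert internet_checksum(header) == 0   # the receiver's verification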

Addresses and Addressing Scheme

For IP to work, every computer must have its own number to identify itself; this number is called the IP address. You can think of an IP address as similar to your telephone number or postal address. All IP addresses on a particular LAN must start with the same numbers. In addition, every host and router on the Internet has an address that uniquely identifies it and also denotes the network on which it resides; no two machines can have the same IP address. To avoid addressing conflicts, network numbers are assigned by the InterNIC (formerly known simply as the NIC).

Blocks of IP addresses are assigned to individuals or organizations according to one of three categories: Class A, Class B, or Class C. The network part of the address is common to all machines on a local network. It is similar to a postal zip code that a post office uses to route letters to a general area; the rest of the address on the letter (i.e., the street and house number) is relevant only within that area and is used by the local post office to deliver the letter to its final destination. The host part of the IP address performs this same function. There are five types of IP addresses:

• Class A format: 126 networks with 16 million hosts each; an IP address in this class starts with a number between 0 and 127

• Class B format: 16,382 networks with up to 64K hosts each; an IP address in this class starts with a number between 128 and 191

• Class C format: 2 million networks with 254 hosts each; an IP address in this class starts with a number between 192 and 223

• Class D format: Used for multicasting, in which a datagram is directed to multiple hosts
• Class E format: Reserved for future use

The IP address formats for the three classes are shown in Fig. 17.6.11.

FIGURE 17.6.11 IP address formats.
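The classful scheme above amounts to a test on the first octet. A small Python helper (our own illustration; the Class D and E boundaries, 224 to 239 and 240 to 255, are the standard multicast and reserved ranges) makes the rule explicit:

    def address_class(ip: str) -> str:
        """Classify a dotted-quad IP address by its first octet."""
        first = int(ip.split(".")[0])
        if first <= 127:
            return "A"   # 0-127
        if first <= 191:
            return "B"   # 128-191
        if first <= 223:
            return "C"   # 192-223
        if first <= 239:
            return "D"   # 224-239, multicast
        return "E"       # 240-255, reserved

    assert address_class("128.6.4.194") == "B"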

IPv6

Most of today's Internet uses Internet Protocol Version 4 (IPv4), which is now nearly 25 years old. Because of the phenomenal growth of the Internet, the rapid increase in palmtop computers, and the profusion of smart cellular phones and PDAs, the demand for IP addresses has outstripped the limited supply provided by IPv4. In response to this shortcoming of IPv4, the Internet Engineering Task Force (IETF) approved IPv6 in 1997.



IPv4 will be replaced by Internet Protocol Version 6 (IPv6), which is sometimes called the Next Generation Internet Protocol (IPng). IPv6 adds many improvements and fixes a number of problems in IPv4, such as the limited number of available IPv4 addresses. With only a 32-bit address field, IPv4 can assign only 2^32 different addresses, i.e., about 4.29 billion IP addresses, which are inadequate in view of the rapid proliferation of networks and the two-level structure of the IP addresses (network number and host number). To solve the severe IP address shortage, IPv6 uses 128-bit addresses instead of the 32-bit addresses of IPv4. That means IPv6 can have as many as 2^128 IP addresses, which is roughly 3.4 × 10^38, or about 340 billion billion billion billion, unique addresses.

The IPv6 packet consists of the IPv6 header, routing header, fragment header, authentication header, TCP header, and application data. The IPv6 packet header is of fixed length, whereas the IPv4 header is of variable length. The IPv6 header consists of 40 bytes, as shown in Fig. 17.6.12.

FIGURE 17.6.12 IPv6 header format.

It consists of the following fields:

• Version (4 bits): This is the IP version number, which is 6.
• Priority (4 bits): This field enables a source to identify the priority of each packet relative to other packets from the same source.

• Flow Label (24 bits): The source assigns the flow label to all packets that are part of the same flow. A flow may be a single TCP connection or multiple TCP connections.

• Payload Length (16 bits): This field specifies the length of the remaining part of the packet following the header.

• Next Header (8 bits): This identifies the type of header immediately following the IPv6 header.
• Hop Limit (8 bits): The source sets this field to some desired maximum value, and it denotes the remaining number of hops allowed for the packet. It is decremented by 1 at each node the packet passes, and the packet is discarded when the hop limit reaches zero.
• Source Address (128 bits): The address of the source of the packet.
• Destination Address (128 bits): The address of the recipient of the packet.

There are three types of IPv6 addresses:

1. Unicast identifies a single interface.
2. Anycast identifies a set of interfaces; a source may use an anycast address to contact any one node from a group of nodes.
3. Multicast identifies a set of interfaces such that a packet with a multicast address is delivered to all members of the group.

IPv6 is expected to gradually replace IPv4, with the two coexisting for a number of years during a transition period. IPv6 may be most widely deployed in mobile phones, PDAs, and other wireless terminals in the future. A sketch of packing the 40-byte header follows.
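The fixed 40-byte header is easy to assemble from the field widths just listed. The Python sketch below follows the layout described in the text (4-bit priority and 24-bit flow label; note that the final IPv6 specification, RFC 2460, later redefined these as an 8-bit traffic class and a 20-bit flow label). The function name and example values are ours.

    import struct

    def ipv6_header(priority: int, flow_label: int, payload_len: int,
                    next_header: int, hop_limit: int,
                    src: bytes, dst: bytes) -> bytes:
        """Pack the 40-byte IPv6 header with the field layout given above."""
        assert len(src) == 16 and len(dst) == 16
        first_word = (6 << 28) | ((priority & 0xF) << 24) | (flow_label & 0xFFFFFF)
        return struct.pack("!IHBB", first_word, payload_len,
                           next_header, hop_limit) + src + dst

    hdr = ipv6_header(priority=0, flow_label=42, payload_len=20,
                      next_header=6, hop_limit=64,
                      src=bytes(16), dst=bytes(16))   # all-zero example addresses
    assert len(hdr) == 40   # fixed-length header, as the text notes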


BISDN AND ATM

ISDN is a high-speed communication network that allows voice, data, text, graphics, music, video, and other source material to be transmitted simultaneously across the world using end-to-end digital connectivity. ISDN stands for Integrated Services Digital Network: "digital network" means that the user is given access to a telecom network ensuring high-quality transmission via digital circuits, while "integrated services" refers to the simultaneous transmission of voice and data services over the same wires. This way, computers can connect directly to the telephone network without first converting their signals to analog form using modems. This integration brings with it a host of new capabilities combining voice, data, fax, and sophisticated switching. And because ISDN uses the existing local telephone wiring, it is equally available to home and business customers. ISDN was intended eventually to replace the traditional plain old telephone service (POTS) phone lines with a digital network that would carry voice, data, and video. ISDN service is available today in most major metropolitan areas, and many ISPs now sell ISDN access. In practice, however, the reliance on existing copper wiring limited ISDN's capabilities. When digital video systems started to develop in the 1980s, it was soon noticed that the maximum bandwidth of ISDN (2.048 Mbps) is not enough. That is why broadband ISDN (BISDN) was born.

BISDN is a digital network operating at data rates in excess of 2.048 Mbps, the maximum rate of standard ISDN. BISDN is a second generation of ISDN: not merely an improved ISDN but a complete redesign of the "old" ISDN, now called narrowband ISDN. It consists of ITU-T communication standards designed to handle high-bandwidth applications such as video. The key characteristic of broadband ISDN is that it provides transmission channels capable of supporting rates greater than the primary ISDN rate. Broadband services are aimed at both business applications and residential subscribers. BISDN's foundation is cell switching, and the international standard supporting it is the Asynchronous Transfer Mode (ATM). Because BISDN is a blueprint for ubiquitous worldwide connectivity, standards are of the utmost importance; major strides have been made in this area by the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) during the past decade, and more recently the ATM Forum has advanced that agenda.

ATM is a fast, packet-oriented transfer mode based on asynchronous time-division multiplexing. The words "transfer mode" say that this technology is a specific way of transmitting and switching through the network. The term "asynchronous" refers to the fact that the packets are transmitted asynchronously (e.g., on demand), so the two end points need not have synchronized clocks. ATM supports both circuit-switched and packet-switched services and can handle any kind of information, i.e., voice, data, image, text, and video, in an integrated manner. An ATM network is made up of ATM switches and ATM end points. An ATM switch is responsible for cell transit through an ATM network. An ATM end point (or end system) contains an ATM network interface adapter; examples of ATM end points are workstations, routers, digital service units (DSUs), LAN switches, and video coder-decoders (CODECs).
An ATM network consists of a set of ATM switches interconnected by point-to-point ATM links or interfaces. ATM switches support two primary types of interfaces: the user-network interface (UNI), which connects ATM end systems (such as hosts and routers) to an ATM switch, and the network-network interface (NNI), which connects two ATM switches.

In ATM, the information to be transmitted is divided into short, 53-byte packets, or cells. There are reasons for such a short cell length. First, ATM must deliver real-time service at low bit rates; the small size allows ATM to carry multiple forms of traffic, so that both time-sensitive traffic (voice) and time-insensitive traffic (data) can be carried with the best possible balance between efficiency and packetization delay. Second, short, fixed-length cells allow for time-efficient and cost-effective hardware such as switches and multiplexers. Each ATM cell consists of a 48-byte information field and a 5-byte header. The header is used to identify cells belonging to the same virtual channel and thus is used in routing. The ATM cell structure is shown in Fig. 17.6.13. The cell header comes in two forms: the UNI header and the NNI header. The UNI is the point where the user enters the network; the NNI is the interface between networks. The typical header therefore looks like that shown in Fig. 17.6.14 for the UNI; the header is slightly different for the NNI, as shown in Fig. 17.6.15.


FIGURE 17.6.13 ATM cell structure: a 5-octet header followed by a 48-octet information field.

FIGURE 17.6.14 ATM cell header for UNI.

Octet 1: GFC (4 bits), VPI (4 bits)
Octet 2: VPI (4 bits), VCI (4 bits)
Octet 3: VCI (8 bits)
Octet 4: VCI (4 bits), PT (3 bits), CLP (1 bit)
Octet 5: HEC (8 bits)

GFC = generic flow control; VPI = virtual path identifier; VCI = virtual channel identifier; PT = payload type; CLP = cell loss priority; HEC = header error control.

FIGURE 17.6.15 ATM cell header for NNI. Fields as in Fig. 17.6.14, except that the GFC field is absent and the VPI field is correspondingly larger.


FIGURE 17.6.16 Relationship between virtual channel, virtual path, and transmission path: multiple virtual channels (VCs) are carried within virtual paths (VPs), and multiple virtual paths are carried within one transmission path.

A virtual channel is established at connection time and torn down at termination time. The establishment of a connection includes the allocation of a virtual channel identifier and/or virtual path identifier (VPI) and also the allocation of the required resources on the user access and inside the network. These resources, expressed in terms of throughput and quality of service (QoS), can be negotiated between user and network either before call setup or during the call. Having both virtual paths and virtual channels makes it easy for the switch to handle many connections with the same origin and destination. ATM can be used over existing twisted-pair, fiber-optic, coaxial, and hybrid fiber/coax (HFC) media, as well as over SONET/SDH, T1, E1, T3, E3, E4, and so on, for LAN and WAN communications. ATM is also compatible with wireless and satellite communications.

Figure 17.6.17 depicts the architecture of the BISDN protocol, which uses a three-plane approach. The user plane (U-plane) is responsible for user information transfer, including flow control and error control; the U-plane contains all of the ATM layers. The control plane (C-plane) manages the call-control and connection-control functions; the C-plane shares the physical and ATM layers with the U-plane and contains ATM adaptation layer (AAL) functions dealing with signaling. The management plane (M-plane) includes plane management and layer management. This plane provides the management functions and the capability to transfer information between the C- and U-planes. The layer management performs layer-specific management functions, while the plane management deals with the complete system.

FIGURE 17.6.17 BISDN protocol reference model.


Figure 17.6.17 also shows how ATM fits into BISDN. The ATM system is divided into three functional layers, namely, the physical layer, the ATM layer, and the ATM adaptation layer. BISDN access can be based on a single optical fiber per customer site. A variety of interactive and distribution broadband services is contemplated for BISDN: high-speed data transmission, broadband video telephony, corporate videoconferencing, video surveillance, high-speed file transfer, TV distribution (with existing TV and/or high-definition television), video on demand, LAN interconnection, hi-fi audio distribution, and so forth.


CHAPTER 17.7

TERMINAL EQUIPMENT
C. A. Tenorio, E. W. Underhill, J. C. Baumhauer, Jr., L. A. Marcus, D. R. Means, P. J. Yankura, Herbert M. Zydney, R. M. Sachs, W. J. Lawless

TELEPHONES
C. A. Tenorio, E. W. Underhill, J. C. Baumhauer, Jr., L. A. Marcus, D. R. Means, P. J. Yankura

Telephone equipment ranges from the familiar desk or wall telephone set to the versatile communications system terminal of the information age. Telecommunications has merged with computer technologies in telephones to make available the entire spectrum of voice, data, video, and graphics. Terminal equipment now allows the exchange of this information over the telephone network.

The Telephone Set

The basic functions of the telephone set include signaling, alerting, central-office supervision, and transmission of voice communications. In a typical call sequence, when the caller (the near-end party) picks up the handset, the telephone draws loop current (the telephone line is known as a loop) from the central-office battery, which signals the central office (CO) that service is requested. The loop current also provides power for telephone functions. The caller then dials, sending address signals to the central office by either pulse or tone dialing. The CO collects the address signals in registers and sets up a transmission path with the CO for the number being called. The called CO sends an alerting signal to the called telephone, causing it to ring. When the called (far-end) party picks up the handset, loop current is drawn, signaling the CO to trip (interrupt) ringing and complete the talking circuit.

The functional elements of traditional telephones (Fig. 17.7.1) include a carbon transmitter to convert acoustic energy to an electrical voice signal, an electromagnetic receiver to convert the electrical voice signal back into acoustic energy, a switch hook to turn the telephone on and off, rotary dial contacts that make and break loop current, a loop-equalizer circuit to compensate for loop resistance, a balance circuit, a hybrid transformer for coupling the transmitter and receiver to the telephone line, and an electromechanical ringer. The two-wire telephone line connections are known as Tip and Ring. The loop equalizer, balance circuit, and hybrid transformer are collectively known as the speech network. Such traditional speech networks are called passive networks; electronic speech networks using solid-state components are called active networks. The ringer is shown bridged across the telephone line. The capacitor C1 blocks the flow of loop current through the ringer. Resistor R1 and varistor V1 constitute the loop-equalizer circuit. On long loops with low loop current, varistor V1 maintains a high resistance and takes little current away from the rest of the speech network. On short loops, higher levels of loop current result in a lower resistance of V1, thereby reducing the transmit and receive levels.


FIGURE 17.7.1 Traditional passive network telephone set.

The combination of a three-winding hybrid transformer and impedance-balancing circuitry provides the means of coupling the transmitter and receiver to the loop independently. This is called an antisidetone network. Sidetone is that portion of the transmitted signal that is heard in the receiver while talking. Sidetone is subjectively desirable because it provides the live quality of face-to-face conversation. The antisidetone network is designed to provide a sidetone signal at about the same level as received speech. If the sidetone level is too high, the talker tends to speak softly to keep the sidetone level pleasant, which results in signal strength too low for good transmission. If the sidetone level is too low, the talker perceives the telephone as dead or inoperative.

Incoming voice signals from the telephone loop are transformer-coupled to the receiver. The induced voltages are such that most of the incoming signal power is delivered to the receiver, with little power to the balance network. Outgoing voice signals generated by the transmitter induce voltages in two of the transformer windings that cancel each other, so that most of the signal power is divided between the balance-circuit resistor R2 and the loop impedance, with little to the receiver. The choice of impedances and turns ratios provides a compromise between sidetone balance and impedance matching to the telephone line. Capacitor C2 prevents dc power from being dissipated in R2. Varistor V2 helps match the balance-circuit impedance to the loop impedance.

The main advantages of an active network over a passive one are smaller physical volume, lower cost, and greater versatility. An active network also provides power gain, thus allowing the use of microphones such as electrets. In the active network the gain of the transmit and receive amplifiers can be automatically adjusted, depending on the loop current, to provide loop equalization. A basic active network is shown in Fig. 17.7.2. The base of Q1 is returned to common (at voice frequencies) by capacitor C1. The emitter of Q1 is at virtual ground, since its low base impedance is divided by the transistor's beta. A received signal appearing on the telephone line is routed to the receiver through the voltage divider consisting of R2 and R3; R2 is connected to a virtual ground. The other end of the receiver is returned to common through the low output impedance of the transmit amplifier. The transmit signal is first amplified by the amplifier and further amplified by transistor Q1. The voltage gain of this common-base stage is determined by the input impedance of the telephone line and the impedance of R1 in parallel with C2. The antisidetone balance is achieved by adjusting the voltage divider (R2 and R3) to compensate for the gain of the common-base stage, leaving about the same potential at both ends of the receiver. Capacitor C2 is added to minimize any phase shift through this stage caused by the capacitance of the telephone line.

FIGURE 17.7.2 Typical active-network circuit.


FIGURE 17.7.3 Loop simulator.

Transmission

Terminal equipment is part of a transmission and signaling circuit set up by a network provider for voice-frequency transmission in the range of 300 to 3300 Hz. Four important characteristics a telephone must have in order to work properly in this circuit are dc resistance (for dc powering and loop supervision), ac impedance at 1000 Hz, signal power level at both the receiver and transmitter, and audio frequency response. A simple loop simulator is shown in Fig. 17.7.3. A ring generator (86 V, 20 Hz, 400 Ω), present only during ringing, is not shown. DC power is provided by a nominal 48-V battery. Loop current is limited by the resistance of relay coils or current-limiting circuits in the central office, and by the resistance of the loop itself. Maximum loop resistance is 1300 Ω, which corresponds to 15,000 ft of 26 AWG cable. With a 300-Ω telephone, loop current is about 20 to 80 mA. At 1000 Hz the transmission cable has a characteristic impedance of 600 Ω, which the telephone should match for maximum energy transfer and minimum echo.

Typical signal levels are shown in Fig. 17.7.4. Desirable sound power at the receiver was determined by subjective testing. The transmitter, while converting acoustic power into electrical energy, must also amplify the signal by about 20 dB to compensate for the 20-dB loss in converting the electrical energy back into acoustic power. The loop provides an attenuation of 2.8 dB/mi for 26 AWG cable. Today, virtually all trunks between central offices are digital, so there is no transmission loss between them. Telephone receivers and transmitters are designed to achieve desired frequency characteristics. For example, telephone handset microphones have a rising response characteristic near 3 kHz to compensate for capacitive shunting loss in the loop and to simulate the effects of acoustic diffraction about the human head that is present in face-to-face conversation.
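The 20- to 80-mA range quoted above follows from Ohm's law applied to the loop of Fig. 17.7.3. A minimal sketch, assuming a typical 400-Ω central-office battery-feed resistance (a value not given in the text):

```python
# Rough check of the 20-80 mA loop-current range. The 400-ohm central-office
# battery-feed resistance is an assumed, typical value; battery voltage, loop
# resistance, and telephone resistance come from the text.

BATTERY_V = 48.0
CO_FEED_OHMS = 400.0      # assumption: typical battery-feed (relay coil) resistance
PHONE_OHMS = 300.0        # telephone set dc resistance, from the text

def loop_current_ma(loop_ohms: float) -> float:
    return 1e3 * BATTERY_V / (CO_FEED_OHMS + loop_ohms + PHONE_OHMS)

print(f"short loop (0 ohms):      {loop_current_ma(0):.0f} mA")     # ~69 mA
print(f"maximum loop (1300 ohms): {loop_current_ma(1300):.0f} mA")  # ~24 mA
```

Both extremes fall inside the quoted 20- to 80-mA range.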

FIGURE 17.7.4 Transmission circuit signal levels.


Multifunction Electronic Telephones

Most new telephones are electronic. Conventional components, such as the bell ringer, the hybrid transformer, and the mechanical dial (rotary or push button), are replaced by active electronic devices incorporated into large-scale integrated (LSI) circuits. A typical dial-pulse electronic telephone set contains a tone ringer, an active network, an electronic dial keypad, a dial-pulsing transistor, and a low-power microphone. Several advantages result: the overall reliability of the telephone increases, automated manufacturing assembly is possible, telephone-set weight and size are reduced, and overall transmission performance of the telephone is improved.

A diagram of a typical microcomputer-controlled multifunction electronic telephone is given in Fig. 17.7.5. Telephone features include last-number redial, repertory dialing, dial-tone detectors, speakerphone, integrated answer/record, hold, conference (for two-line telephones), and display of the dialed digits. Repertory dialing permits the user to store several telephone numbers in an auxiliary memory for automatic dialing. The dial circuit can produce pulse dialing or dual-tone multifrequency (DTMF) tones. The architecture often includes both general-purpose and custom LSI circuits, such as DTMF generator chips, clock (timer) chips, and display-driver chips, or it may contain one very large-scale integrated (VLSI) circuit. The microcomputer controls the operation of the various LSI circuits. It receives information from the ringing detector, the dial keypad, function buttons, the electronic line switch, and the active network, and controls such items as the tone generator (ringer), the integrated answer/record system, the dial circuits, the display, and the speakerphone.

Electronic logic performs a variety of common switching functions, such as switching out the transmitter and lowering the gain of the receiver during dialing, functions performed by mechanical switches in traditional telephone sets. The line switch may also be electronic. The switch hook, rather than closing line-circuit contacts when the handset is lifted, turns on a solid-state line switch. The user can also turn the telephone on and off electronically without having to lift the handset. Speakerphone, answer/record, and hold operations are common uses for an electronic line switch.

The medium for message storage is either tape or solid-state electronic memory. Audio storage on tape uses standard tape-deck recording and playback techniques. Storage in solid-state memory requires conversion to a digital format with the use of a CODEC. The digital data is stored and retrieved under the control of a microprocessor. To further conserve relatively expensive memory, a DSP can be used to process the data: dead time is removed, and various compression algorithms are applied. Here a trade-off is made between the amount of memory needed and the quality of the speech desired.

FIGURE 17.7.5 Multifunction electronic telephone.


Speakerphones must pick up much lower speech signals than a handset transmitter and must generate more acoustic energy than a handset receiver can, so they often require more power than can be drawn over the telephone line. To prevent feedback from the loudspeaker to the microphone, the loudspeaker is muted when speech (or noise) is detected. Because the microphone must be very sensitive to pick up ordinary conversation, it also picks up room reverberation, which causes speech to sound as if the speaker is in a tin can. Highly directional microphones can minimize unwanted echo and produce more natural speech.

Cordless Telephones

In a cordless telephone, the usual telephone functions are performed over a radio link, thereby eliminating the handset cord and providing the user with added mobility. A cordless telephone block diagram (Fig. 17.7.6) shows a portable handset unit, used for talking, listening, dialing, and ringing, and a fixed base unit, used for interfacing between the telephone line and the radio link. More sophisticated applications include units with duplex intercoms and base units with integrated speakerphones or telephone answer/record devices.

FIGURE 17.7.6 Cordless telephone: (a) base unit and (b) handset unit.


The normal listening and talking signals, as well as ringing and DTMF signaling, are transmitted over a radio-frequency link as a frequency-modulated carrier. This carrier is centered in one of ten 20-kHz-wide channels, with the base-to-handset channels in the 46.6- to 47-MHz band and the handset-to-base channels in the 49.6- to 50-MHz band. Before 1983 the base-to-handset link used the 1.6- to 1.7-MHz band, and there were only five FCC-allocated channels. As cordless telephones became more popular, interference between people using the same channel became common. To minimize the probability of hearing conversations from nearby cordless telephones on the same channel, the FCC limits the maximum radiated field strength to 10,000 μV/m at 3 m, which allows satisfactory cordless telephone performance up to 1000 ft from the base station under ideal conditions. To minimize interference, some telephones scan the available channels and choose any vacant channel found. Typical cordless telephones employ full-duplex signaling between handset and base using frequency-shift-keying (FSK) modulation of the carrier. By embedding a digital code in all exchanges of information between handset and base, it is possible to virtually eliminate false operations and false ringing. A security code is also embedded to prevent access to the base unit by a neighbor's handset.

Increasing user traffic in the 46/49-MHz frequency bands has caused renewed congestion. The FCC has therefore allowed use of one of the commercial bands, with frequencies ranging from 902 to 928 MHz, using either analog or digital modulation schemes. Units using digital communication provide a higher degree of security because they do not allow simple FM demodulation by scanners or FM receivers, thereby preventing eavesdropping. This band also has less congestion and allows better RF propagation throughout the usable area, decreasing noise and interference.

Video Telephones

New video telephones provide motion video in color over the public switched telephone network. Advancements in audio and video compression algorithms have made this possible; see Early et al. (1993). Previous video systems required a special transmission line, making them suitable only for business users. The video telephone first establishes a talking connection with another video telephone; video mode is then entered by pressing a "video" button on each telephone. In video mode, compressed audio and video signals are transmitted between the two telephones. The full-duplex DSP-based modem uses a four-dimensional, 16-state trellis code and supports data rates of 19.2 and 16.8 kb/s. The 19.2-kb/s mode transmits 6 bits per baud at 3200 baud, and the 16.8-kb/s mode transmits 6 bits per baud at 2800 baud. The modem automatically drops back to the slower speed if the error rate indicates that the connection will not support the higher speed. The speech compression is achieved using a code-excited linear prediction (CELP) algorithm enhanced by incorporating fractional-pitch prediction and constrained stochastic excitation. The speech encoder operates at 6.8 kb/s. The video signal is preprocessed to produce three separate video frames. One frame contains the luminance information with a resolution of 128 pixels by 112 lines; the other two frames contain chrominance information with resolutions of 32 pixels by 28 lines. The frames are each segmented into 16-by-16 blocks for processing.
A motion estimator compares the block being processed with blocks from the previous frame and generates an error signal based on the best match. This signal is converted to the frequency domain using the discrete cosine transform. The transformed signal is quantized, encoded, and sent to the output buffer for transmission. The transmitted frame rate is 10 frames per second, using an overlay scan instead of a TV raster scan. If there is a large amount of motion, the frame rate is reduced so that the output buffer does not overflow. A button on the telephone can be used to adjust the frame rate transmitted from the distant set; however, a higher frame rate decreases the resolution of the received image. A self-view mode is also provided to allow the near-end party to view the image that is being transmitted to the far-end party.

Bell and Tone Ringers

In traditional telephones an electromechanical bell ringer is used to alert the customer to an incoming call. The typical ringer has two bells of different pitch that produce a distinctive sound when struck by a clapper driven by a moving-armature motor. The ringer coil inductance in series with a capacitor resonates at 20 Hz to provide a low-impedance path for the 20-Hz ringing signal. The high inductance of the ringer coil prevents loading of the speech network or DTMF generator when the handset is off-hook.


Other ringer connections are used when customers are on party lines and must be rung selectively. Selective ringing schemes include the connection of the ringer between the tip or ring conductors and ground, or ringers tuned to different ringing frequencies (16 to 68 Hz). Electronic tone ringers are used in most new telephones. A resistor-capacitor circuit is bridged across the telephone line to provide the proper input impedance (defined by FCC rules) for a ring-detect chip. Tone ringers can have equivalent effectiveness and acceptability to the customer when compared with bell ringers if acoustic spectral content and loudness are adequate. Typically, the tone ringer consists of a detector circuit, which distinguishes between valid ringing signals and transients on the telephone line, and a tone generator and amplifier circuit that drives an efficient electroacoustic transducer. The transducer may be a small ordinary loudspeaker, a modified telephone receiver, or a special piezoelectric-driven sounder (see Fig. 17.7.10).

Tone and Pulse Dialers

Dial-pulse signaling interrupts the telephone line current with a series of breaks. The number of breaks in a string represents the number being dialed; one break is a 1 and 10 breaks is a 0. These breaks occur at a nominal rate of 10 pulses per second, with 600 ms between pulse trains. The ratio of the time the line current is broken to the total cycle time (percent break) is nominally 61 percent. Dial-pulse signaling can be used with all central offices. The mechanical rotary dial in the traditional telephone uses a single cam to open and close the dial contacts. The dial is driven by a spring motor that is wound up by the user as each digit is dialed. The return motion is controlled by a speed-governor mechanism to maintain the proper dial-pulsing rate.

DTMF signaling consists of sending simultaneously two audio frequencies of at least 50-ms duration representing a single number, separated by at least 45-ms intervals between numbers. On the standard 4-by-3 dial format, each column and each row is associated with a different frequency, as shown in Fig. 17.7.7. This method of signaling permits faster dialing for the user and more efficient use of the switching systems. Since the frequencies are in the audio band, they can be transmitted throughout the telephone network. Pushbutton dials originally used for DTMF signaling were laid out in a rectangular format to accommodate the cranks and levers necessary to operate the mechanical switches. With modern electronic pushbutton dials any layout can be used, but the 4-by-3 format is still popular for its dialing speed.

Pushbutton dials can perform the dial-pulse function electronically. These electronic "rotary dials" interrupt the line current with transistors or relays. Since the user can enter a number into the dial faster than the number can be pulsed out, a first-in, first-out memory is used to store the number as it is dialed. The dial-pulse timing is generated using an internal clock.

Several methods have been used to generate DTMF signals. Early methods used an inductor-capacitor oscillator with two independently tuned transformers; different values of inductance are switched into the circuit to obtain different frequencies. Another method is a resistor-capacitor oscillator, in which a twin-tee notch filter in the feedback loop of a high-gain amplifier gives the desired frequency.

FIGURE 17.7.7 Basic arrangement of pushbuttons for dual-frequency dialing.
Two amplifier-filter units are used, one for each frequency group. A more modern method of generating DTMF signals uses CMOS integrated circuits employing digital-synthesis techniques (Fig. 17.7.8). The keypad information is combined with a master clock to generate the desired frequency. This information is fed to a D/A converter, whose output is a stair-step waveform. The waveform is filtered and fed to a driver circuit, which provides the desired sine-wave frequency signals to the telephone line. The latest method of generating DTMF signals is in software. If a digital speech processor is available in the product, a subroutine can be written to generate the appropriate DTMF waveform. The output is a digital word that is periodically fed to a CODEC for conversion to an analog signal, which in turn drives a buffer amplifier connected to the telephone line.
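The software method just described reduces to summing two sine waves per key. A minimal sketch, using the standard 4-by-3 keypad frequencies of Fig. 17.7.7 and an 8-kHz sampling rate (the standard telephony CODEC rate, assumed here for illustration):

```python
# Software DTMF generation: for each key, sum the row and column sine waves
# and emit samples that a CODEC would convert to the line signal.

import math

ROW_HZ = {"1": 697, "2": 697, "3": 697,
          "4": 770, "5": 770, "6": 770,
          "7": 852, "8": 852, "9": 852,
          "*": 941, "0": 941, "#": 941}
COL_HZ = {"1": 1209, "2": 1336, "3": 1477,
          "4": 1209, "5": 1336, "6": 1477,
          "7": 1209, "8": 1336, "9": 1477,
          "*": 1209, "0": 1336, "#": 1477}

def dtmf_samples(key: str, duration_s: float = 0.05, rate_hz: int = 8000):
    """Yield one tone burst (>= 50 ms, per the text) as floats in [-1, 1]."""
    f_low, f_high = ROW_HZ[key], COL_HZ[key]
    for n in range(int(duration_s * rate_hz)):
        t = n / rate_hz
        yield 0.5 * (math.sin(2 * math.pi * f_low * t) +
                     math.sin(2 * math.pi * f_high * t))

burst = list(dtmf_samples("5"))
print(len(burst), "samples for digit 5 (770 Hz + 1336 Hz)")
```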


FIGURE 17.7.8 Digital-synthesis circuit.

To combine the versatility of both pulse and DTMF signaling, many telephones have the ability to switch between the two systems. This permits the user to employ pulse dialing for making the telephone call and then switch to DTMF for end-to-end signaling for services such as bank-by-phone.

Microphones

The granular carbon microphone, often called a transmitter in telephony, dates back over 100 years to the birth of telephony. Sound striking the diaphragm imparts a pressure fluctuation on the carbon aggregate (Fig. 17.7.9a). Since granule contact force and dc resistance R0 are inversely related, a modulation of the telephone loop current I0 results. Carbon transmitters offer 20 to 25 dB of inherent signal-power gain and have a nonlinear input/output relationship that advantageously favors speech over low-level background noise, but their low resistance consumes loop power. Electronic telephones require a microphone that consumes much less power. The electret microphone is a small capacitive microphone that is widely used today. It has low sensitivity to mechanical vibrations, a low power requirement, and high reliability. An electret has no inherent gain, so it requires a preamplifier. Its effective bias voltage V0 depends on the polymer diaphragm's trapped electret charge (Fig. 17.7.9b). The piezoelectric ceramic unimorph element (Fig. 17.7.10) is used as a microphone in cordless and cellular handsets. Its piezoelectric activity is due to an electrically polarized, synthetic (as opposed to natural crystal) ferroelectric ceramic. The flexural neutral axis of the structural composite is not at the ceramic's midplane; thus, vibration results in variation of the ceramic's diameter, which induces a voltage across its thickness as defined through the piezoelectric constant d31.

Receivers

The receiver converts the electrical voice signal back into acoustic sound pressure. One of two designs is typically used.

FIGURE 17.7.9 Microphones: (a) Carbon and (b) electret.

FIGURE 17.7.10 Piezoelectric ceramic unimorph.

The electromagnetic (moving-iron) receiver uses voice-coil currents to modulate the dc flux, which produces a variable force on the iron armature (Fig. 17.7.11a). The electrodynamic (moving-coil) receiver uses coil current perpendicular to the dc magnetic field to generate an axial force on the movable coil (Fig. 17.7.11b). The constant (dc) reluctance of the coil air gap results in less distortion than in the electromagnetic receiver. The moving-coil unit has a lower, more nearly resistive impedance compared with the moving-iron design. Both of these permanent-magnet devices can be used as microphones, since their input/output relationship is reversible. Certain hearing aids can inductively couple to the leakage flux from receivers that generate an external magnetic field. Some receiver types (e.g., piezoelectric) must have an induction coil added to the design to provide an adequate magnetic field. Most telephones are required to provide a magnetic field (see Public Law 100-394, 1988) in order to provide the hearing impaired with access to the telephone network.

Handsets

The handset holds the microphone and receiver. It may also contain a line switch, dial keypad, and other circuits to make it a complete one-piece telephone. The handset positions the transmitter in the proper location with respect to the mouth when the receiver is held on the ear. Standard dimensions for the relative mouth and ear locations have been defined (Fig. 17.7.12) for a head that is statistically modal for the population. The handset should provide an acoustic seal to the ear, provide proper acoustic coupling for its transmitter and receiver, and be heavy enough to operate the telephone switch hook when placed on it. Handsets for hearing-impaired users may also contain an amplifier and a volume control.

Protection

The user must be protected against contact with ringing voltages, lightning surges, and test signals applied to the telephone line. Telephone service personnel are trained to work on live telephone lines, but users are not. Lightning surges induced onto the telephone line might be 1000 V peak but are of limited energy (Fig. 17.7.13); see Carroll and Miller (1980). Test signals applied from the central office can be up to 202 V dc. Telephone cables, often strung on the same poles as power cables, are subject to power crosses if a power cable breaks (as in a storm) and falls on the telephone cables.

FIGURE 17.7.11 Receivers: (a) central armature magnetic (moving armature) and (b) dynamic (moving coil).


FIGURE 17.7.12 Handset modal dimensions.

The telephone company installs primary protectors at building entrances to shunt voltages from power crosses and direct lightning strikes to ground. Extraneous signals can cause very high acoustic output from the receiver. Either a varistor (V3 in Fig. 17.7.1) placed directly across the receiver terminals or the telephone's circuit design is used to limit the maximum acoustic output to 125 dBA. Finally, the telephone network itself needs protection from excessive signal power, fault voltages, and other disturbances caused by terminal equipment. Requirements for telephone network protection are contained in FCC Rules and Regulations, Part 68.

TELEPHONES WITH ADDED FUNCTIONS
Herbert M. Zydney, R. M. Sachs

Key Telephone Sets

Key telephones are designed for users who need access to more than one central-office (CO) line or PBX extension. In almost all cases, this is accomplished by the addition of illuminated keys (hence the term "key" telephone) to a telephone instrument.

FIGURE 17.7.13 Induced lightning energy distribution.


These keys are arranged to correspond to the lines or extensions and are generally illuminated to identify their status, either directly or by adjacent LEDs or LCDs. The keys can be operated independently to select the desired line. To permit switching between two calls, a hold key is provided so that the active call can be maintained in a "hold" state before the key associated with a different call is operated. To alert the user, a common audible ringer is provided, which sounds when a new call occurs on any line. Where the telephone is used as part of a self-contained system, a key is also included so that internal, or "intercom," calls can be made between individual key telephone sets.

As technology has evolved, more features have been included in key telephones that improve the efficiency of telephone system operation. Because these newer systems are software controlled, a number of the features are not fully resident in the telephone itself but depend on a distributed architecture to implement them fully. Examples include:

(a) Memory Dialing: Prestored telephone numbers can be dialed either at the touch of one button or by an abbreviated code from the dial pad.

(b) Speaker/Microphone Services: A loudspeaker powered at the station can support hands-free dialing or, when connected to the telephone channel, permit hands-free intercom services or speakerphone operation.

(c) Display-Based Features: The most advanced key telephone sets offer a display of either numeric or alphanumeric characters, sometimes augmented by graphic symbols. In conjunction with system software support, these permit users to identify who is calling them, determine the status of other telephones in the same system, and retrieve messages such as the intercom extensions of unanswered calls.

The technology used to implement key telephones presently in use is quite varied. Four categories are worth singling out.

Electromechanical Key Telephone Sets. These early-style telephones generally use electromechanical keys with a relatively large number of electrical contacts for most functions. Individual CO lines and control leads are brought to each telephone, and the actual switching of lines occurs within the telephone set. Additional wires provide control of illumination and ringing. These sets rely on external line-holding relay circuits and power for their operation, although standard telephone operation is possible on a stand-alone basis. A common hold button activates the external relay holding bridge for all telephones and is released when any other telephone activates its line key. Many key stations are arranged to offer visual and audible signals for status identification, using varying rates of interruption for different states. For ease of installation, dedicated wires are usually grouped into 25-pair (50-conductor) cables with a standard connector. Some of the larger telephones, with dozens of line keys, can have four or more such connectors.

Key Telephone Sets with Internal Control. By adding electronic circuitry within the key telephone, many of the functions accomplished by the external relay circuitry can occur internally. Every CO line is brought to each telephone. Internally, ringing signals can be sensed and holding bridges can be applied. Complementary electronics in other telephones can sense the holding bridges and provide the necessary visual signals for multiple line states and access.
Because of limitations associated with the power and sensitivity of CO lines, these key sets generally are limited to two or, at the most, four lines. External power, often rectified in the set, is required for the larger systems; smaller systems can operate from the power provided over the CO loops.

Electronic Key Telephone Sets. Electronic sets (see Silverio et al., 1985) differ from the first two classes because the CO lines terminate in a centralized switch (often called a key service unit or control unit) rather than in the telephone itself. One, or sometimes two, voice paths are terminated in the telephone itself. A separate digital data link between the telephone and the control unit is used to exchange information, including what keys have been pressed and what visual and audible status information is to be presented to the user. There is little standardization in the functional definition of the conductor pairs. The voice pair generally operates at 600 Ω or 900 Ω, although the actual transmission levels may differ from standard loop levels. The digital data link may be provided on either one or two pairs and can operate from less than 1 to over 200 kb/s. Power is derived in a number of ways: the simplest is to dedicate one pair to power; alternatively, the power may be impressed on a center tap of paired transformers carrying either voice or data and removed and filtered in the telephone; lastly, the digital signals can be shaped so that they have no dc component, and the power may then be sent along with the data. Voltages of +24, +36, and -48 V are commonly used.


The basis of operation within the telephone is to scan input stimuli such as keys in real time, format messages, and then send them serially to the key service unit. Messages from the key service unit are demultiplexed and are used either to drive audible ringers or to flash lights. Advanced systems embed a cyclic redundancy check with each message to minimize false operations. Somewhat more complex message structures are involved where displays are to be updated. This approach minimizes the requirement for changes within the telephone to customize it to the user's needs. For example, software in the key service unit is responsible for determining the meaning of most keys on the telephone sets; if it is changed, only the set labels are varied. The circuitry for operating the set is implemented in a number of ways. Small 4- and 8-bit microprocessors are common, although the smallest sets use custom VLSI for their operation. Wiring is reduced to as few as two pairs, although up to four pairs are used where a second voice path is required. Depending on the speed of transmission, twisted wire pairs without bridge taps are usually required, which makes these sets of limited use in residential locations unless the locations are rewired.

Digital Key Telephones. The most technologically advanced key telephones send and receive voice signals in digitally encoded format; see Murato et al. (1986). The standard network encoding of 64 kb/s in mu-law format is most common. The telephone contains a compatible codec that converts between analog and digital formats. As with the electronic key telephones, many designs have nonstandard interfaces. The most basic digital key telephones combine the digital data stream and the encoded voice stream on two pairs. More advanced telephones add an additional channel for binary data at speeds from 9.6 to 64 kb/s. As an option, or at times built in, this channel appears at a standard EIA RS-232 interface, which can directly interface with compatible terminals or desktop computers. The key service unit can switch this data channel independently of the voice channel. The conductor formats vary. Some systems use two pairs, each carrying the combined voice and other signals in different directions at speeds up to about 200 kb/s. Other systems use just a single pair at speeds approximating 500 kb/s. These operate by sending information to the telephone preceded by a flag bit; the telephone synchronizes to this data stream and, at its conclusion, replies with a return message. This format is defined as time-compression multiplex in transmission terminology; informally, it is referred to as "ping-pong" because the signals go back and forth constantly. In recent years, international standards bodies have defined a new network standard, ISDN (integrated services digital network). The protocols formulated for this service network are broad enough to embrace the needs of digital key telephones. The implementation cost of this new standard has become low enough, and its flexibility great enough, that these protocols are now appearing in digital key telephone sets.
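The cyclic redundancy check that these sets embed in each data-link message can be illustrated with a short sketch. The CRC-8 polynomial and the message bytes below are illustrative assumptions; actual key-system data links use proprietary formats.

```python
# Sketch of CRC-protected messaging on a set-to-control-unit data link.
# The polynomial (0x07) and message contents are assumptions for illustration.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def frame(message: bytes) -> bytes:
    """Append the CRC so the receiver can detect corrupted messages."""
    return message + bytes([crc8(message)])

def check(framed: bytes) -> bool:
    return crc8(framed[:-1]) == framed[-1]

msg = frame(b"\x02\x17")            # hypothetical "key 0x17 pressed" message
print(check(msg))                   # True
print(check(msg[:-1] + b"\x00"))    # False: corruption detected, message ignored
```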

PC-Emulation of Key Telephone Sets

With the increasing number of desktops that have both telephones and PCs, the two are being integrated in a number of forms. In the simplest arrangement, logical connections are made between the PC and the telephone. Graphics software recreates the telephone on the face of the screen, allowing keyboard commands or mouse-and-click operations to replace buttons; graphic displays replace the illumination of traditional telephones. A more advanced configuration builds circuitry into the PC, eliminating the requirement for a physical instrument entirely if the PC has audio capability. User benefits include simpler operation of complex features, built-in help screens, and better integration with databases. For example, if the PC is able to receive the incoming telephone number of the calling party, the PC screen can automatically display information about the caller.

Speakerphone

A speakerphone is basically the transmission portion of a telephone set in which the handset has been replaced by a microphone and loudspeaker, typically located within a few feet of the user. This arrangement gives the user freedom to move about with hands free during a telephone conversation, as well as some reduction in fatigue on long calls. It also facilitates small-group participation in a telephone call and can be of benefit in special cases involving hearing loss and physical handicaps.

The lengthened transmitting and receiving acoustical paths in the speakerphone arrangement, compared with those of a conventional telephone, introduce loss, which can be on the order of 20 dB or more in each path. This requires that gain be added to both the transmit and receive channels over that provided in a conventional telephone. The amount of gain that can be added in each channel to compensate for the loss in the acoustical paths is limited by two problems. A "singing" problem may occur if an acoustic signal picked up by the microphone is fed back to the loudspeaker via the sidetone path and returns to the microphone through acoustic coupling in the room.


FIGURE 17.7.14 Block diagram of a voice-switched speakerphone. (Copyright American Telephone and Telegraph Co. Used by permission)

Singing can occur when too much gain is added in this loop. Even before this condition is reached, however, the return of room echoes to the distant talker can become highly objectionable. Echoes occur when coupling from loudspeaker to microphone causes the incoming speech to be returned to the distant party with delay.

A solution to these problems can be found in voice switching, in which only one direction of transmission is fully active at a time. With voice switching, a switched-loss or switched-gain element is provided in both the transmit and receive channels, and the two operate in a complementary fashion. In this manner, full gain is realized in the chosen direction of transmission, while margin is provided against singing and distant-talker echo. Voice switching, however, results in one-way-at-a-time communication. Also, there can be a problem of clipping of a portion of the speech, since control of the voice-switching operation is derived from the speech energy itself. A functional diagram of the essential elements of a voice-switched speakerphone is shown in Fig. 17.7.14, in which a measure of the speech energy is provided to the control circuit from four distinct locations in the transmission paths. Signals VT1 and VR1, which are measures of the speech energy in the transmit and receive paths, respectively, are compared to determine the direction of transmission to be enabled. Signal VT2 is used by the control circuit to guard against switching falsely into receive because of transmit energy arriving in the receive path through the hybrid sidetone circuit. Similarly, the control circuit uses VR2 to guard against switching falsely into transmit because of receive energy arriving in the transmit path through acoustic coupling. Many speakerphone designs do not use all four control signals directly, but equivalent functions are generally provided by other means.

Another solution to echo control and reduction, one that enables full-duplex performance of speakerphones, is acoustic echo cancellation (AEC). With this technique, the signal driving the speakerphone's loudspeaker is compared with the signal generated by the microphone. The speakerphone builds an acoustic model to remove the acoustic echo from the microphone signal before it is transmitted over the telephone network. The use of AEC in speakerphones has been enabled by the availability of lower-cost, high-function digital signal processors (DSPs) and by advances in the adaptive signal processing needed to handle dynamic changes of the acoustic environment as people move about and room conditions change. Once the AEC has adapted, the need for switched loss is diminished and the speakerphone can attain a full-open, or full-duplex, condition. This allows very natural and fluid voice communication. An AEC is typically placed between the control circuits of a speakerphone and the signals to and from its transducers; this effectively eliminates echoes before they pass the control circuit. Typically there is a signaling connection between the AEC and the control circuit to communicate the status of the AEC. This allows the control circuit to switch in less loss when the AEC has adapted, and to prevent the AEC from adapting during periods of double-talking. A hybrid echo canceller (see Sondhi and Presti, 1966) is typically placed between the hybrid and the control circuit of the speakerphone.
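The adaptive acoustic model that an AEC builds can be sketched with the normalized least-mean-squares (NLMS) algorithm, one common choice for echo-path estimation. The filter length, step size, and simulated echo path below are illustrative assumptions; production AECs add double-talk detection and run on DSPs.

```python
# Minimal NLMS sketch of acoustic echo cancellation: adaptively model the
# loudspeaker-to-microphone echo path and subtract the estimated echo.

def nlms_echo_canceller(far_end, mic, taps=64, mu=0.5, eps=1e-6):
    """Return the mic signal with the adaptive echo estimate removed."""
    w = [0.0] * taps          # adaptive model of the room echo path
    x = [0.0] * taps          # most recent loudspeaker samples
    out = []
    for spk, m in zip(far_end, mic):
        x = [spk] + x[:-1]
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        e = m - echo_est      # residual: what is sent to the far end
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        out.append(e)
    return out

# Simulated echo path: a 10-sample delay with attenuation (an assumption).
import random
far = [random.uniform(-1, 1) for _ in range(4000)]
mic = [0.0] * 10 + [0.6 * s for s in far[:-10]]
residual = nlms_echo_canceller(far, mic)
print(f"residual power, last 500 samples: {sum(e * e for e in residual[-500:]):.4f}")
```

As the filter converges, the residual power falls toward zero, which is what lets the control circuit remove switched loss and run full duplex.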
While voice switching and acoustic echo cancellation are effective in eliminating the problems of singing and distant-talker echo, they do not relieve the higher transmitted levels of room ambient noise and reverberant ("barrel effect") speech caused by the increased gain in the transmit channel. Both of these problems can be reduced by one or more of the following methods: (1) install the speakerphone in a quiet, nonreverberant room; (2) reduce the transmit gain and the actual talker-to-microphone distance; (3) reduce the transmit gain and the effective talker-to-microphone distance with the use of a directional microphone; and (4) roll off the low-frequency end of the transmit response slightly to reduce the pickup of low-frequency noise and reverberant room modes, at the expense of a slight reduction in voice naturalness.


DATA TERMINALS
W. J. Lawless

Data Terminal Equipment (DTE)

Data terminals are used in a variety of data communication systems. Terminals can be classified into two categories: general-purpose terminals and special-purpose terminals. General-purpose terminals contain many elements found in a modern personal computer (PC), and thus it is very common to use a PC to emulate a data terminal. In some applications, where cost is particularly important, a teletypewriter data terminal containing only the basic terminal elements is used as a general-purpose data terminal. Special-purpose data terminals have been designed for a number of applications. In some cases the functionality is limited, as in a send-only or a receive-only terminal. Likewise, special-purpose terminals are used in applications requiring special functionality, such as handheld and point-of-sale applications. In either case the data terminal is used to communicate messages. Some common applications include inquiry-response, data collection, record update, remote batch, and message switching.

Data terminals are typically made up of combinations of the following modules: keyboard, CRT display, printer, storage device, and controller. These modules can be organized into the categories of operator input-output, terminal control, and storage.

Keyboards are available in a number of formats. In some cases, two formats are provided on the same keyboard, side by side or integrated, e.g., a typewriter layout with a numeric pad. Future designs call for multifunction keyboards whose designations or functions can be easily changed by the user, because in many cases differently trained operators will be using the same terminal to enter different types of data for different applications. There is also a trend toward special-application keyboards, such as the cash-register keyboard used by a fast-food chain, in which the name of each item on the menu appears on a proximity-switch plastic overlay.

CRT displays allow the operator to enter, view, and edit information; editing information with a CRT display is typically much faster than with a mechanical printer. The most popular CRT displays exhibit 24 lines of up to 80 characters each. Other size variations include 12 lines of up to 80 characters and a full-page display (approximately 66 lines of up to 80 characters). Other features found on CRT displays include blinking, half-intensity, upper- and lowercase character sets, foreign-character sets, variable character sizes, graphic (line-drawing) character sets, and multicolor displays.

Printers for data terminals generally are classified into three types: dot-matrix, inkjet, and laser. Dot-matrix printers are lowest in cost but are also lowest in print speed and print quality. Laser printers are highest in cost but provide the highest print speed and quality. Inkjet devices are intermediate in cost, speed, and quality.

Storage devices for data terminals include electronic (RAM, ROM), magnetic (floppy disk, hard disk), and optical (CD-ROM) media. Optical storage is emerging as the most popular medium for storage and retrieval of large amounts of information, particularly in applications such as online encyclopedias, image retrieval, and large databases.

Controllers interconnect the channel interface and the various terminal components and make them interact to perform the terminal's specific functions.
They also perform other functions, such as recognizing and acting on protocol commands (e.g., polling and selecting sequences in selective-calling systems), code translation, and error detection and correction. Controller designs typically use microprocessors, whose programmability greatly increases terminal versatility. Standardized codes, protocols, and interfaces provide a uniform framework for the transmission and reception of data by data terminals. Codes and some character-oriented protocols and interfaces are discussed in Chap. 1.

Downloaded from Digital Engineering Library @ McGraw-Hill (www.digitalengineeringlibrary.com) Copyright © 2004 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use as given at the website.

Christiansen_Sec_17.qxd

10/28/04

11:10 AM

Page 17.139

TERMINAL EQUIPMENT TERMINAL EQUIPMENT

17.139

IBM's binary synchronous communication (bisynch) protocol has been implemented widely. It accommodates half-duplex transmission and is character-oriented. Bit-oriented protocols that provide both half- and full-duplex transmission are also in widespread use. The American National Standards Institute's (ANSI) Advanced Data Communications Control Procedures (ADCCP), the International Organization for Standardization's High-Level Data Link Control (HDLC), and IBM's Synchronous Data Link Control (SDLC) are the current standards for bit-oriented protocols. Although there are differences, they are generally compatible, and the trend is for other bit-oriented protocols to establish compatibility with them.

Display Terminals (Text and Graphics)

Terminals for the entry and electronic display of textual and/or graphic information generally are used for communicating between people and computers and, increasingly, between people. Typical application areas include computer program preparation, data entry, inquiry-response, inventory control, word processing, financial transactions, computer-aided design, and electronic mail. A terminal is divided functionally into control, display, user input, and communication-line-interface portions (see Fig. 17.7.15). The first three portions are detailed in the following sections. Interface to a communication line is typically through either an analog or a digital modem.

Terminals have differing amounts of control or decision-making logic built into them, and a popular classification scheme is based on terminal control capabilities. A nonintelligent ("dumb") terminal is a basic I/O and control device. Although it may have some information buffering, it must rely on the computer to which it is connected for most processing of external information and editing of output displays. A "smart" terminal has both information entry and editing capabilities, and although it too is generally connected to a computer, it can perform information processing locally; the terminal usually contains a microcomputer that is programmed by the terminal manufacturer to meet the general and special needs of a user. An "intelligent" terminal has a microcomputer that can be programmed by the user to meet user-specific needs (as mentioned earlier, it is very common to use a PC as a smart or intelligent terminal). Terminals can be connected to a supporting computer either directly or through a cluster controller that supervises and services the data-communication needs of a number of terminals.

FIGURE 17.7.15 Display-terminal organization.

The most common terminal display device is a cathode-ray tube (CRT) similar to that used in a home TV receiver. In a home TV receiver, the refreshing is done by the broadcast station, which sends 30 complete images per second. In a terminal, the refreshing must be provided by the control logic from information stored electronically in the terminal. This storage may be a separate electronic memory or memory that is part of the microcomputer; in either case the microcomputer can address, enter, or change the information.

Flat-panel displays are an alternative to the CRT. Since these devices require much less space and power, they are ideal for laptop and notebook terminals and computers. The most common technology is the liquid-crystal display (LCD). In most LCDs, the liquid-crystal molecules align themselves parallel to an electric field and lie flat when no field is present; in another type of display, the crystals tilt in different directions in response to a field. Depending on the device construction and the type of liquid crystals used, some of those orientations allow light to pass while others block it. The result is either a dark image on a light background or the reverse.

Most display terminals have a typewriterlike keyboard and a few extra buttons as input devices for the entry of textual and control information. Text is entered at a position on the display indicated by a special symbol called a cursor. The control moves this cursor along much as a typewriter head moves along as text is entered via the keyboard. Different manufacturers use different cursors (underlines, blinking squares, video-inverted characters).

To allow editing of existing images or flexible entry of additions, auxiliary control devices are provided to change the cursor position. The simplest is a set of five buttons, four of which move the cursor up, down, right, or left one character position from the current position. The fifth indicates that the cursor is to be “homed” to the upper left-hand corner of the image, where textual entry usually starts.

Another popular position-controlling device is a joystick, a lever mounted on gimbals. Movement of the top of the lever is measured by potentiometers attached to the gimbals. The control senses the potentiometer outputs and moves the cursor correspondingly.

Another form of control for the cursor is the mouse. Movement of the mouse on the desktop provides a direct relationship with the movement of the cursor on the screen. A precision mouse will typically use either a magnetic or optical tablet that maps the position of the mouse on the tablet to a particular spot on the screen. A less-accurate form of the mouse uses friction between the desk surface and a ball contact on the bottom of the mouse to indicate the relative motion of the cursor. This form of mouse typically uses two optically sensed shafts mounted perpendicular to each other to detect movement of the ball contact. A mouse can have one to three buttons on top.

Additional input methods include track balls, pressure-sensitive pads, and even eye-position trackers. These find utility in specific areas such as artwork generation, computer image processing, industrial applications, training, and games.

Terminals intended for display of complex graphic information, e.g., for computer-aided design, generally have either tablet-stylus devices, light pens, or mice for input of new information or indication of existing information. One popular tablet-stylus has a surface area under which magnetostrictive waves are alternately propagated horizontally and vertically. A stylus with a coil pickup in the tip senses the passing of a wave under the stylus position. Electronic circuitry measures the time between launching of a wave and its sensing by the stylus and computes the position of the stylus from that time and the known velocity of the wave.

A light pen senses when light is within its field of view. In a raster-scan display, the time from the start of a displayed image until the light-pen signal is received indicates where the pen is in the raster-scan pattern and, with suitable scaling factors, gives the position of the pen over the image. In a directed-beam display, pen-position locating is more complicated. The centering of a special tracking pattern under the pen is sensed, and through a feedback arrangement controlled by the terminal computer the pattern is moved to keep it centered. The position of the pattern is then the same as the pen position.

Multimedia terminals are now becoming more commonplace. These terminals combine text, image, and voice capabilities. In the office environment, tighter coupling between data, e-mail, fax, and voice will take place. Similarly, with more of the workforce working at home, either full-time or part-time, multimedia services in the home will be required.

Data Transmission on Analog Circuits

The devices used for DTE generate digital signals. These signals are not compatible with the voice circuits of the public telephone network, partly because of frequency range but more importantly because these circuits are likely to produce a small frequency offset in the transmitted signal. The offset causes drastic changes in the received waveform. This effect, while quite tolerable in voice communications, destroys the integrity of individual pulses of a baseband signal. Compatible transmission is obtained by modulating a carrier frequency within the channel passband by the baseband signal in a modem (modulator-demodulator).

For a band-limited system, Nyquist showed that the maximum rate for sending noninterfering pulses is two pulses (usually called symbols) per second per hertz of bandwidth. The bit rate depends on how these pulses are encoded. For example, a two-level system transmits 1 b with each pulse. A four-level system transmits 2 b per pulse, an eight-level system transmits 3 b, and so forth. Unfortunately, we cannot go to an arbitrarily large number of levels because, assuming that the total power is limited, the levels will become so closely spaced that the random disturbances in the transmission medium will make one level indistinguishable from the next.

Shannon’s fundamental result states that there is a maximum rate, called the channel capacity, up to which one can send information reliably over a given channel. This capacity is determined by the random disturbances on the channel. If these random disturbances can be characterized as white Gaussian noise, the channel capacity C is given by

C = W log2 (1 + S/N)

where W is the bandwidth of the channel and S and N are the average signal and noise powers, respectively.
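The two limits above can be compared numerically. The following Python sketch evaluates the Shannon capacity and the Nyquist multilevel rate; the bandwidth and signal-to-noise figures are illustrative values chosen here, not figures from the text.

    from math import log2

    def shannon_capacity(bandwidth_hz, snr_db):
        # C = W log2(1 + S/N); S/N is converted from decibels to a power ratio.
        return bandwidth_hz * log2(1 + 10 ** (snr_db / 10))

    def nyquist_rate(bandwidth_hz, levels):
        # 2W noninterfering symbols per second, each carrying log2(levels) bits.
        return 2 * bandwidth_hz * log2(levels)

    # Illustrative voice-band figures: 3100-Hz bandwidth, 30-dB signal-to-noise ratio.
    print(round(shannon_capacity(3100, 30)))  # about 30,900 b/s
    print(round(nyquist_rate(3100, 4)))       # 12,400 b/s with four-level pulses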

With today’s more elaborate commercial modulation and coding techniques it is possible to transmit data at approximately 70 percent of the capacity of the channel. With the most complex modulation and coding techniques, it is possible to get very close to the capacity of the channel, but this is achieved at the expense of considerable processing complexity and delay.

Data transmission employs all three modulation methods (amplitude, frequency, and phase) plus combinations of them. A description of these methods is given in Sec. 12. An on-off amplitude-modulated (AM) signal is shown in Fig. 17.7.16a. This modulation scheme is little used for voice-band modems because of its inferior performance compared to other modulation schemes.

FIGURE 17.7.16 Binary (a) amplitude-, (b) frequency-, and (c) phase-modulated carrier waves.

An example of a binary frequency-modulated (FM) carrier wave, sometimes called frequency-shift keying (FSK), is shown in Fig. 17.7.16b. While FM or FSK requires somewhat greater bandwidth than AM for the same symbol rate, it gives much better performance in the presence of impulse noise and gain change. It is used extensively in low- and medium-speed (voice-band) telegraph and data systems.

A binary phase-modulated (PM) carrier wave is shown in Fig. 17.7.16c, where a phase change of 180° is depicted. However, this modulation method is usually employed in either four-phase or eight-phase systems. In a four-phase-change system, the binary bits are formed in pairs, called dibits. The dibits determine the phase change from one signal element to the next. The four phases are spaced 90° apart. In effect, this method employs a four-state signal, and such a system is inherently capable of greater transmission speeds for the same bandwidth, as is obvious from the Nyquist-Shannon criteria stated above. With improvement of voice channels, the four-phase (quaternary-phase-modulated) scheme is being employed increasingly for medium-speed (voice-band) data transmission systems to give higher transmission speeds than FM for the same bandwidth. The system is useful only in synchronous transmission.

In present-day modems FSK is the preferred modulation technique for bit rates below 1800 b/s. At 2400 b/s, the commonly used modulation technique is PSK using four phases, and at 4800 b/s it is PSK using eight phases. The latter requires the use of an adaptive equalizer, an automatically adjustable filter that minimizes the distortion of the transmitted pulses resulting from the imperfect amplitude and phase characteristics of the channel. At 9600 b/s, the preferred modulation technique is quadrature amplitude modulation, a combination of amplitude and phase modulation. Above 9600 b/s (e.g., 19,200 and 28,800 b/s) the preferred modulation technique is quadrature amplitude modulation combined with a coding technique called trellis-coded modulation (TCM). TCM provides much improved performance at the expense of more complex implementation. CCITT Recommendation V.34 specifies this type of modulation for 19.2 kb/s.
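As a rough illustration of the dibit scheme, the sketch below groups a bit stream into dibits and accumulates one phase change per signal element. The particular dibit-to-phase assignment is hypothetical; actual assignments are fixed by the relevant modem recommendations.

    from math import pi

    # Hypothetical dibit-to-phase-change assignment for a four-phase modem.
    DIBIT_PHASE_CHANGE = {"00": pi / 4, "01": 3 * pi / 4,
                          "11": 5 * pi / 4, "10": 7 * pi / 4}

    def carrier_phases(bits):
        # Each pair of bits (dibit) advances the carrier phase by a fixed amount.
        phase, phases = 0.0, []
        for i in range(0, len(bits), 2):
            phase = (phase + DIBIT_PHASE_CHANGE[bits[i:i + 2]]) % (2 * pi)
            phases.append(phase)
        return phases

    print(carrier_phases("00111001"))  # four signal elements carry eight bits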

Data Transmission on Digital Circuits

The techniques outlined in the previous section apply for transmission over the analog telephone network, sometimes referred to as the public switched telephone network (PSTN). While local access to the PSTN is over analog facilities (typically, twisted copper wires), the interexchange and long-distance telephone network consists principally of digital facilities. Analog signals are converted via pulse-code modulation into digital signals for transmission over these digital facilities.

If digital access to the long-distance digital facilities is provided, then end-to-end digital transmission is possible. Data signals then need not go through the digital-to-analog conversion outlined in the previous section, but can remain digital end-to-end (from source to destination). An early example of an end-to-end digital system was AT&T’s Digital Data System, which was deployed during the mid-1970s. Here, digital access lines together with central-office digital multiplexers enabled transmission at rates of 2.4, 4.8, 9.6, and 56 kb/s.

Switched network digital systems are provided today via ISDN, which provides circuits at 64 and 128 kb/s. Higher-bandwidth circuits and services are now being offered by various entities. Modern systems use packet-switching techniques rather than circuit switching to enable very efficient use of the digital transmission facilities. These packet-switched systems typically use frame relay and asynchronous transfer mode (ATM) protocols.

In addition to the advances in the long-distance marketplace, changes are also taking place in the local access area. Regional Bell operating companies as well as alternate access providers are beginning to deploy local distribution systems using optical fiber and coax cable combinations. While the initial thrust for these systems is to provide multichannel video to the home, the broadband facilities also provide an excellent vehicle for broadband multimedia data services to the home.

Local Area Networks

As with wide area networks, a local network is a communication network that interconnects a variety of devices and provides a means for information exchange among those devices. See Stallings and Van Slyke (1994). There are several key distinctions between local networks and wide area networks:

1. The scope of the local network is small, typically a single building or a cluster of buildings. This difference in geographic scope leads to different technical solutions.
2. It is usually the case that the local network is owned by the same organization that owns the attached devices.
3. The internal data rates of local networks are much greater than those of wide area networks.

The key elements of a local area network are the following:

• Topology: bus or ring
• Transmission medium: twisted pair, coaxial cable, or optical fiber
• Layout: linear or star
• Medium access control: CSMA/CD or token passing

TABLE 17.7.1 LAN Technology Elements

Element                 Options                    Restrictions                                   Comments
Topology                Bus                        Not with optical fiber                         No active elements
                        Ring                       Not CSMA/CD or broadband                       Supports fiber; high availability with star wiring
Transmission medium     Unshielded twisted pair    —                                              Inexpensive; prewired; noise vulnerability
                        Shielded twisted pair      —                                              Relatively inexpensive
                        Baseband coaxial cable     —                                              —
                        Broadband coaxial cable    Not with ring                                  High capacity; multiple channels; rugged
                        Optical fiber              Not with bus                                   Very high capacity; security
Layout                  Linear                     —                                              Minimal cable
                        Star                       Best limited to twisted pair                   Ease of wiring; availability
Medium access control   CSMA/CD                    Bus; not good for broadband or optical fiber   Simple
                        Token passing              Bus or ring; best for broadband                High throughput; deterministic

Source: Stallings, W., and R. Van Slyke, “Business Data Communications,” Macmillan College Publishing Company, 1994.

Together, these elements determine not only the cost and capability of the LAN but also the type of data that may be transmitted, the speed and efficiency of communications, and even the kinds of applications that can be supported. Table 17.7.1 provides an overview of these elements.

Data Communication Equipment (DCE) Trends

Most modern businesses of significant size depend heavily on their data-communications networks. Large businesses usually have a staff of specialists whose job it is to manage the network. To assist in this network-management function, DCE manufacturers have provided various testing capabilities in their products. Sophisticated DCEs are now capable of automatically monitoring their own “health” and reporting it to the centralized location where the network-management staff resides. These new capabilities are frequently implemented through the use of microprocessors. Some DCEs can also establish whether a trouble is in the modem itself or the interconnecting channel. If it is the channel that is in trouble, equipment is available that automatically sets up dialed connections to be used as backup for the original channel. Another capability is to send in a trouble report automatically when a malfunction is detected.

CHAPTER 17.8

MEMS FOR COMMUNICATION SYSTEMS

D. J. Young

MEMS FOR WIRELESS COMMUNICATIONS

Introduction

The increasing demand for wireless communication applications, such as cellular telephony, cordless phones, and wireless data networks, motivates a growing interest in building miniaturized wireless transceivers with multistandard capabilities. Such transceivers will greatly enhance the convenience and accessibility of various wireless services independent of geographic location. Miniaturizing current single-standard transceivers, through a high level of integration, is a critical step toward building transceivers that are compatible with multiple standards. Highly integrated transceivers will also result in reduced package complexity, power consumption, and cost. At present, most radio transceivers rely on a large number of discrete frequency-selection components, such as radio-frequency (RF) and intermediate-frequency (IF) band-pass filters, RF voltage-controlled oscillators (VCOs), quartz crystal oscillators, and solid-state switches, to perform the necessary analog signal processing. These off-chip devices severely hinder transceiver miniaturization. MEMS technology, however, offers a potential solution to integrate these discrete components onto silicon substrates with microelectronics, achieving a size reduction of a few orders of magnitude. It is therefore expected to become an enabling technology to ultimately miniaturize radio transceivers for future wireless communications.

MEMS Variable Capacitors

Integrated high-performance variable capacitors are critical for low-noise VCOs, antenna tuning, tunable matching networks, and so on. Capacitors with high quality factor (Q), large tuning range, and linear characteristics are crucial for achieving system performance requirements. On-chip silicon pn-junction and MOS-based variable capacitors suffer from low quality factors, limited tuning range, and poor linearity, and are thus inadequate for building high-performance transceivers. MEMS technology has demonstrated monolithic variable capacitors achieving stringent performance requirements. These devices typically rely on an electrostatic actuation method to vary the air gap between a set of parallel plates, vary the capacitance area between a set of conductors, or mechanically displace a dielectric layer in an air-gap capacitor. Improved tuning ranges have been achieved with various device configurations. Capacitors fabricated using metal and metalized silicon materials have demonstrated superior quality factors compared to solid-state semiconductor counterparts. Besides the above advantages, micromachined variable capacitors suffer from a reduced speed, potentially a large tuning voltage, and mechanical thermal vibration, commonly referred to as Brownian motion, which deserves great attention when used to implement low-phase-noise VCOs.
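A minimal numerical sketch of the parallel-plate gap-tuning principle described above follows; the plate area and gap values are assumed for illustration and are not taken from any particular device.

    EPS0 = 8.854e-12  # permittivity of free space, F/m

    def plate_capacitance(area_m2, gap_m):
        # C = eps0 * A / d for an air-gap parallel-plate capacitor.
        return EPS0 * area_m2 / gap_m

    # Assumed geometry: 200-um x 200-um plates with a 2-um nominal air gap.
    area = 200e-6 * 200e-6
    c_rest = plate_capacitance(area, 2.0e-6)    # no tuning voltage applied
    c_tuned = plate_capacitance(area, 1.5e-6)   # electrostatic force closes the gap
    print(c_rest, c_tuned, c_tuned / c_rest)    # tuning ratio of about 1.33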

FIGURE 17.8.1 Micromachined RF switch: (a) switch up; (b) switch down.


MEMS Switches

The microelectromechanical switch is another potentially attractive miniaturized component offered by micromachining technologies. These switches offer superior electrical performance in terms of insertion loss, isolation, linearity, and so on, and are intended to replace off-chip solid-state counterparts in switching between the receiver and transmitter signal paths. They are also critical for building phase shifters, tunable antennas, and filters. MEMS switches fall into two categories: capacitive and metal-to-metal contact types.

Figure 17.8.1 presents the cross-sectional schematic of an RF MEMS switch. The device consists of a conductive membrane, typically made of aluminum or gold alloy, suspended above a coplanar electrode by an air gap of a few micrometers. For RF or microwave applications, actual metal-to-metal contact is not necessary; rather, a step change in the plate-to-plate capacitance realizes the switching function. A thin silicon nitride layer with a thickness on the order of 1000 Å is typically deposited above the bottom electrode. When the switch is in the on-state, the membrane is high, resulting in a small plate-to-plate capacitance and hence a minimum high-frequency signal coupling (high isolation) between the two electrodes. The switch in the off-state with a large enough applied dc voltage, however, provides a large capacitance owing to the thin dielectric layer, thus causing a strong signal coupling (low insertion loss). The capacitive switch consumes near-zero power, which is attractive for low-power portable applications. Superior linearity performance has also been demonstrated because of the electromechanical behavior of the device.

Metal-to-metal contact switches are important for interfacing large-bandwidth signals including dc. This type of device typically consists of a cantilever beam or clamped-clamped bridge with a metallic contact pad positioned at the beam tip or underneath the bridge center. Through an electrostatic actuation, a contact can be formed between the suspended contact pad and an electrode on the substrate underneath. High performance on a par with the capacitive counterparts has been demonstrated.

Microelectromechanical switches, either capacitive or metal-contact versions, exhibit certain drawbacks, including low switching speed, high actuation voltage, sticking phenomena due to dielectric charging, metal-to-metal contact welding, and so on, thus limiting device lifetime and power-handling capability. Device packaging with an inert atmosphere (nitrogen, argon, and so on) and low humidity is also required.

MEMS Resonators

Microelectromechanical resonators based on polycrystalline silicon comb-drive fingers, suspended beams, and center-pivoted disk configurations have been proposed for performing analog signal processing. These microresonators can be excited into mechanical resonance through an electrostatic drive. The mechanical motion causes a device capacitance change, resulting in an output electrical current when a proper dc bias voltage is applied. This output current exhibits the same frequency as the mechanical resonance, thus achieving an electrical filtering function through the electromechanical coupling. The resonators can obtain high quality factors close to 10,000 in vacuum, with operating frequencies above 150 MHz reported in the literature, and a size reduction of a few orders of magnitude compared to discrete counterparts. These devices with demonstrated performance are attractive for potentially implementing low-loss IF band-pass filters for wireless transceiver design. Future research effort is needed to increase the device operating frequency up to the gigahertz (GHz) range.

As with other MEMS devices, the micromachined resonators also have certain drawbacks. For example, vacuum packaging is required to achieve a high quality factor for building low-loss filters. The devices may also suffer from a limited dynamic range and power-handling capability. The mechanical resonant frequency is strongly dependent on the structure dimensions and material characteristics. Thus, a reliable tuning method is needed to overcome the process variation effect and inherent temperature sensitivity.
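For a rough feel for the numbers quoted above, the dependence of resonant frequency on stiffness and mass can be sketched with a lumped spring-mass model; the stiffness and effective mass below are assumed values, not measurements from the text.

    from math import pi, sqrt

    def resonant_frequency(stiffness_n_per_m, effective_mass_kg):
        # f = (1 / 2*pi) * sqrt(k / m) for a lumped spring-mass resonator model.
        return sqrt(stiffness_n_per_m / effective_mass_kg) / (2 * pi)

    # Assumed values: k = 1e4 N/m and m = 1e-14 kg give a resonance near 160 MHz,
    # the order of the operating frequencies mentioned above.
    print(resonant_frequency(1e4, 1e-14))  # about 1.59e8 Hz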

Micromachined Inductors

Integrated inductors with high quality factors are key components for implementing low-noise oscillators, low-loss matching networks, and so forth. Conventional on-chip spiral inductors suffer from limited quality factors around 5 at 1 GHz, an order of magnitude lower than the values obtained from discrete counterparts. The poor performance is mainly caused by substrate loss and metal resistive loss at high frequencies. Micromachining technology provides an attractive solution to minimize these loss contributions, hence enhancing the device quality factors. Q factors around 30 have been achieved at 1 GHz, matching the discrete-component performance. Three-dimensional coil inductors have been fabricated on silicon substrates by micromachining techniques. Levitated spiral inductors have also been demonstrated. All these devices exhibit three common characteristics: (1) minimized device capacitive coupling to the substrate, (2) reduced winding resistive loss through employing highly conductive materials, and (3) nonmovable structures upon fabrication completion.
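The quality factors cited above can be related to a simple series model, Q = 2πfL/R; the inductance and resistance values in this sketch are assumed for illustration only.

    from math import pi

    def inductor_q(freq_hz, inductance_h, series_resistance_ohm):
        # Q = 2*pi*f*L / R for an inductor modeled as L in series with R.
        return 2 * pi * freq_hz * inductance_h / series_resistance_ohm

    # Assumed 5-nH coil at 1 GHz: about 6.3 ohms of loss gives Q near 5
    # (a conventional spiral); about 1 ohm gives Q near 30 (micromachined).
    print(inductor_q(1e9, 5e-9, 6.3))
    print(inductor_q(1e9, 5e-9, 1.0))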

MEMS FOR OPTICAL COMMUNICATIONS

Introduction

High-speed communication infrastructures are desirable for transferring and processing real-time information such as voice and video. Optical fiber communication technology has been identified as the critical backbone to support such systems. A high-performance optical data switching network, which routes various optical signals from their sources to their destinations, is one of the key building blocks for system implementation. At present, optical signal switching is performed by using hybrid optical-electronic-optical (O-E-O) switches. These devices convert incoming light from input fibers to electrical signals first and then route them to the proper output ports after signal analyses. At the output ports, the electrical signals are converted back to streams of photons or optical signals for further transmission over the fibers to their next destinations. The O-E-O switches are expensive to build, integrate, and maintain. Furthermore, they consume a substantial amount of power and introduce additional latency. It is therefore highly desirable to develop all-optical switching networks in which optical signals can be routed without intermediate conversion into electrical form, thus minimizing power dissipation and system delay. While a number of approaches are being considered for building all-optical switches, MEMS technology is attractive for providing arrays of tiny movable mirrors, which can redirect incoming beams from input fibers to corresponding output fibers. These micromirrors can be batch-fabricated using silicon micromachining technologies, achieving a low-cost integrated solution. A significant reduction in power dissipation is also expected.

MEMS Mirrors

Various micromachined mirrors have been developed over the years. They can typically be divided into two categories: (1) out-of-plane mirrors and (2) in-plane mirrors. The out-of-plane mirrors are usually fabricated by polycrystalline silicon surface micromachining techniques. After sacrificial release, the mirror structures can be folded out of the substrate and their positions secured by silicon hinges. The mirror surface can be moved by an electrostatic vibromotor, comb-drive fingers, and other electrostatic means. These mirrors can achieve one degree of freedom and thus are attractive for routing optical signals in a two-dimensional switching matrix and also for raster-scanning display applications.

The in-plane mirrors are typically fabricated using a thick single-crystal silicon layer on the order of a few tens of micrometers from an SOI wafer by deep RIE and micromachining techniques. The thick structural layer minimizes mirror warping, critical for high-performance optical communication applications. Self-assembly techniques relying on deposited film stress have also been employed to realize lifted-up micromirror structures. The mirror position can be modulated by a vertical actuation of comb-drive fingers and electrostatic pads or a lateral push-pull force. Micromirrors with two degrees of freedom have also been demonstrated by similar techniques. These mirrors with an analog actuation and control scheme are capable of directing optical beams to any desired position, and are thus useful for implementing large three-dimensional optical switching arrays to establish connections between any set of fibers in the network.

SECTION 18

DIGITAL COMPUTER SYSTEMS

Murray J. Haims, Stephen C. Choolfaian, Daniel Rosich, Richard E. Matick, William C. McGee, Benton D. Moldow, Robert A. Myers, George C. Stierhoff, Claude E. Walston

No other invention in the twentieth century has impacted virtually every aspect of our lives like the digital computer. The hardware, along with the software, has forever changed how we work with everything from the automobile to our checking accounts. Computers operate from the deepest parts of our oceans and planet to well beyond our solar system. In spite of how pervasive and sophisticated they are, they remain fairly simple and straightforward devices (although anyone who works with them regularly could easily argue with us on this point). An understanding of the material in this section should allow us to work with many hardware and software challenges.

A computer's operation is organized into basic activities; this organization is referred to as the architecture of the computer and, likewise, of the software. The first part to look at is data processing. Data are merely the representation of something in the real world, such as money, by its binary equivalent. After we look at the rest of the architecture of the computer, we will look at how these elements can be varied in the design process to produce different types of computer activities. Next we look at how software can control the interactions of the hardware to produce the desired results. Keep in mind that even though there are a number of different software packages that we use, they all do essentially the same things; all that differs is really just the syntax of the specific application.

Future trends in this field will essentially focus on two areas. The first will be in the area of how we interact with computers and input and output data. We already have the ability to use handwritten interaction and voice interaction. It is in the area of verbal interaction with the computer where we will most likely see the end of the keyboard as we know it. Current voice-interactive software can achieve accuracies of 97 percent with training. This compares favorably with keyboard input and is decidedly faster for everyone but the most skilled typist. The other area will be in the basic architecture of both the hardware and software. Increasingly we are developing portions of the computer into parallel processors that will enable a major shift in the way software will be developed. Eventually we will have computers that come from a manufacturer with a defined hardware and software architecture. Once the computer is turned on, it will begin reconfiguring its hardware and software to adapt to the needs of the user. C.A.

In This Section:

CHAPTER 18.1 COMPUTER ORGANIZATION 18.5
PRINCIPLES OF DATA PROCESSING 18.5
NUMBER SYSTEMS, ARITHMETIC, AND CODES 18.12
COMPUTER ORGANIZATION AND ARCHITECTURE 18.19
HARDWARE DESIGN 18.43

CHAPTER 18.2 COMPUTER STORAGE 18.44
BASIC CONCEPTS 18.44
STORAGE-SYSTEM PARAMETERS 18.45
FUNDAMENTAL SYSTEM REQUIREMENTS FOR STORAGE AND RETRIEVAL 18.46
RANDOM-ACCESS MEMORY CELLS 18.46
STATIC CELLS 18.47
DYNAMIC CELLS 18.49
RANDOM-ACCESS MEMORY ORGANIZATION 18.50
DIGITAL MAGNETIC RECORDING 18.51
MAGNETIC TAPE 18.55
DIRECT-ACCESS STORAGE SYSTEMS—DISCS 18.56
VIRTUAL-MEMORY SYSTEMS 18.56
MAPPING FUNCTION AND ADDRESS TRANSLATION 18.58

CHAPTER 18.3 INPUT/OUTPUT 18.62
INPUT-OUTPUT EQUIPMENT 18.62
I/O CONFIGURATIONS 18.62
I/O MEMORY–CHANNEL METHODS 18.63
TERMINAL SYSTEMS 18.63
PROCESS-CONTROL ENTRY DEVICES 18.64
MAGNETIC-INK CHARACTER-RECOGNITION EQUIPMENT 18.64
OPTICAL SCANNING 18.64
BATCH-PROCESSING ENTRY 18.65
PRINTERS 18.65
IMPACT PRINTING TECHNOLOGIES 18.65
NONIMPACT PRINTERS 18.67
IMAGE FORMATION 18.67
INK JETS 18.68
VISUAL-DISPLAY DEVICES 18.69

CHAPTER 18.4 SOFTWARE 18.71
NATURE OF THE PROBLEM 18.71
THE SOFTWARE LIFE-CYCLE PROCESS 18.71
PROGRAMMING 18.72
ALTERNATION AND ITERATION 18.72
FLOWCHARTS 18.73
ASSEMBLY LANGUAGES 18.75
HIGH-LEVEL PROGRAMMING LANGUAGES 18.77
HIGH-LEVEL PROCEDURAL LANGUAGES 18.77
FORTRAN 18.78
BASIC 18.78
APL 18.78
PASCAL 18.79
ADA PROGRAMMING LANGUAGE 18.79
C PROGRAMMING LANGUAGE 18.79
OBJECT-ORIENTED PROGRAMMING LANGUAGES 18.79

COBOL AND RPG 18.80
OPERATING SYSTEMS 18.80
GENERAL ORGANIZATION OF AN OPERATING SYSTEM 18.80
TYPES OF OPERATING SYSTEMS 18.81
TASK-MANAGEMENT FUNCTION 18.81
DATA MANAGEMENT 18.82
OPERATING SYSTEM SECURITY 18.82
SOFTWARE-DEVELOPMENT SUPPORT 18.82
REQUIREMENTS AND SPECIFICATIONS 18.82
SOFTWARE DESIGN 18.82
TESTING 18.84
EXPERT SYSTEMS 18.84

CHAPTER 18.5 DATABASE TECHNOLOGY 18.85
DATABASE OVERVIEW 18.85
HIERARCHIC DATA STRUCTURES 18.86
NETWORK DATA STRUCTURES 18.86
RELATIONAL DATA STRUCTURES 18.87
SEMANTIC DATA STRUCTURES 18.87
DATA DEFINITION AND DATA-DEFINITION LANGUAGES 18.88
REPORT PROGRAM GENERATORS 18.88
PROGRAM ISOLATION 18.89
AUTHORIZATION 18.89

CHAPTER 18.6 ADVANCED COMPUTER TECHNOLOGY 18.90
BACKGROUND 18.90
TERMINALS 18.90
HOSTS 18.91
COMMUNICATIONS SYSTEMS 18.91
OSI REFERENCE MODEL 18.92
REAL SYSTEMS 18.96
PACKET SWITCH 18.96

Section Bibliography:

Aiken, H. H., and G. M. Hopper “The automatic sequence controlled calculator,” Elec. Eng., 1946, Vol. 65, p. 384.
Babbage, C. “Passages from the Life of a Philosopher,” Longmans, 1864.
Babbage, H. P. “Babbage’s Calculating Engines,” Spon, 1889.
Bjorner, D., E. F. Codd, K. L. Deckert, and I. L. Traiger “The GAMMA-O n-ary relational data base interface specification of objects and operations,” Research Report RJ1200, IBM Research Division, 1973.
Black, V. D. “Data Communications and Distributed Networks,” 2nd ed., Prentice Hall, 1987.
Boole, G. “The Mathematical Analysis of Logic,” 1847, reprinted Blackwell, 1951.
Brainerd, J. G., and T. K. Sharpless “The ENIAC,” Elec. Eng., February 1948, pp. 163–172.
Brooks, F. P., Jr. “The Mythical Man-Month,” Addison-Wesley, 1975.
Codd, E. F. “A data base sublanguage founded on the relational calculus,” Proc. ACM SIGFIDET Workshop on Data Description, Access, and Control, Association for Computing Machinery, 1971.
Cypser, R. J. “Communications Architecture for Distributed Systems,” Addison-Wesley, 1978.
Deitel, H. M. “Operating Systems,” 2nd ed., Addison-Wesley, 1990.
Dijkstra, E. W. Commun. Ass. Comput. Mach., 1968, Vol. 11, No. 3, p. 341.
Dijkstra, E. W. “A Discipline of Programming,” Prentice Hall, 1976.
Enslow, P. H., Jr. “Multiprocessor organization—a survey,” Comput. Surv., March 1977, Vol. 9, No. 1, pp. 103–129.

Feldman, J. M., and C. T. Retter “Computer Architecture, A Designer’s Text Based on a Generic RISC,” McGraw-Hill, 1994.
Flynn, M. J. “Some computer organizations and their effectiveness,” IEEE Trans. Comput., September 1972, Vol. C-21, No. 9, pp. 948–960.
Gear, C. W. “Computer Organization and Programming,” 2nd ed., McGraw-Hill, 1978.
Gilmore, C. M. “Microprocessor Principles and Applications,” McGraw-Hill, 1989.
Goldman, J. E. “Applied Data Communication,” Wiley, 1995.
Hamming, R. W. “Error detecting and error correcting codes,” Bell Syst. Tech. J., 1950, Vol. 29, pp. 147–160.
Hancock, L., and M. Krieger “The C Primer,” McGraw-Hill, 1986.
Hellerman, H. “Digital Computer System Principles,” 2nd ed., McGraw-Hill, 1973.
Horowitz, E., and S. Sahni “Fundamentals of Data Structures,” Computer Science Press, 1976.
Lam, S. L. “Principles of Communication and Networking Protocols,” Computer Society Press, 1984.
Mano, M. M. “Computer System Architecture,” 2nd ed., Prentice Hall, 1982.
Mano, M. M. “Digital Logic and Computer Design,” Prentice Hall, 1979.
Morris, D. C. “Relational Systems Development,” McGraw-Hill, 1987.
O’Connor, P. J. “Digital and Microprocessor Technology,” 2nd ed., Prentice Hall, 1989.
Smith, J. T. “Getting the Most from TURBO PASCAL,” McGraw-Hill, 1988.
Stallings, W., and R. Van Slyke “Business Data Communications,” 2nd ed., Macmillan, 1994.
Van de Goor, A. J. “Computer Design and Architecture,” Addison-Wesley, 1985.
Wegner, P. “Programming Languages, Information Structures and Machine Organization,” McGraw-Hill, 1968.
Whitten, J. L., L. D. Bentley, and V. M. Barlow “Systems Analysis and Design Methods,” 3rd ed., Irwin, 1994.

CHAPTER 18.1

COMPUTER ORGANIZATION

PRINCIPLES OF DATA PROCESSING

Memory, Processing, and Control Units

The basic subsystems in a computer are the input and output sections, the store, the arithmetic logic unit (processing unit), and the control section. Each unit is described in detail in this chapter. Generally, a computer operates as follows: An external device such as a disc file delivers a program and data to specific locations in the computer store. Control is then transferred to the stored program, which manipulates the data and the program itself to generate the output. These output data are delivered to a device, such as a CD-ROM/RW, DVD, printer, or display, where the information is used in accordance with the purpose of the digital manipulation.

Historical Background

There has been a line of development of mechanical calculator components, beginning with Babbage in the early 1800s and leading to a variety of mechanical desk and larger mechanical calculators. Another line of development has used relays as computing circuit elements. Today’s computers have benefited from these lines of development, but especially they are based on electronic components, the vacuum tube, and the transistor. The transistor, first described by Shockley, Bardeen, and Brattain in 1948, began a line of development that is today characterized by the miniaturization and low-power operation of very large-scale integration (VLSI). VLSI permits the interconnection of large numbers of computing elements by means of microscopic layered structures on a semiconductor substrate (usually silicon) or chip sometimes as small as 1/4 in. square. Since the entire arithmetic and logic circuit of a computer can be built on a single chip (microprocessor), computers incorporating VLSI are often called minicomputers or microcomputers.

Binary Numbers

Most transistors display random variations of their operating parameters over relatively wide limits. Similarly, passive circuit elements experience a considerable degree of variation, and noise, power-supply variations, and so forth limit the accuracy with which quantities can be represented. As a result, the preferred method is to use each circuit in the manner of an on-off switch, and representation of quantities in a computer is thus almost always on a binary basis. Figure 18.1.1 shows the binary numbers equivalent to the decimal numbers between 0 and 10. Figure 18.1.2 shows the addition of binary 6 to binary 3 to obtain binary 9.

The process of addition can be dissected into digital, logical, or boolean operations upon the binary digits, or bits (b). For example, a first step in the procedure for addition is to form the so-called EXCLUSIVE-OR addition between bits in each column. This function of two binary numbers is expressed in Fig. 18.1.3a in tabular form. This table is called a truth table. In Fig. 18.1.3b is the table used to generate the carries of a binary bit from one column to another.

FIGURE 18.1.1 Decimal and binary numbers between 0 and 10.

FIGURE 18.1.2 Addition of 6 and 3 in decimal and binary.

FIGURE 18.1.3 Addition tables for decimal and binary numbers. The binary addition table (a) is called the EXCLUSIVE-OR or modulo-2 truth table; the carry table (b) performs the AND or intersection operation.

This latter function of two binary numbers is variously called the AND function, intersection, or product. The entries at each intersection in each table are the result of the combination of the two binary numbers in the respective row and column. Figure 18.1.3 also shows the decimal addition tables. They illustrate the relative simplicity of the binary number system. The names truth table and logical function arise from the fact that such manipulations were first developed in the sentential calculus, a subsection of the calculus of logic, dealing with the truth or falsity of combinations of true or false sentences.

Binary Encoding. Information in a digital processing machine is not restricted to numerical information, since a different specific numeric code can be assigned to each letter of the alphabet. For example, A in the EBCDIC code (see next paragraph) is given by the binary sequence 11000001. When alphanumeric information is specified, such a code sequence represents the symbol A, but in the numeric context the same entry is the binary number equal to decimal 193.

Computer Codes

Alphanumeric information is stored in a computer via coded binary bits. Some of the more useful codes are:

ASCII (American Standard Code for Information Interchange), a seven-level alphanumeric code comprising 32 control characters, an uppercase and lowercase alphabet, numerals, and 34 special characters (Fig. 18.1.4). This code is used in personal computers and non-IBM machines. “A” in ASCII is given by the 7-bit binary code 1000001.

BCDIC (binary-coded decimal interchange code), a six-level alphanumeric code that provides alphabetic (caps), numeric, and 28 special characters.

Binary code, a representation of numbers in which the only two digits used are 0 and 1, and each position’s value is twice that of its right-hand neighbor (with the rightmost place having a value of 1).

Binary-coded decimal (BCD) code, in which the first ten 4-bit (hexadecimal) codes are used for the decimal digits, and each nibble represents one decimal place. Codes A through F are never used.

EBCDIC (expanded BCD interchange code), an eight-level alphanumeric code comprising control codes, an uppercase and lowercase alphabet, numerals, and special characters (Fig. 18.1.5). This code is used in IBM mainframe computers.

Gray code, a binary code that does not follow the positional notation of true binary code. Only one bit changes from any Gray number to the next.

Hexadecimal byte code, a two-digit hexadecimal number for each byte, with values ranging from 00 to FF.

Hexadecimal code, a base-16 number code that uses the letters A, B, C, D, E, and F as one-digit numbers for 10 through 15.

Hollerith code, a 12-level binary code used on punchcards to represent alphanumeric characters. Holes on the punchcard are ones, unpunched positions are zeros.

FIGURE 18.1.4 American Standard Code for Information Interchange (ASCII).

Most computers are designed to work internally in binary fashion but at each juncture of input or output to translate the codes, either by programming or by hardware, so as to accept and offer decimal numeric information. Such systems are complicated, and failures in the coding and decoding system can prevent interaction with the program. Communication needs lead to a compromise between the human requirement for a decimal system and the machine requirement for binary. An example is the use of the base-16 (hexadecimal) system, which is relatively amenable to human recognition and manipulation. The binary code is broken into 4-bit groups. Each 4-bit group is represented by a decimal numeral or letter, as indicated in Fig. 18.1.6. In this case, 0011 is three, and 1010 is ten or A.
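A minimal sketch of this nibble-by-nibble grouping follows (the function name is invented here for illustration):

    def to_hex_nibbles(bits):
        # Break a binary string into 4-bit groups and name each group in base 16,
        # as in Fig. 18.1.6; assumes the length is a multiple of 4.
        digits = "0123456789ABCDEF"
        return "".join(digits[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

    print(to_hex_nibbles("00111010"))  # '3A': 0011 is three, 1010 is ten or A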

FIGURE 18.1.5 Hollerith/EBCDIC code chart.

FIGURE 18.1.6 Hexadecimal code.

Error-Correction Codes

Though the circuits in modern digital systems have reached degrees of reliability undreamed of in the relatively recent past, errors can still arise. Hence it is desirable to detect and, if possible, correct such errors. It is possible by appropriate selection of binary codes to detect errors. For example, if a 6-b code is used, a seventh bit can be added to maintain the number of 1 bits in the group of 7 as an odd number. When any group of 7 with an even number of 1s is found by appropriate circuits in the machine, an error is detected. Such a procedure is known as parity checking. Although these error-control coding schemes were originally developed for noisy transmission channels, they are also applicable to storage devices in data-processing systems.
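The odd-parity scheme just described can be sketched directly in code; the function names are invented for illustration.

    def add_odd_parity(code6):
        # Append a parity bit so the 7-bit group has an odd number of 1s.
        ones = bin(code6).count("1")
        parity = 0 if ones % 2 == 1 else 1
        return (code6 << 1) | parity

    def parity_error(code7):
        # An even number of 1s in the 7-bit group signals a single-bit error.
        return bin(code7).count("1") % 2 == 0

    word = add_odd_parity(0b101100)        # three 1s, so the parity bit is 0
    print(parity_error(word))              # False: no error detected
    print(parity_error(word ^ 0b0000100))  # True: one flipped bit is caught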

Boolean Functions

Figure 18.1.7 illustrates truth tables for functions of one, two, and three binary variables. Each x entry in each table can be either 0 or 1. Hence for one variable x, four functions f(x) can be formed; for two variables x1 and x2, 16 functions f(x1, x2) exist; for three variables, 256 functions, and so on. In general, if f(x1, . . . , xn) is a function of n binary variables, 2^(2^n) such functions exist.

For functions of one variable, the most important is the inverse function, defined in Fig. 18.1.8. This is called NOT A, where A is the binary variable. Also illustrated in Fig. 18.1.8 are the two most important functions of two binary variables, the AND (product or intersection) and the OR (sum or union). If A and B are the two variables, the AND function is usually represented as AB and the OR as A + B.

Figure 18.1.9 shows how the products (NOT A)(NOT B), (NOT A)B, A(NOT B), and AB are summed to yield any function of two binary variables. Each of these products has only one 1 in the four positions of its truth table, so that appropriate sums can generate any function of two binary variables. This concept can be expanded to functions of more than two variables, i.e., any function of n binary variables can be expanded into a sum of products of the variables and their negatives. This is the general theorem of boolean algebra. Such a sum is called the standard sum or the disjunctive normal form. The fact that any binary function can be so realized implies that mechanical or electrical simulations of the AND, OR, and NOT functions of binary variables can be used to represent any binary function whatever.
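The standard-sum expansion can be checked mechanically. The sketch below (function names invented for illustration) collects the AND products at which a two-variable function is 1 and verifies that their OR reproduces the function, using EXCLUSIVE-OR as the example of Fig. 18.1.9:

    from itertools import product

    def minterms(f):
        # Collect the input pairs at which f is 1; each pair names one AND
        # product of the variables and their negatives (a minterm).
        return [(a, b) for a, b in product((0, 1), repeat=2) if f(a, b)]

    def standard_sum(terms, a, b):
        # OR together the AND products: a term contributes 1 only when both
        # of its literals match the inputs.
        return int(any(ta == a and tb == b for ta, tb in terms))

    xor = lambda a, b: a ^ b
    terms = minterms(xor)   # [(0, 1), (1, 0)]: (NOT A)B + A(NOT B)
    print(all(standard_sum(terms, a, b) == xor(a, b)
              for a in (0, 1) for b in (0, 1)))   # True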

FIGURE 18.1.7 Binary functions of (a) one, (b) two, and (c) three binary variables.

FIGURE 18.1.8 Significant functions of one and two binary variables: (a) the negation (NOT) function of one binary variable; (b) the OR function of two binary variables; (c) the AND function of two binary variables.


FIGURE 18.1.9 The four products of two binary variables (top). The realization of the EXCLUSIVE-OR function is shown below.

Electronic Realization of Logical Functions

Logical functions can be realized using electronic circuits. Figure 18.1.10 illustrates the realization of the OR function, and Fig. 18.1.11 illustrates the realization of the AND circuit using diodes. Each input lead is associated with a boolean variable, and the upper level of voltage represents the logical 1 for that variable; in the OR circuit, any input gives rise to an output. Thus for a three-variable input, the output is A + B + C. With the AND function no output is realized unless all inputs are positive; the output function generated is ABC.

The inverse function (NOT) of a boolean variable cannot be readily realized with diodes. The circuit shown in Fig. 18.1.12 uses the inverting property of a grounded-emitter transistor amplifier to perform the inverse function. Also shown in Fig. 18.1.12 is an example of how the OR function and the NOT function are combined to form the NOT-OR (NOR) function. In this case, since the transistor circuit provides both voltage and current gain, the signal-amplitude loss associated with transmission through the diode can be compensated, so that successive levels of logic circuits can be interconnected to form complex switching nets. Figure 18.1.13 illustrates the realization of the EXCLUSIVE-OR function. Note that the variables are represented by the wiring of interconnected circuit blocks, while the function is realized by the circuit blocks themselves.

Levels of Operation in Data Processing

A detailed sequence of operations is generally required in a data-processing system to realize even simple operations. For example, in carrying out addition, a machine typically performs the following sequence of operations:

FIGURE 18.1.10 Diode realization of an OR circuit. A positive input on any line produces an output.

FIGURE 18.1.11 Diode realization of an AND circuit. All inputs must be positive to produce an output.

FIGURE 18.1.12 Use of a transistor circuit for inverting a function. The circuit shown forms the NOT-OR (NOR) of the inputs.

1. Fetch a number from a specific location in storage.
   a. Decode the address of the program instruction to activate suitable memory lines. Such decoding is accomplished by activating appropriate AND and OR gates to apply voltage to the lines in storage specified by the instruction address.
   b. Sequence storage to withdraw the information and place it in a storage output register.
   c. Transmit information from the storage output register into the ALU.
2. Withdraw a number from storage and add it to the number in the ALU. These operations break down into:
   a. Decode the instruction address, activate storage lines, and transmit the information to the ALU input for addition.
   b. Form the EXCLUSIVE-OR of the second number with the number in the ALU to form the sum less the carry. Form the AND of the two numbers to develop the first-level carry.
   c. Form the second-level EXCLUSIVE-OR sum.
   d. AND the first-level carry with the first-level EXCLUSIVE-OR sum to form the second-level carry.
   e. Generate the third-level EXCLUSIVE-OR by forming the EXCLUSIVE-OR of the second-level carry with the second-level EXCLUSIVE-OR sum, AND the second-level carries with the second-level EXCLUSIVE-OR for the third-level carry, and so forth until no more carries are generated.
3. Store the result of the addition into a specified location in storage.

This sequence illustrates two basic types of operation in a data-processing machine. Operations denoted above by numbers are of specific interest to the programmer, since they are concerned with the data stored and the operations performed thereupon. The second level, denoted above by letters, comprises operations at the logical-circuit level within the machine. These operations depend on the particular configurations of circuits and other hardware in the machine at hand.

If only the higher-level (numbered) instructions are used, some flexibility in machine operation is lost. For example, only an add operation is possible at the higher level. At the lower-level (lettered) operations the AND or EXCLUSIVE-OR of the data words can be formed and placed in storage.

FIGURE 18.1.13 Circuit realization of the EXCLUSIVE-OR function.

The organization of current digital computers follows the lines of these two divisions (numbered and lettered, above). The macroinstruction set associated with each machine can be manipulated by the programmer. These instructions are usually implemented in a numerical code. For example, the instruction “load ALU” might be 01 in binary, “add ALU” might be given by 10, and “store ALU” by 11. Similarly, each instruction has an associated storage address to provide source data. The microinstruction set comprises a series of suboperations that are combined in various sequences to realize a given macroinstruction.
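The iterated EXCLUSIVE-OR/AND procedure of steps 2b through 2e can be expressed compactly in code. The following Python sketch mirrors those steps; it is an illustration of the algorithm, not machine-level code:

    def add_binary(a, b):
        # Repeat the two-step rule of the text: EXCLUSIVE-OR forms the sum less
        # the carry; AND, shifted one column left, forms the next-level carry.
        while b:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

    print(add_binary(0b110, 0b011))  # 6 + 3 = 9, as in Fig. 18.1.2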

Two methods of realizing the sequence of suboperations specified by the operations portion of the instruction have been used in machine design. In one such method a direct decoding of the information from the instruction occurs when it is placed in an instruction address register. Specific clock sequences turn on the successively required lines that have been wired in place to realize the action sought. An alternative for actuating a subprogram is to store a number of information bits, called microinstructions, that are successively directed to the appropriate control circuits to activate, selectively and sequentially, individual wires that gate sequential actions for the realization of the requisite instruction. The first method of computer design is called hard-wired; the second, microprogrammed. The microprogram essentially specifies a sequence of operations at the individual circuit level to realize the operations performed by macroinstructions. Microprogramming is preferred in modern computer designs.

Types of Computer Systems

There is a wide variety of computer-system arrangements, depending on the type of application. One type of installation is that associated with batch processing. A computer in a central job location receives programs from many different sources and runs the programs sequentially at high speed. An overall supervisory program, called an operating system, controls the sequence of programs, rejecting any that are improperly coded and completing those which are correct.

Another type of system, the time-shared system, provides access to the computer from a number of remote input-output stations. The computer scans each remote station at high speed and accepts or delivers information to that location as required by the operator or by its internal program. Thus a small terminal can gain access to a large high-speed system.

Still another type of installation, the microcomputer, involves an individual small computer that, though limited in power, is dedicated to the service of a single operator. Such applications vary from those associated with a small business, with limited computational requirements, to an individual engaged in scientific operations. Other computers are used for dedicated control of complex industrial processes. These are individual, once-programmed units that perform a real-time operation in systems control, with sensing elements that provide the inputs.

Highly complex interrelated systems have been developed in which individual computers communicate with and control each other in an overall major systems network. Among the first of such systems was the SAGE network, developed in the 1950s for defense against missile or aircraft attack. Computers that are interconnected to share workload or problems are said to form a multiprocessing system. A computer system arranged so that more than one program can be executed simultaneously is said to be multiprogrammed.

Interactive systems allow users to communicate directly with the computer and have the computer respond. The development of these systems parallels that of the keyboard and of the video display. The systems are used commercially (e.g., airline reservations) and scientifically (users input data at their terminals or telephones and get a response to their input). The terminal, a widely used input/output device, has a keyboard and a visual display. A terminal may be dumb, which means that it has no computing power, or smart, which indicates computing capabilities, such as those provided by a personal computer. Client-server systems are interactive systems where the data are at a remote computer called a server.

Internal Organization of Digital Computers

The internal organization of a data-processing system is called the system architecture. Such matters as the minimum addressable field in memory, the interrelations between data and instruction word size, the instruction format and length or lengths, parallel or serial (by bit or set of bits) ALU organization, decimal or binary internal organization, and so forth, are typical questions for the system architect. The answers depend heavily on the application for which the computer is intended. Two broad classes of computer systems are general-purpose and special-purpose types. Most systems are in the general-purpose class; they are used for business and scientific purposes. General-purpose computers of varying computing power and memory size can be grouped, sharing a common architecture; these are said to constitute a computer family.


A computer specifically designed for, and dedicated to, the control of, say, a complex refinery process is an example of a special-purpose system.

A number of design methods have been adopted to increase speed and functional range for a small increase in cost. For example, in an instruction sequence the next cell in storage is likely to hold the next instruction. Since an instruction can usually be executed in a time that is short compared with storage access, the store is divided into subsections. Instructions are called from each subsection independently at high speed and put into a queue for execution. This type of operation is called look-ahead. If the instructions are not sequential, the queue is destroyed and a new queue put in its place.

Since instructions and data tend to be clustered together in storage, it is advantageous to provide a small, high-speed store (local store) to work with a larger, slower-speed, lower-cost unit. If the programs in the local store need information from the larger store, a least-used piece of the local store reverts to the larger store and a batch of data surrounding the information sought is automatically brought into the high-speed unit. This arrangement is called a hierarchical memory, and the high-speed store is often called a cache.
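By way of illustration, the following C sketch models the lookup step of such a hierarchical memory as a direct-mapped cache. The line size, the number of lines, and the simple replace-on-miss policy (standing in for the least-used policy described above) are illustrative assumptions, not details of any particular machine.

#include <stdint.h>
#include <string.h>

#define LINES      64        /* number of high-speed cache lines        */
#define LINE_BYTES 16        /* batch of data brought in on each miss   */

struct line {
    int      valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct line cache[LINES];
static uint8_t     main_store[1 << 16];   /* the larger, slower store   */

/* Read one byte; on a miss, the block surrounding the address is copied
   into the high-speed store, displacing the line occupying that slot.  */
uint8_t cached_read(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % LINES;
    uint32_t tag   = addr / (LINE_BYTES * LINES);
    struct line *l = &cache[index];

    if (!l->valid || l->tag != tag) {          /* miss: fetch the block  */
        memcpy(l->data, &main_store[addr & ~(uint32_t)(LINE_BYTES - 1)],
               LINE_BYTES);
        l->valid = 1;
        l->tag   = tag;
    }
    return l->data[addr % LINE_BYTES];         /* hit path               */
}

int main(void)
{
    main_store[0x1234] = 42;
    return cached_read(0x1234) == 42 ? 0 : 1;  /* miss, then correct read */
}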

NUMBER SYSTEMS, ARITHMETIC, AND CODES

Representation of Numbers

A set of codes and names for numbers that meets the requirements of expandability and convenience of operation can be obtained using the following power series:

N = A_n X^n + A_{n-1} X^{n-1} + · · · + A_1 X + A_0 + A_{-1} X^{-1} + · · · + A_{-m} X^{-m}    (1)

Here the number is represented by the sum of powers of an integer X, each having a coefficient A_i. A_i may be any integer equal to or greater than zero and less than X. In the decimal system, X equals 10 and the coefficients A_i range from 0 to 9. Note that Eq. (1) can be used to represent X^{m+n+1} numbers ranging between 0 and X^{n+1} - X^{-m}, with an accuracy limited by X^{-m}. Thus m and n must be of reasonable size to be useful in most applications. A useful property of the power series is the fact that its multiplication by X^k can be viewed as a shift of the coefficients by the number of positions specified by the value of k. These results are independent of the choice of X in the series representation.

There is little reason to write the value of the number in the form shown in Eq. (1), since complete information on the value can be readily deduced from the coefficients A_i. Thus a number can be represented merely by the sequence of the values of the coefficients. To determine the value of the implied exponents on X, it is customary to mark the position of the X^0 term by a period immediately to the right of its coefficient. The power series for a number represented in the decimal system (X = 10) and its normal decimal notation are

3 × 10^3 + 0 × 10^2 + 2 × 10^1 + 4 × 10^0 + 6 × 10^{-1} + 2 × 10^{-2} = 3,024.62    (2)

The value of X is called the radix or base of the number system. Where ambiguity might arise, a subscript indicating the radix is attached to the low-order digit, as in 1000_2 = 8_10 = 10_8 (1000 binary equals 8 decimal equals 10 octal). The power series for a number in base 2 and its representation in binary notation are

1 × 2^4 + 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 + 0 × 2^{-1} + 1 × 2^{-2} + 1 × 2^{-3} = 11011.011    (3)

Number-System Conversions

Since computer systems in general use number systems other than base 10, conversion from one system to another must be carried out frequently. Equation (4) shows the integer N represented by a power series in base 10 and in base 2:

N = ∑_{i=0}^{n} A_i 10^i = ∑_{j=0}^{m} B_j 2^j    (4)


The problem is to find the correlation between the coefficients A_i and B_j. In the binary series, if N is divisible by 2, then B_0 must be 0. Similarly, if N is divisible by 4, B_1 must be 0, and so forth. Thus if the decimal coefficients A_i are given, successive divisions of the decimal number by 2 will yield the binary number, the binary digits depending on the value of the remainder of each successive division. This process is shown in Fig. 18.1.14.

An example of the conversion of a binary integer to a decimal integer is

100011011 = 1 × 2^8 + 0 × 2^7 + 0 × 2^6 + 0 × 2^5 + 1 × 2^4 + 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = 283

In the conversion of an integer in binary to an integer in decimal, the powers of 2 are written in decimal notation and a decimal sum is formed from the contribution of each term of the binary representation. For conversion from a binary fraction to a decimal fraction, a similar procedure is used, since the values of the terms multiplied by the A_i can be added together in decimal form to give the decimal equivalent. The conversion of a decimal fraction to a binary fraction is defined by

0.5764_10 = A_{-1} 2^{-1} + A_{-2} 2^{-2} + · · · + A_{-n} 2^{-n}    (5)

To determine the values of the A_i, first multiply both sides of Eq. (5) by 2 to give

1.1528_10 = A_{-1} + A_{-2} 2^{-1} + · · · + A_{-n} 2^{-n+1}    (6)

Since the position of the decimal point (more accurately called the radix point) is invariant, and since in a binary series each successive term is at most half of the maximum value of the preceding term, the leading 1 in the decimal number in Eq. (6) indicates that A_{-1} must have been 1. A second multiplication by 2 can similarly determine the coefficient A_{-2}. This process of conversion of a base-10 fraction to a base-2 fraction is illustrated in Fig. 18.1.15.

Conversion from binary integers to octal (base 8) and the reverse can be handled simply, since the octal base is a power of 2. Binary-to-octal conversion consists of grouping the terms of a binary number in threes and replacing the value of each group with its octal representation; the process works on either side of the radix point. Octal-to-binary conversion is handled by converting each octal digit, in order, to binary and retaining the ordering of the resulting groups of three bits.

Since there are not enough symbols in decimal notation to represent the 16 symbols required by the hexadecimal system, it is customary in the data-processing field to use the first six letters of the alphabet to complete the set. Conversions from decimal to octal or hexadecimal can proceed indirectly, by first converting decimal to binary and then binary to octal or hexadecimal. Similarly, a reverse path from octal or hexadecimal through binary to decimal can be used. Direct conversions between hexadecimal or octal and decimal also exist and are widely used. In going from hexadecimal or octal to decimal, each term in the implied power series is expressed directly in decimal and the result is summed. In converting a decimal integer to either hexadecimal or octal, the decimal number is divided repeatedly by 16 or 8, respectively, and each remainder becomes the next higher-order digit of the converted number. Examples of four common number representations are shown in Table 18.1.1.
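The repeated-division and repeated-multiplication procedures just described translate directly into code. The following C sketch (the function names are ours) produces the binary digits of an integer by repeated division by 2, and of a fraction by repeated multiplication by 2:

#include <stdio.h>

void integer_to_binary(unsigned n)
{
    char digits[32];
    int  k = 0;
    do {                       /* each remainder is the next higher-order bit */
        digits[k++] = '0' + (n % 2);
        n /= 2;
    } while (n > 0);
    while (k > 0)
        putchar(digits[--k]);  /* print from high order down to low order */
}

void fraction_to_binary(double f, int places)
{
    putchar('.');
    while (places-- > 0) {     /* the integer part of each doubling is the next bit */
        f *= 2.0;
        putchar(f >= 1.0 ? '1' : '0');
        if (f >= 1.0)
            f -= 1.0;
    }
}

int main(void)
{
    integer_to_binary(283);        /* prints 100011011, as in the text  */
    printf("\n");
    fraction_to_binary(0.5764, 8); /* prints .10010011, per Eqs. (5)-(6) */
    printf("\n");
    return 0;
}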

Binary-Arithmetic Operations

Figure 18.1.16 shows an example of the addition of two binary numbers, 1001 and 1011 (9 and 11 in decimal). The rules for manipulation are similar to those of decimal arithmetic except that only the two symbols 1 and 0 are used and the addition and carry tables are greatly simplified. Figure 18.1.17 shows an example of binary multiplication with a multiplication table; this process, too, is simple compared with that used in the decimal system. The rule for multiplication in binary is as follows: if a particular digit of the multiplier is 1, place the multiplicand in the product register; if it is 0, do nothing; in either case shift the product register one position to the right, and repeat the operations for the next digit of the multiplier. Figure 18.1.18 shows an example of binary subtraction together with the subtraction and borrow tables. The subtraction table is the same as the addition table, a feature unique to the binary system.


FIGURE 18.1.14 Conversion from decimal to binary by repeated division of the decimal integer. At each division the remainder becomes the next higher-order binary digit.

FIGURE 18.1.15 Conversion of a decimal fraction into a binary fraction. At each stage the number to the right of the decimal point is multiplied by 2. The resulting digit to the left of the decimal point is entered in the next available lower-order position of the binary fraction, to the right of the binary radix point.

TABLE 18.1.1 Comparison of Decimal, Binary, Octal, and Hexadecimal Numbers

Decimal   Binary   Octal   Hexadecimal      Decimal   Binary   Octal   Hexadecimal
0         0        0       0                8         1000     10      8
1         1        1       1                9         1001     11      9
2         10       2       2                10        1010     12      A
3         11       3       3                11        1011     13      B
4         100      4       4                12        1100     14      C
5         101      5       5                13        1101     15      D
6         110      6       6                14        1110     16      E
7         111      7       7                15        1111     17      F


FIGURE 18.1.16 Binary addition and corresponding decimal addition.


FIGURE 18.1.17 Binary multiplication. The binary multiplication table is the AND function of two binary variables. The process of multiplication consists of merely replicating and adding the multiplicand, as shown, if a 1 is found in the multiplier. If 0 is found, a single 0 is entered and the next position to the left in the multiplier is taken up.

a fashion analogous to that in decimal. If a 1 is found in the next higher-order column of the minuend, it is borrowed, leaving a 0; if a 0 is found there, an attempt is made to borrow from the next higher-order position, and so forth.

An example of binary division is

11110 ÷ 101 = 110    (in decimal: 30 ÷ 5 = 6)

The procedure is as follows:

1. Compare the divisor with the leftmost bits of the dividend.
2. If the divisor is greater, enter a 0 in the quotient and shift the dividend and quotient to the left.
3. Try the subtraction again.
4. When the subtraction yields a positive result, i.e., the divisor is less than the bits in the dividend, enter a 1 in the quotient and shift the dividend and the quotient left one position.
5. Return to step 1 and repeat.

Binary division, like binary multiplication, is considerably simpler than the decimal operation.
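A minimal C rendering of this restoring-division procedure, using ordinary unsigned integers rather than machine registers, might look as follows; the 32-bit width and the function name are illustrative choices:

#include <stdio.h>

unsigned divide(unsigned dividend, unsigned divisor, unsigned *remainder)
{
    unsigned quotient = 0, partial = 0;

    for (int i = 31; i >= 0; i--) {
        partial = (partial << 1) | ((dividend >> i) & 1); /* bring down one bit */
        quotient <<= 1;
        if (partial >= divisor) {     /* trial subtraction succeeds        */
            partial -= divisor;
            quotient |= 1;            /* enter a 1 in the quotient         */
        }                             /* otherwise a 0 was entered above   */
    }
    *remainder = partial;
    return quotient;
}

int main(void)
{
    unsigned r, q = divide(30, 5, &r);    /* 11110 / 101 from the example */
    printf("%u remainder %u\n", q, r);    /* prints: 6 remainder 0        */
    return 0;
}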

FIGURE 18.1.18 Binary subtraction and corresponding decimal subtraction. The subtraction table is the same as the addition table. The borrow operation is handled analogously to decimal subtraction.


Subtraction by Complement Addition

If subtraction were performed by the usual method of borrowing from the next higher order, a separate subtraction circuit would be required. Subtraction can be performed, however, by the method of adding complements (or adding 1's complements, as the method is also called). By this method the subtrahend, i.e., the number that is to be subtracted, is inverted, changing the 0s to 1s and the 1s to 0s. Then the inverted subtrahend is added to the minuend, i.e., the number that is to be subtracted from, and an additional 1 is added to find the difference. As an example, consider the subtraction 1101 - 1001. The subtrahend (1001) is first inverted to form the complement (0110). The difference is formed by adding the minuend and the complement of the subtrahend (plus 0001) as follows:

1101 - 1001 = 1101 + 0110 (complement) + 0001 = (1)0011 + 0001 = 0100

Note that in subtraction by complement addition a leading 1 (shown in parentheses) in the result must be suppressed, and the extra 1 must be added to obtain the final result. The result can be verified by observing that the decimal equivalent of the operation is 13 - 9 = 3 + 1 = 4.
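The worked example above can be traced in a few lines of C. The 4-bit width and the function name are our own illustrative choices:

#include <stdio.h>

unsigned sub4_by_complement(unsigned minuend, unsigned subtrahend)
{
    unsigned complement = (~subtrahend) & 0xF;      /* invert the subtrahend   */
    unsigned sum        = minuend + complement + 1; /* add, plus the extra 1   */
    return sum & 0xF;                               /* suppress the leading 1  */
}

int main(void)
{
    printf("%u\n", sub4_by_complement(0xD, 0x9));   /* 1101 - 1001: prints 4   */
    return 0;
}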
Floating-Point Numbers

In a computer having a fixed number of bits that define a word, the bits limit the maximum size of a numerical value. For example, if 40 bit positions are provided for a word, the maximum decimal number that can be represented is on the order of 1.1 × 10^12 (that is, 2^40). Though this number is large, it does not suffice for many applications, especially in science, where a greater range of magnitudes may be routinely encountered. To extend the range of values that can be handled, numbers are represented in floating-point notation.

In floating point the most significant digits of the number are written with an assumed radix point immediately to the left of the highest-order digit. This number is called the fraction. The intended position of the radix point is identified by a second number, called the characteristic, which is appended to the fraction: the characteristic denotes the number of positions that the assumed radix point must be shifted to produce the intended number. For example, the number 146.754 in floating point might be written 146754.03, where 146754 would be equivalent to 0.146754 and the 03 would denote a shift of the radix point three places to the right. In binary notation the number 11011.011 (27.375 in decimal) might be represented in floating point as 11011011.101, with the fraction again to the left of the point and the characteristic to the right.

With floating-point addition and subtraction, a shift register is required to align the radix points of the numbers. To perform multiplication or division, the fraction fields are appropriately multiplied or divided and the characteristics are added or subtracted, respectively. As with fixed-point addition or subtraction, provision is usually made to detect an overflow condition in the characteristic field. In some systems provision is made to note when an addition or subtraction occurs with such widely differing characteristics that justification destroys one of the two numbers (by shifting it out the end of a shift register).
Numeric and Alphanumeric Codes

The numeric codes used to represent numerical values, discussed above, include the hexadecimal, octal, binary, and decimal codes. In many applications the need arises for coding nonnumeric as well as numeric information, and such coding must still use the binary scheme. A code embracing numbers, alphabetic characters, and special symbols is known as an alphanumeric code. A widely used code with its roots in the past is the telegraph (Baudot) code. Other alphanumeric codes have been devised for special purposes. One of the most significant of these, because of its present use and its contribution to the design of other codes, is the Hollerith code, developed in the 1890s. Hollerith's equipment contributed to the development of the electromechanical accounting machines that provided the foundation for electronic computers.

Another code of importance in the United States is the American Standard Code for Information Interchange (ASCII) (see Fig. 18.1.19). This code, developed by a committee of the American National Standards Institute (ANSI), has the advantage over most other codes of being contiguous, in the sense that the binary combinations used to represent the alphabet are sequential. Hence alphabetic sorting can be accomplished by simple arithmetic manipulation of the code values. Codes used for data transmission generally have both data characters and control characters; the latter perform control functions on the machine receiving the information. In more sophisticated codes, such as ASCII, these control functions are greatly extended and hence are applicable to machines of different design.


FIGURE 18.1.19 The ASCII code has a contiguous alphabet, so that numeric ordering permits alphabetic sorting.

Other Numeric Codes. Not all numeric information is represented by binary numbers. Other codes are also used for numeric information in special applications. Figure 18.1.20 shows a widely used code called the reflected, or Gray, code. It has the property that only one bit changes between any two successive values, irrespective of number size. This code is used in analog-to-digital encoding systems, since there is no need for the propagation of carries in sequential counting, as there is in a binary code.
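The reflected code can be generated from a binary count with a single EXCLUSIVE-OR, a standard identity illustrated by the following C sketch:

#include <stdio.h>

unsigned binary_to_gray(unsigned b)
{
    return b ^ (b >> 1);       /* adjacent counts differ in exactly one bit */
}

int main(void)
{
    for (unsigned i = 0; i < 8; i++) {
        unsigned g = binary_to_gray(i);
        /* print the 3-bit Gray pattern for each count 0..7 */
        printf("%u -> %u%u%u\n", i, (g >> 2) & 1, (g >> 1) & 1, g & 1);
    }
    return 0;
}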


FIGURE 18.1.20 The Gray code, used in analog-to-digital encoding systems. There is only a 1-bit change between any two successive integers.

FIGURE 18.1.21 Two-dimensional parity checking, in which a single error can be corrected and a triple error detected.

Error Detection and Correction Codes

The integrity of data in a computer system is of paramount importance, because the serial nature of computation tends to propagate errors. Internal data transmission between computer-system units takes place repeatedly and at high speed. Data may also be sent over wires to remote terminals, printers, and other such equipment. Because imperfections in transmission channels inevitably produce some erroneous data, means must be provided to detect and correct errors whenever they occur.

A basic procedure for error detection and correction is to design a code in which each word contains more bits than are needed to represent all the symbols of the data set. If a bit sequence is found that is not among those assigned to the data symbols, an error is known to have occurred. One commonly used error-detection code of this kind is the parity check. Suppose that 8 bits are used to represent data and that an additional bit is reserved as a check bit. A simple electronic circuit can determine whether an even or an odd number of 1 bits is included in the eight data positions. If an even number exists, a 1 is inserted in the check position; if an odd number of 1s exists, the check position contains a 0. As a result, all code words must contain an odd number of 1 bits, and if a 9-bit sequence is found to contain an even number of 1s, an error can be presumed.
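A brief C sketch of this odd-parity scheme follows; the widths match the 8-data-bit, 1-check-bit example above, and the function names are ours:

#include <stdio.h>

int ones_count(unsigned v)             /* number of 1 bits in v */
{
    int n = 0;
    while (v) { n += v & 1; v >>= 1; }
    return n;
}

/* Returns a 9-bit word: data in the low 8 bits, check bit in bit 8.
   The check bit is chosen so the word holds an odd number of 1s.   */
unsigned add_odd_parity(unsigned char data)
{
    unsigned check = (ones_count(data) % 2 == 0) ? 1 : 0;
    return (check << 8) | data;
}

int parity_error(unsigned word9)       /* an even count signals an error */
{
    return ones_count(word9 & 0x1FF) % 2 == 0;
}

int main(void)
{
    unsigned w = add_odd_parity(0xA5);
    printf("error before: %d, after flipping a bit: %d\n",
           parity_error(w), parity_error(w ^ 0x10));  /* prints 0, then 1 */
    return 0;
}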


FIGURE 18.1.22 A Hamming code. The parity bit in column 1 checks parity in columns 1, 3, 5, and 7; the bit in column 2 checks 2, 3, 6, and 7; and the bit in column 4 checks 4, 5, 6, and 7. The overlapping structure of the code permits the correction of a single error or the detection of single or double errors in any code word.


There are limitations to the simple parity check as a mechanism for error detection, since in many transmission channels and storage systems a failure tends to produce simultaneous errors in two adjacent positions. Such an error would not be detected, since the parity of the code word would remain unchanged. To increase the power of the parity check, a list of code words can be arranged in a two-dimensional array, as shown in Fig. 18.1.21. The code words in the horizontal dimension have a parity bit added, and the list in the vertical dimension also has an added parity bit in each column. If one bit is in error, parity errors appear in both its row and its column. If simultaneous errors occur in two adjacent positions of a code word, no parity error shows up in that row, but the column checks detect both errors. This code can detect any 3-bit error.

It is also possible to design codes that can detect directly whether errors have occurred in two bit positions of a single code word. Figure 18.1.22 shows such a code, an example of a Hamming code. The code positions in columns 1, 2, and 4 are used to check the parity of the respective overlapping bit combinations. Two code words of a Hamming code must differ in three or more bit positions, and therefore any 2-bit error pattern can be detected. The pattern formed by the particular parity bits that show errors indicates which bit is in error in the case of a single-bit failure. In general, if two code words must differ in D or more bit positions, the code can detect up to D - 1 bit errors. For D = 2t + 1, the code can detect 2t bit errors or correct t bit errors.
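The code of Fig. 18.1.22 can be exercised in a few lines of C. The sketch below assumes even parity and the position numbering of the figure (data bits in positions 3, 5, 6, and 7); reading the failing checks as a binary number locates a single error:

#include <stdio.h>

/* bit[i] holds the code-word bit at position i (index 0 unused) */
void encode(const int d[4], int bit[8])
{
    bit[3] = d[0]; bit[5] = d[1]; bit[6] = d[2]; bit[7] = d[3];
    bit[1] = bit[3] ^ bit[5] ^ bit[7];   /* checks positions 1,3,5,7 */
    bit[2] = bit[3] ^ bit[6] ^ bit[7];   /* checks positions 2,3,6,7 */
    bit[4] = bit[5] ^ bit[6] ^ bit[7];   /* checks positions 4,5,6,7 */
}

/* Returns the position of a single-bit error, or 0 if all checks pass. */
int syndrome(const int bit[8])
{
    int s1 = bit[1] ^ bit[3] ^ bit[5] ^ bit[7];
    int s2 = bit[2] ^ bit[3] ^ bit[6] ^ bit[7];
    int s4 = bit[4] ^ bit[5] ^ bit[6] ^ bit[7];
    return s4 * 4 + s2 * 2 + s1;         /* failing checks point at the bad bit */
}

int main(void)
{
    int d[4] = {1, 0, 1, 1}, bit[8];
    encode(d, bit);
    bit[6] ^= 1;                                       /* inject an error  */
    printf("error at position %d\n", syndrome(bit));   /* prints 6         */
    return 0;
}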

COMPUTER ORGANIZATION AND ARCHITECTURE

Introduction

Over the past 40 years great progress has been made as component technology has moved from vacuum tubes to solid-state devices to large-scale integration (LSI). This has been achieved as a result of increased understanding of semiconductor materials along with improvements in the fabrication processes. The result has been significant enhancement in the performance of the logic and memory components used in computer construction, along with significant reductions in cost and size. Figure 18.1.23, for example, indicates the reduction in the volume of main memory that has occurred in the last 40 years: the volume required to store 1 million characters has been reduced by a factor of 3 million in that period. Similarly, during that same period, the system cost to execute a specific mix of instructions has decreased by a factor of 5000. Integrated-circuit manufacturers are able to incorporate millions of transistors in the area of a square inch. LSI and VLSI have led to small, lower-cost, large-memory, ultrafast computers ranging from the familiar PC to high-performance, high-priced supercomputers. There is no reason to expect that this progress will not continue into the future as new technology improvements occur.

These advances in component technology have had a major impact on computer organization and its realization. Functions and features that were too expensive to be included in earlier designs are now feasible, and the trade-offs between software and hardware need to be reevaluated as hardware costs continue to decrease. New approaches to computer organization must also be considered as technology continues to improve. The advances in component technology have likewise had a major impact on such aspects of computer realization as packaging, cooling, and power.

Basic Computer Organization

The basic organization of a digital computer is shown in the block diagram of Fig. 18.1.24. This structure was proposed in 1946 by von Neumann.


FIGURE 18.1.23 The reduction by a factor of 6400 in memory size from the first- to the fourth-generation families of IBM computers.

It is a tribute to his genius that this design, which was intended for use in solving differential equations, has also proved applicable to other types of problems in such diverse areas as business data processing and real-time control. Von Neumann recognized the value of maintaining both data and computer instructions in storage and of being able to modify instructions as well as data. He recognized the importance of branch, or jump, instructions to alter the sequence of control of computer execution. His contributions were so significant that the vast majority of computers in use today are based on his design and are called von Neumann computers.

The four basic elements of the digital computer are its main storage, control unit, arithmetic-logic unit (ALU), and input/output (I/O). These elements are interconnected as shown in Fig. 18.1.24. The ALU, or processor, when combined with the control unit, is referred to as the central processing unit (CPU). Main storage provides the computer with directly addressable, fast-access storage of data. The storage unit stores programs as well as input, output, and intermediate data; both data and programs must be loaded into main storage from input devices before they can be processed. The control unit is the controlling center of the computer. It supervises the flow of information between the various units, contains the sequencing and processing controls for instruction decoding and execution and for handling interrupts, controls the timing of the computer, and provides other system-related functions. The ALU carries out the processing tasks specified by the instruction set, performing various arithmetic operations as well as logical operations and other data-processing tasks. Input/output devices, which permit the computer to interact with users and the external world, include such equipment as card readers and punches, magnetic-tape units, disc storage units, display devices, keyboard terminals, printers, teleprocessing devices, and sensor-based devices.

FIGURE 18.1.24 Block diagram of a digital computer illustrating the main elements of the von Neumann architecture.


FIGURE 18.1.25 Basic structure of a digital computer showing a typical register organization and interconnections.

Detailed Computer Organization

The block diagram of Fig. 18.1.25 provides an overview of the basic structure of the digital computer. Computer systems are complex, however, and block diagrams cannot describe the computer in sufficient detail for most purposes; one is therefore forced to go to lower levels of description. There are at least five levels (Ref. 66a) that can describe the implementation of a computer system:

1. Processor-memory-switch (block-diagram) level
2. Programming level (including the operating system)
3. Register-transfer level
4. Switching-circuits level
5. Circuit, or realization, level

Each of these levels is an abstraction of the levels beneath it. A number of computer-hardware description languages have been developed to represent the components used at each level, along with their modes of combination and their behavior.

A register is a device capable of receiving information, holding it, and transferring it as directed by control circuits. The actual realization of registers can take a number of forms, depending on the technology used. Registers store data temporarily during a program's execution, and some registers are accessible to the user through instructions. Registers are found in every element of the computer system. They are an integral part of main storage, where storage registers hold the information being transferred from memory (read) or into memory (write), and storage address registers (SAR) hold the address of the location in storage involved in the transfer. In the control unit, the instruction (or program) counter contains the storage address of the next instruction to be executed, while the instruction register holds the instruction being decoded and executed. In the ALU, internal registers hold the operands and partial results while arithmetic and logical operations are performed. Other ALU registers, called general-purpose registers, are used to accumulate
the results of arithmetic operations but can also be used for other purposes, such as indexing, addressing, counting loop iterations, or subroutine linkage. In addition, floating-point registers may be provided to hold the operands and accumulate the results of arithmetic operations on floating-point numbers. Of all the registers mentioned, only the general-purpose and floating-point registers are accessible to program control and to the programmer. Figure 18.1.25 shows the primary registers and their interconnections in the basic digital computer.

At the register-transfer level of abstraction one can describe a digital computer as a collection of registers between which data can be transferred, with logical operations applied to the data during the transfers. The sequencing and timing of the transfers are scheduled and controlled by logic circuits in the control unit.

The data transferred between registers within the computer consist of groups of binary digits. The number of bits in a group is determined by the computer architecture, and in particular by the organization of its main storage. Main storage is structured into segments, called bytes, and each storage location is uniquely identified by a number, called its address, assigned to it. A byte consists of 8 binary bits and is the standard unit for describing memory elements. Many computers read groups of bytes (2, 4, 6, or 8) from memory in one access; such a group of bytes is referred to as a word, which can vary in length from one computer to another. A memory access is a sequence that reads data from memory or stores data into it.

The 1s and 0s can be interpreted in various ways. A bit pattern can be interpreted as (1) a pure binary word, (2) a signed binary number, (3) a floating-point number, (4) a binary-coded decimal number, (5) data characters, or (6) an instruction word. In a signed binary number the high-order (leftmost) bit indicates the sign of the number: if the bit is 0, the number is positive; a 1 indicates that it is negative. Thus

0bbbbbbb represents a positive 7-bit number
1bbbbbbb represents a negative 7-bit number

A negative number is carried in 2's-complement (inverted) form; for example, 11111110_2 = -2_10.

A binary-coded decimal (BCD) code uses 4 bits to represent a decimal digit. It uses the binary digit combinations for 0 to 9; combinations greater than 9 are not allowed. Thus

0101_2 = 5_10        1010_2 = illegal

The sign of the decimal number can be indicated in several ways. One technique uses the low-order 4 bits to indicate the sign; for example,

bbbbbbbbbbbb1100 represents a positive 3-digit number
bbbbbbbbbbbb1011 represents a negative 3-digit number

Thus 0001010100111011_2 = -153_10.

For external communication, as well as text processing and other nonnumeric functions, the digital computer must be able to handle character sets. The byte has been accepted as the data unit for representing character codes. The two most common codes, described earlier, are the American Standard Code for Information Interchange (ASCII) and the Extended Binary-Coded Decimal Interchange Code (EBCDIC). The 16-bit word 1100011111010110 coded in EBCDIC represents the two-letter word "GO." (See Figs. 18.1.4, 18.1.5, and 18.1.9.)
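As a brief illustration, the following C sketch decodes a 16-bit field in the signed packed-decimal convention just described (three digits plus a low-order sign group); the function name and the fixed 16-bit width are our own assumptions:

#include <stdio.h>

int decode_signed_bcd(unsigned word16)
{
    unsigned sign_code = word16 & 0xF;                 /* low-order 4 bits   */
    int value = 0;
    for (int shift = 12; shift >= 4; shift -= 4)       /* three BCD digits   */
        value = value * 10 + ((word16 >> shift) & 0xF);
    return (sign_code == 0xB) ? -value : value;        /* 1011 means negative */
}

int main(void)
{
    /* 0001 0101 0011 1011 from the text: digits 1,5,3 and a negative sign */
    printf("%d\n", decode_signed_bcd(0x153B));         /* prints -153 */
    return 0;
}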

The instruction word is composed of two major parts, an operation part and an operand part. The length of the instruction word is determined by the computer architecture: some computers have a single format for their instruction words and thus a single length, whereas others have several formats and several different lengths. The operation part consists of the operation code, which specifies the particular operation to be performed by the computer as a result of executing that instruction. The operand part usually contains the addresses of the two operands involved in the operation. For example, the RR (register-to-register) instruction format in the System/370 is

Op Code      Reg 1     Reg 2
bbbbbbbb     bbbb      bbbb
0        7   8    11   12   15

The instruction 1AB4 (in hexadecimal), for example, instructs the computer to add the contents of register 11 to the contents of register 4 and to put the resulting sum in register 11, replacing its original contents. Most computers use the two-address instruction, with three sections: the first section is the op code, and the second and third sections each contain the address of an operand. Different computers use these addresses differently; in some cases an operand section is used as a modifier or to extend the instruction length. A single-address computer has two sections, an op code and an operand (usually a memory address), with the accumulator as the implied source or destination.

Two facts should be noted. First, the discussion thus far may have implied that digital computers can deal only with fixed-length words. That is true for some computers, but other families of computers can also deal with variable-length words. For these, the operand part of the instruction contains the address of the first digit or character of each variable-length word plus a measure of its length, i.e., the number of characters it contains. The second fact is that it is impossible to distinguish between the various data representations once they are stored. For example, there is nothing to indicate whether a word of memory contains a binary number or a binary-coded decimal (BCD) number. Programmers must make the distinction in the programs they develop and not attempt meaningless operations, such as adding a binary number to a decimal number. The only way the computer distinguishes an instruction word from other data words is by the time at which, as discussed in the next section, it is read from storage into the control unit.
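Extracting the three fields of the RR format is a matter of shifting and masking, as in the following C sketch (the function name is ours):

#include <stdio.h>

void decode_rr(unsigned instr16)
{
    unsigned op = (instr16 >> 8) & 0xFF;   /* bits 0-7: operation code */
    unsigned r1 = (instr16 >> 4) & 0xF;    /* bits 8-11: first register */
    unsigned r2 = instr16 & 0xF;           /* bits 12-15: second register */
    printf("op=%02X r1=%u r2=%u\n", op, r1, r2);
}

int main(void)
{
    decode_rr(0x1AB4);     /* prints: op=1A r1=11 r2=4 */
    return 0;
}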

Instruction Execution

The digital computer operates in a cyclic fashion. Each cycle is called a machine cycle and consists of two main subcycles, the instruction (I) cycle (sometimes called the fetch cycle) and the execution (E) cycle. During the machine cycle the following basic steps occur in sequence (see Fig. 18.1.25):

1. The cycle begins with the I cycle:
   a. The contents of the instruction counter are transferred to the storage address register (SAR). (The instruction counter holds the address of the next instruction to be executed.)
   b. The specified word is transferred from storage to the instruction register. (The control unit assumes that this storage word is an instruction.)
   c. The contents of the instruction register are decoded by logical circuits in the control unit. This identifies the type of operation to be performed and the locations of the operands to be used in the operation.
2. At this point the E cycle begins:
   a. The specified computer operation is performed using the designated operands, and the result is transferred to the location indicated by the instruction.
   b. The instruction counter is advanced to the address of the next instruction in the sequence. (If a branch, or change in execution-control sequence, is to occur, the contents of the instruction counter are replaced by an address as directed by the instruction currently being executed.)
3. The I cycle is then repeated.

To indicate in more specific terms what happens in the CPU during instruction execution, it is necessary to go to the switching level of description. The following paragraphs describe the operations of the ALU and the control section in more detail.
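The I/E cycle can also be mimicked in software. The following C sketch runs a toy accumulator machine through the fetch, advance, decode, and execute steps above; the op codes, word layout, and memory size are entirely our own illustrative assumptions, not any real instruction set:

#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_BRANCH = 3 };

int main(void)
{
    /* high byte = op code, low byte = operand address */
    unsigned memory[256] = {
        [0]   = (OP_LOAD << 8) | 100,
        [1]   = (OP_ADD  << 8) | 101,
        [2]   = (OP_HALT << 8),
        [100] = 9, [101] = 11,
    };
    unsigned counter = 0, accumulator = 0;

    for (;;) {
        unsigned instruction = memory[counter++];  /* I cycle: fetch, advance */
        unsigned op   = instruction >> 8;          /* decode                  */
        unsigned addr = instruction & 0xFF;
        switch (op) {                              /* E cycle: execute        */
        case OP_LOAD:   accumulator = memory[addr];  break;
        case OP_ADD:    accumulator += memory[addr]; break;
        case OP_BRANCH: counter = addr;              break; /* replace counter */
        default:        printf("acc=%u\n", accumulator);    /* prints acc=20  */
                        return 0;
        }
    }
}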


Arithmetic Logic Unit

The ALU performs arithmetic and logical operations between two operands, such as OR, AND, EXCLUSIVE-OR, ADD, SUBTRACT, MULTIPLY, and DIVIDE. The unit may also perform operations, such as INVERT, on a single operand, and it can test for minus or zero and form a complement. Adders and multipliers are at the heart of the ALU. In Fig. 18.1.26 one bit position of an ALU is shown as part of a wider data path. One latch (part of a register A) feeds an AND circuit that is conditioned by CONTROL A; the output feeds input A of the adder circuits. One latch of register B is similarly ANDed with CONTROL B and feeds the other input of the adder. A true-complement circuit is shown on the B line; this circuit has to do with subtraction and can be assumed to be a direct connection when adding. Each adder stage is a combinatorial circuit that accepts a carry from the stage representing the next lower-order digit of the binary number (assumed to be on the right). The collection of outputs from all adder stages is the sum, which is ANDed into register D by CONTROL D.

FIGURE 18.1.26 Basic addition logic. The heart of the ALU is the adder circuit shown in functional form in (a). The control section applies the appropriate time sequence of pulses (b) on the control lines to perform addition. Heavy lines indicate a repeat of circuitry in each bit position to form a machine adder.


All bit positions of each of the registers are gated by a single control line. If the gate is closed (control equal to 0), all outputs are 0s. If the gate is open (control equal to 1), the bit pattern appearing in the register is transmitted through the series of AND circuits to the input of the adders. Thus, a gate is a two-way AND circuit for each bit position. The diagram of Fig. 18.1.26 illustrates all positions of an n-position adder since all positions are identical. In such a case heavy lines, as shown, indicate that this one line represents a line in each bit position.

Binary Addition

At the outset of an addition it is assumed that registers A and B (Fig. 18.1.26a) contain the addends. An addition is performed by pulsing the control lines with signals originating in the control section of the CPU. Time is assumed to be metered into fixed intervals by an oscillator (clock) in the control section, and these time slots are numbered for easier identification (Fig. 18.1.26b). At time 1 the inputs are gated to the adder, and the adders begin to compute the sum; at the same time, register D is reset (all latches set to 0). At time 2 the outputs of the adders have reached steady state, and control line D is raised, permitting those bit positions for which the sum is 1 to set the corresponding latches in register D. Between times 2 and 3 the result is latched up in register D, and at time 3 control D is lowered. Only after the result is locked into D, where it cannot change, may controls A and B be lowered; if they were lowered earlier, the change might propagate through the adder and produce an incorrect answer.

The length of the pulses depends on the circuits used. The times from 2 to 3 and from 3 to 4 are usually equal to a few logic delays (the time to propagate through an AND, OR, or INVERTER). The time from 1 to 2 depends on the length of the adder and is proportional to the number of positions in a parallel adder because of potential carry-propagation times. This delay can be reduced by carry look-ahead (sometimes called carry bypass, or carry anticipation).
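The carry-propagation cost discussed here can be seen in a bit-by-bit software model of the parallel adder, sketched below in C; each loop iteration plays the role of one adder stage, with the carry rippling toward the high-order end:

#include <stdio.h>

unsigned ripple_add(unsigned a, unsigned b, int bits)
{
    unsigned sum = 0, carry = 0;
    for (int i = 0; i < bits; i++) {                 /* low-order stage first */
        unsigned ai = (a >> i) & 1, bi = (b >> i) & 1;
        sum  |= (ai ^ bi ^ carry) << i;              /* sum output of stage i */
        carry = (ai & bi) | (ai & carry) | (bi & carry);  /* carry to stage i+1 */
    }
    return sum;       /* carry out of the top stage is dropped here */
}

int main(void)
{
    printf("%u\n", ripple_add(9, 11, 8));   /* 1001 + 1011: prints 20 */
    return 0;
}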

Binary Subtraction

Subtraction can be accomplished using the operation of addition, by forming the complement of a number; negative numbers are represented throughout the system in complement form. To subtract a number B from a number A, a set of logic elements is placed in the line feeding input B in Fig. 18.1.26a. Using 2's complement, the sign of a number is changed by complementing each bit and adding 1 to the result. The inversion of the bits is performed by the logic element interposed on the input B line in Fig. 18.1.26a, known as a true-complement (T/C) gate. This unit gates the unmodified bit if the control is 0 and inverts the bit if the control is 1. The boolean equation for the output of the T/C gate is

Output = (T/C) · B' + (T/C)' · B

The T/C gate is thus a series of EXCLUSIVE-ORs with one leg, common to all bit positions, connected to the T/C control line; the other leg of each EXCLUSIVE-OR is connected to one bit of the register containing the number to be complemented. The T/C gate produces the 1's complement; a 1 must be added in the low-order bit position to produce the true complement. The low stage of an adder may be designed with a carry-in input that accommodates this 1 bit automatically. Such a logical interconnection, called an end-around carry, accomplishes the required 1 input for a true-complement system when a positive number B is subtracted from a positive number. Consistency of operation is obtained by entering the carry from the appropriate high-order position into the low-order carry position.
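In C the same arrangement reduces to a bitwise inversion feeding an add with a carry-in of 1, as in this sketch (the widths and function names are illustrative):

#include <stdio.h>

unsigned tc_gate(unsigned b, int tc_control, int bits)
{
    unsigned mask = (1u << bits) - 1;
    return tc_control ? (~b & mask) : (b & mask);   /* invert B or pass it */
}

unsigned subtract(unsigned a, unsigned b, int bits)
{
    unsigned mask = (1u << bits) - 1;
    /* add A to the complemented B with a low-order carry-in of 1 */
    return (a + tc_gate(b, 1, bits) + 1) & mask;
}

int main(void)
{
    printf("%u\n", subtract(13, 9, 4));    /* prints 4 */
    return 0;
}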

Decimal Addition

In some systems the internal organization of the computer is such that decimal representations are used in arithmetic operations. In BCD, a conventional binary adder can be used to add decimal digits with a small amount of additional hardware. Adding two 4-bit binary numbers produces a 4- or 5-bit binary result.


FIGURE 18.1.27 Binary adder with decimal-add feature. If the output of the binary add is 1010 or greater, the addition of a binary 6 produces the correct result by modulo-10 addition.

When two BCD numbers are added, the result is correct if it lies in the range 0 to 9. If the result is greater than 9, that is, if the resulting bit pattern is 1010 to 10010, the answer must be adjusted by adding 6 to that group of 4 bits, this number being the difference between the desired base (10) and the actual base (2^4 = 16). The binary carry from a block of 4 bits must also be adjusted to generate the appropriate decimal carry. The circuits that accomplish decimal addition are shown in Fig. 18.1.27: a test circuit generates an output if the binary sum is 1010 or greater; this output causes a 6 to be added into the sum and is also ORed with the original binary carry to produce a decimal carry. The added circuits needed to perform decimal addition with a binary adder represent one-half to two-thirds of the circuits of the original binary adder.
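The add-6 correction of Fig. 18.1.27 is captured by the following C sketch of a one-digit BCD adder (the function name is ours):

#include <stdio.h>

/* Add two BCD digits plus a carry-in; returns the corrected digit,
   with the decimal carry delivered through *carry_out.             */
unsigned bcd_digit_add(unsigned a, unsigned b, unsigned carry_in,
                       unsigned *carry_out)
{
    unsigned s = a + b + carry_in;       /* ordinary binary addition   */
    if (s > 9) {                         /* pattern 1010 or greater    */
        s += 6;                          /* modulo-10 correction       */
        *carry_out = 1;                  /* generate the decimal carry */
    } else {
        *carry_out = 0;
    }
    return s & 0xF;                      /* keep the 4-bit digit group */
}

int main(void)
{
    unsigned carry, d = bcd_digit_add(7, 5, 0, &carry);
    printf("digit=%u carry=%u\n", d, carry);   /* 7+5=12: digit=2 carry=1 */
    return 0;
}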

Decimal Subtraction

In most computers that provide for decimal operation, decimal numbers are stored with a sign and magnitude. To perform subtraction, the true complement is formed by subtracting each decimal digit of the number from 9 and adding back 1. Once the complement is formed, addition produces the desired difference. In a machine that provides for decimal arithmetic, a decimal true-complement switch may be incorporated in each group of four BCD bits to form the complement. As with binary, provision must be made for the addition of an appropriate low-order digit and for the occurrence of overflows.

FIGURE 18.1.28 Shift gates. In many systems shifting is accomplished in conjunction with the output of the adder, i.e., position shifts can be accomplished with only one circuit delay.

Shifting

All computers have shift instructions in their instruction repertoire; shifting is required, for example, in multiply and divide operations. The minimum is a shift of one position left or right, but most computers have circuits permitting a shift of one or more bits at a time, e.g., 1, 4, or 8. In shifting, a question arises about the bit(s) shifted out at the end of the register and the new (open) bit position(s). Those shifted out at one end can be inserted at the other end (referred to as an end-around shift, or circulate), or they can be discarded or placed in a special register. The newly created vacancies can be filled with all 0s, with all 1s, or from another special register.

In a typical computer, shifting is not performed in a shift register but with a set of gates that follow the adders. The outputs of the shift gates are connected to the output register with an offset, a separate gate being provided for each distance and direction of shift. One bit position of the output register can therefore be fed from several adder outputs, but only one gate is opened at a time, as in Fig. 18.1.28. The pattern shown is repeated for every bit position.

Multiplication

Figure 18.1.29 shows a possible data-flow system for multiplication, i.e., the data flow of the adder of Fig. 18.1.26 with the addition of a register C to hold the multiplier and the shift gates of Fig. 18.1.28. An extra register E holds any extra bits that are generated in the process of multiplication; in the particular system shown, register E also receives the contents of C after transmission through C's shift register.

Computers also have an extension of the shift instruction called rotate. A shift occurs as usual, but the bit leaving one end of the register in a shift right or left is reentered at the opposite end. This feature can be used for bit detection without losing data.

The process of binary multiplication involves decoding successive bits of the multiplier, starting with its lowest-order position. If the bit is a 1, the multiplicand is added into an accumulating sum; if it is 0, no addition takes place. In either case the sum is moved one position to the right and the next higher-order bit of the multiplier is considered. In Fig. 18.1.29 the multiplier is stored in register C, the multiplicand in A, and all other registers are reset to zero. If the low-order position of C is a 1, the contents of A and B are added and shifted one position to the right into register D. After the addition, register C is shifted one position to the right and stored in register E. If the low-order bit of C had been a zero, only register B would have been gated into the adders, i.e., the addition of the contents of A would have been suppressed, but the subsequent operations would have remained the same.


FIGURE 18.1.29 Data flow for multiplication.

Each add-and-shift operation subsequent to an add may generate low-order bits that are shifted into register E, since, as the contents of C are shifted to the right, unused positions become successively available. After the add and shift cycles, registers D and E are transferred into B and C, respectively, and the process is repeated until all positions of the multiplier have been used. The contents of D and E then form the product.
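The add-and-shift sequence above can be traced with a short C model in which a single double-length variable stands in for the register pair D and E:

#include <stdio.h>

unsigned long multiply(unsigned multiplicand, unsigned multiplier, int bits)
{
    unsigned long product = 0;      /* plays the role of registers D and E */
    for (int i = 0; i < bits; i++) {
        if (multiplier & 1)         /* low-order multiplier bit a 1?       */
            product += (unsigned long)multiplicand << bits; /* add into the
                                       high half of the double-length sum  */
        product    >>= 1;           /* shift the sum one position right    */
        multiplier >>= 1;           /* expose the next multiplier bit      */
    }
    return product;
}

int main(void)
{
    printf("%lu\n", multiply(9, 11, 8));   /* 1001 x 1011: prints 99 */
    return 0;
}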

Division

To provide for the division of two numbers, the functions of the registers in Fig. 18.1.29 must be rearranged as shown in Fig. 18.1.30. A gating loop is provided from the shift register to register A. Initially the divisor is placed in register B and the dividend in A. The T/C gate is used to subtract B from A, and if the result is zero or positive, a 1 is placed in the low-order position of register E. If the result is negative, a 0 is placed in E and the input from B is ungated so that the contents of A are restored. The shift register then shifts the output of the adder one position and gates it back to A, while E is gated through C and shifted along in the same way. The whole process is repeated until the dividend is exhausted; E then contains the quotient and A the remainder.

Floating-Point Operations

In some applications it is convenient to represent numerical values in floating-point form. Such numbers consist of a fraction, which holds the number's most significant digits in one portion of the field of a machine word, and a characteristic in the remaining portion.


FIGURE 18.1.30 Data flow for division. B holds the divisor and A the dividend. A trial subtraction is made of the high-order bits of B from A, and if the results are 0 or positive, a 1 is entered into the lower-order bit of E and the result of the subtraction shifted left and reentered into A (with gate A closed). If the result had been negative the B gate would have closed to negate the subtraction, and a 0 would have been entered into E. The output of the adder would then have been shifted left one position and reentered in A.

The characteristic denotes the position of the radix point relative to the assumed radix point of the fraction field. In floating-point addition and subtraction, justification of the fractions according to the contents of the characteristic fields must take place before the operation is performed; i.e., the radix points must be lined up.

In an ALU such as that of Fig. 18.1.29, the operation proceeds in the following way. Two numbers A and B are placed in registers A and B, respectively. The control section then gates only the characteristic fields into the adder, in a subtract mode of operation, and the absolute difference is stored in an appropriate position of the control section. Controlled by the sign of the subtraction, the fraction of the smaller number is passed through the adder into the shift register. The control section then shifts that fraction the required number of positions, i.e., according to the stored difference of the characteristic fields, and places the result back in the appropriate register. Addition or subtraction can then proceed according to the machine instruction being executed.

This procedure is costly in machine time. Instead of using the ALU adder for the characteristic-field difference, provision can be made for subtracting the characteristics in the control section. In such a case only the fractions of A and B need be entered into registers A and B, and the smaller number can be placed in B so that shifting can be accomplished by the circuits normally used in multiplication.
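The characteristic comparison and fraction alignment can be sketched in C as follows; the toy format (an unsigned fraction scaled by a power of 2), the field names, and the omission of normalization are our own simplifying assumptions:

#include <stdio.h>

struct fp { unsigned fraction; int characteristic; };

struct fp fp_add(struct fp a, struct fp b)
{
    /* subtract the characteristics to find the required shift */
    int diff = a.characteristic - b.characteristic;
    if (diff < 0) {                  /* make a the larger-characteristic operand */
        struct fp t = a; a = b; b = t;
        diff = -diff;
    }
    b.fraction >>= diff;             /* shift the smaller number's fraction */
    a.fraction  += b.fraction;       /* the radix points now line up        */
    return a;                        /* normalization omitted in this sketch */
}

int main(void)
{
    struct fp a = { 216, 2 };        /* 216 * 2^2 = 864 */
    struct fp b = { 160, 0 };        /* 160 * 2^0 = 160 */
    struct fp s = fp_add(a, b);
    printf("%u * 2^%d\n", s.fraction, s.characteristic);  /* 256 * 2^2 = 1024 */
    return 0;
}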


Control Section

The control section is the part of a CPU that initiates, executes, and monitors the system's operations. It contains clocking circuits and timing rings for opening and closing gates in the ALU. It fetches instructions from main storage, controls the execution of these instructions, and maintains the address of the next instruction to be executed. The control section also initiates I/O operations.

Basic Timing Circuits

Basic to any control unit is a continuously running oscillator. The speed of this oscillator depends on the type of computer (parallel or serial), the speed of the logic circuits, the type of gating used (the type of registers), and the number of logic levels between registers. The oscillator pulses are usually grouped to form the basic operating cycle of the computer, referred to as the machine cycle. In this example four pulses are combined into such a group. In Fig. 18.1.31 an oscillator is shown driving a four-stage ring; at any one time, only one stage is on.

FIGURE 18.1.31 Oscillator and ring circuit. Many control functions can be performed by an oscillator in conjunction with a timing ring that sequentially sends signals on separate lines, in synchronism with the oscillator.

Suppose an addition is to be performed between registers A and B and the result is to be placed back in B. The addition circuitry described in Fig. 18.1.29 uses three registers: two for the addends (registers A and B) and one for temporary storage (register D). The operation to be performed is to add the contents of register A to the contents of register B and store the result in register B. Register D is required for temporary storage, since if B were connected back on itself an unstable situation would exist; i.e., its output (modified by A) would feed its input.

The operation of the four-stage clock ring in controlling the addition and transfer is shown in Fig. 18.1.32. Action is initiated by the coincidence of an ADD signal and clock pulse A, which starts the add ring. This latter ring has two stages and advances to the next stage upon occurrence of the next A clock pulse. The timing chart in Fig. 18.1.32 describes one sequence of the actions required and the gates needed for the addition, and the circuit diagram shows a realization of these gates. Each register is reset before it is reused, and all pulses are derived from the four basic clock pulses shown in Fig. 18.1.31. The add ring initiates the add by opening the gates between registers A and B and the adder. The add latch is then reset, D is reset, the ring stage is transferred, and so forth. An ADD-FINISHED signal is furnished and may be used elsewhere in the system.

In the timing diagram it is assumed that the time required for transmission through the adder is about one machine cycle and that one clock cycle is sufficient for a signal to propagate through the necessary gating and set the information into the target register. These times must include the delay in the circuits and any signal-propagation delay.

Control of Instruction Execution

The following approach to the design of a control section is straightforward but extravagant of hardware. For each instruction in the computer a timing diagram is developed, similar to the one shown for the add operation in the previous section, and these timings are implemented in rings that vary in length according to the complexity of the instruction. The concept is simple and has been widely used, but it is costly.
To reduce cost, rings are used repeatedly within one instruction and/or by several instructions. Subtraction, for example, might use the ADD ring, except for an extra latch that might be set at ring time 1, clock time A (denoted 1.A) and reset at 2.C. This new latch feeds the T/C gates. Another latch that might be added to denote decimal arithmetic would also be set at 1.A time and reset at 2.C time.



FIGURE 18.1.32 Control circuit (b) and timing chart (a) for addition. The timing ring shown in Fig. 18.1.31 is used to control the adder circuit shown in Fig. 18.1.29 with the help of additional switching circuits.

The addition of two latches and a few additional ANDs and ORs permits the elimination of three two-stage rings and associated logic. Further reductions in the number of required circuits can be achieved by considering the iterative nature of some instructions, such as multiplication, division, or multiple shifting.

Controls Using Counters (Multiplication)

To exemplify this approach, multiplication is considered. Multiplication can be implemented as many add-shift cycles, one per digit in the multiplier (say N). The control for such an instruction can be implemented using one 2N + 1 position ring, the first position initializing and the next 2N positions controlling the N add-shift cycles (two positions are needed per add cycle). Such an approach not only requires an unnecessarily long ring but is also relatively inflexible for other than N-bit multipliers.


FIGURE 18.1.33 Control circuit for multiplication. Repetitive operations such as add and shift are combined in cyclical fashion. The timing diagram of Fig. 18.1.32a is assumed.

The alternative approach, requiring considerably less hardware, uses the basic operation of multiplication, an add and a shift. Therefore a multiply can be implemented by using the controls for the add-shift instruction, plus a binary counter, plus some assorted logic gates to control the gates unique to the multiply, as in Fig. 18.1.33. In the figure some of the less important controls needed for multiply have been omitted to simplify the presentation. For example, the NO-SHIFT signal for add must be conditioned with a signal that multiply is not in progress; during multiplication an ADD-FINISHED signal should be ignored (and should not start an I cycle); the reset of B also resets C; the reset of D also resets E; the gating of D to B also gates E to C; the gating of A to the adder is conditioned on the last bit of C; and so forth. In Fig. 18.1.33 the action is started by raising the MPY line, which sets a multiply latch. This in turn sets the binary counter to the desired cycle count. Also set is the ADD-LATCH, and the addition cycles start. When the counter goes to 0, the ADD-LATCH is no longer set and the MULTIPLY-FINISHED signal is raised.
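The counter-controlled add-shift loop lends itself to a compact software model. The sketch below assumes the register roles suggested by the text (B multiplicand, D multiplier, C partial product); it illustrates counter-controlled multiplication, not the precise gating of Fig. 18.1.33.

```python
# Counter-controlled binary multiplication by repeated add-shift cycles.
# Register names follow the text's convention; the bit manipulation is
# an illustrative assumption.

def multiply(b: int, d: int, n_bits: int) -> int:
    """Return b * d as a double-length product via n_bits add-shift cycles."""
    c = 0                                      # high half of the product
    counter = n_bits                           # set when the MPY latch is raised
    while counter:
        if d & 1:                              # last bit of D selects add or shift-only
            c += b                             # add multiplicand into partial product
        d = (d >> 1) | ((c & 1) << (n_bits - 1))   # low bit of C shifts into D
        c >>= 1                                # shift partial product right one place
        counter -= 1                           # counter at 0 raises MULTIPLY-FINISHED
    return (c << n_bits) | d                   # C:D holds the double-length result

assert multiply(6, 7, 4) == 42
```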

Microprogramming

The control of the E cycle, in the preceding descriptions, is performed by a set of sequential circuits designed into the machine. Once an execution is initiated, the clocking circuits complete the operation through wired paths in a specific manner.


An alternative method of design for a control unit is microprogramming. The concept is not sharply defined but has as its objective implementation of the control section at low cost and with high flexibility. In many cases it is desirable to design compatible computers, i.e., units with the same instruction set but with widely varying performance and cost. To provide a slower version at a lower cost, the width of the data path is reduced to lower circuit counts in the ALU. On the other hand, to operate with the same instruction set, the reduced data-path width usually implies more iterative operations in the control section, at added cost. Considerable investment is required for programming development, and normally such programs run only on computers with identical instruction sets. If appropriate flexibility is provided, however, one computer can mimic the instruction set of another. Microprogramming provides for such operations. The process by which one system mimics another is called emulation.

In a microprogrammed control unit, control lines are activated not by logic gates in conjunction with counters but by words in a storage system that are translated directly into electric signals. The words selected are under the control of the instruction decoder, but the sequence of words may be controlled by the words themselves through provision of a field that does not directly control gates but specifies the location of the next word. The name given to these control words is microinstruction, as opposed to machine instruction or instruction.

When two computers are designed for the same instruction set but with different data-path widths, the microinstruction sets of the two computers are radically different. For the small computer, the microprogram for a given machine instruction is considerably longer than that for the large computer. The same machine instructions can be used in both computers, and the difference in control-system cost between the two is not large: although the microprogram is longer in the smaller computer, the difference is in the number of storage places provided, not in the control-section hardware.

The design of the sequence of microprogramming words is conceptually little different from other programming. The microprogram implementation, however, requires a thorough knowledge of the machine at the gate level. A microinstruction counter is used to remember the location of the next microinstruction, or a field can be provided to allow branching or specification of a next instruction. A microprogram generally resides not in main store but in a special unit called a control store. This may not be addressable by the user, or it may be a separate storage unit. In many cases, the microprogram is stored in a read-only store (ROS); i.e., it is written at the factory. ROS units are faster, may be cheaper per bit of storage, and do not require reloading when power is applied to the computer. Alternatively, the microprogram may be stored in a medium that can be written into, called a writable control store (WCS). By reloading the WCS, an entirely different macroinstruction set can be implemented, using the same microinstruction set in a different microprogram. By such means emulation is achieved at minimal expense.
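A control store with next-address fields can be modeled directly. In the sketch below, the word layout (a list of named control signals plus a next field) is an assumption made for illustration; the addresses are borrowed from the ADD microroutine of Table 18.1.2 below.

```python
# Minimal model of a microprogrammed control store: each control word
# names the gate-control lines it raises and carries the address of
# the next microinstruction, so no counter is needed to sequence it.

CONTROL_STORE = {
    51: {"signals": ["B to ADD", "C to ADD", "NO-SHIFT"], "next": 8},
    8:  {"signals": ["E to B"],                           "next": 9},
    9:  {"signals": ["A(2) to MS-ADR-REG", "B to MS"],    "next": 1},
    1:  {"signals": [],                                   "next": None},  # I-fetch
}

addr = 51                             # entry point chosen by the op-code decoder
while addr is not None:
    word = CONTROL_STORE[addr]
    for line in word["signals"]:      # each named signal stands for an electric
        print("raise:", line)         # level on one control line for one cycle
    addr = word["next"]               # the next-address field sequences the program
```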

CPU Microprogramming

Figure 18.1.34 shows an ALU and Fig. 18.1.35 a control section with microprogram organization. The microinstructions embodied in these two units are shown in Table 18.1.2. To simplify the program several provisions have been made:

1. Each microinstruction contains the control-store address of the next microinstruction. If omitted, the next address for one microinstruction is the current address of the one written just below it. It may not be next in numeric sequence.

2. An asterisk at the beginning of a line in a program indicates a comment. This means that the entire line contains information about the program and does not translate into a microinstruction.

3. To simplify the drawing, a gate in a path is indicated by placing an X in the line, and the control lines controlling these gates are omitted from the drawing. It is also assumed that where two lines join, OR circuits are implied.

4. Rather than listing a numeric value for each field, a shorthand description of the desired action is used. All actions not so described are assumed to be zero. For example, A to ADD implies that register A is gated to the adder, and T/C means raise the T/C gate. These changes do not in any way modify the concept of microprogramming but make the result more readable.


FIGURE 18.1.34 Microprogrammed ALU.

FIGURE 18.1.35 Microprogrammed control unit.


TABLE 18.1.2 Microinstructions Embodied in an ALU and a Control Section

Current address   Microinstruction                                                       Comment

                  *ADD
51                B to ADD, C to ADD, NO-SHIFT                                           Add two operands
8                 E to B                                                                 Store result
9                 A(2) to MS-ADR-REG, B to MS, GO TO 1                                   Branch to 1; next microinstruction to be
                                                                                         taken from control-store location 1

                  *SUBTRACT
52                B to ADD, C to ADD, NO-SHIFT, T/C, GO TO 8                             Go to 9, where result is stored

                  *BRANCH (unconditional)
53                A(3) to IC, GO TO 1                                                    This is the macroinstruction branch

                  *MULTIPLY
54                Set an N into counter, C to ADD, NO-SHIFT                              Initialize
17                E to D, set C to 0s
18                If last bit of D = 1, then (B to ADD), C to ADD, SHIFT-R 1,            Perform one add-shift if last bit of D = 1;
                  0 to input of high end of shifter, output of low end of shifter to F   only shift if last bit of D = 0
19                E to C, F to G, COUNT down by 1                                        Increment counter
20                D to ADD, SHIFT-R 1, G to input of high end of shifter                 Shift D
21                E to D; if counter is not 0, then GO TO 17                             Close loop
22                C to ADD                                                               Store result in two MS locations
23                E to B, force 1 into C
24                A(2) to ADD, C to ADD, NO-SHIFT; A(2) to MS-ADR-REG                    Store first half of result, increment
25                B to MS                                                                result address
26                E to A(2)
27                D to ADD, NO-SHIFT                                                     Store second half of result
28                E to B; A(2) to MS-ADR-REG, B to MS; GO TO 1                           Branch to I-fetch

                  *SHIFT LEFT
55                A(2) to COUNTER, C to ADD, NO-SHIFT                                    Operand 1 is number of bits operand 2
                                                                                         is to be shifted
10                If COUNTER = 0, then GO TO 9                                           Test if shift count was 0
11                E to B
12                B to ADD, SHIFT-L 1, COUNT down by 1; if COUNT not 0, then GO TO 11    Shift loop
13                E to B, GO TO 9                                                        Completed shift

NOTE 1: (Location 7) This special test places the op-code bit pattern into the low part of the address for the next instruction, causing a branch to the appropriate microroutine for each op code. Branches are as follows:

Op code   Instruction   Address
1         ADD           51
2         SUBTRACT      52
3         BRANCH        53
4         MULTIPLY      54
5         SHIFT LEFT    55

NOTE 2: (Location 1) It is assumed that the START button sets the microinstruction counter to 1.


Instruction FETCH

In the preceding paragraphs the execution of instructions (E cycles) is discussed. In these cases, operations are initiated by setting an appropriate latch for the function to be performed. The signals that set the latch are in turn generated by circuits that interpret the information of the operation-code part of an instruction during the instruction cycle (I cycle). Whenever an instruction has completed execution (or when the computer operator presses the START button on the console), a start-cycle signal is generated at the next A time of the master-clock timing ring. This signal starts the I-cycle ring. The first action of this ring is to fetch the instruction to be executed from the main store by gating the instruction counter (IC) (sometimes called the program counter) to the address lines of main storage and initiating a main-store cycle by pulsing a line called START MS. These operations are illustrated in Fig. 18.1.36. At ring time 2, the instruction arrives back and is placed in the instruction register (IR).

FIGURE 18.1.36 Implementation of equipment for an I cycle in a two-address machine. The instruction is first brought from main store and deposited in the instruction register. The operation proceeds by successively gating the information associated with two address fields into register A. The first is moved out of A, while the second is being sought from main store.


The instruction typically contains three main fields: the operation code (op code), which determines which instruction is to be executed (ADD, SUBTRACT, MULTIPLY, DIVIDE, BRANCH), and the two addresses of the two operands participating in the operation. For certain classes of instruction, the operands must be delivered to appropriate locations before the E cycle begins. During ring times 3 and 4, the first operand is fetched from main store and stored in A. During ring time 5, this operand is transferred from A to an alternate location, depending on the nature of the instruction being executed. During ring times 5 and 6, the second operand is fetched and stored in A. Ring time 6 is also used to gate the op code to the instruction decoder, which is a combinatorial logic circuit accepting the operation code, or instruction code, from the P bits in the op code. The decoder has 2^P output wires, one for each unique input combination. Thus for each bit combination entering the decoder, one output line is activated. These signals represent the start of an E cycle, with each wire initiating the execution of one instruction by setting some latch, e.g., an add or multiply latch, or by initiating an appropriate microprogram. Some op-code bit combinations may not be valid instruction codes. The outputs of the decoder are ORed together and fed into a circuit that ultimately interrupts the normal processing of the computer and indicates the invalid condition.

At the beginning of the I cycle, the content of the instruction counter (IC) points to the instruction to be executed next. An increment-counter signal is generated during the I cycle in order to increment the counter, so that the address stored in the IC points to the next sequential instruction in the program.

Instruction- and Execution-Cycle Control

In normal program operation, I and E cycles alternate. The I cycle brings forth an instruction from storage, sets up the ALU for execution, resets the instruction counter for the next I cycle, and initiates an E cycle. A number of conditions serve to interrupt this orderly flow, as follows:

1. When no more work is to be done, the end of a program is reached and the computer goes to a WAIT state. Such a state is reached, for example, by a specific instruction that terminates operations at the end of the I cycle in question, or by a signal from the instruction counter when a predetermined limit is reached.

2. A STOP button may prevent the next setting of the start-I-cycle latch. A START button resets this latch.

3. When starting up after a shutdown, e.g., in the morning, activity is usually initiated by depressing an INITIAL PROGRAM LOAD button on the operator's console (the name varies from system to system, e.g., IPL, LOAD, START). The button usually performs three functions: a reset of all rings, latches, and registers to some predefined initial condition; a read-in operation of a short program from an I/O device into main store (usually the first few locations); and an initiation of an I cycle. Program execution generally starts at a fixed location, so the IC initially is set to this value. In most computers, including PCs, this process is called IMPL (initial microprogrammed program load); it initializes the machine and performs self-testing on memory, I/O ports, and internal status conditions.

4. In multiprogramming, i.e., concurrent operations on more than one program, only one program at a time is in operation in the CPU, but transfers occur from one to another as required by I/O accesses, and so forth. The transfer from one program to another is handled by an interrupt.
Under interrupt, an address is forced into the instruction counter so that on completion of the E cycle of the current program, a new instruction is referenced that starts the interruption routine. This instruction initiates program steps that store the data of the old program, e.g., in special registers or in special main-store locations. The contents of the IC, part of the IR, the contents of any registers, and the reason for the interruption are stored. The collection of these fields together is called a program status word (PSW). In most computers, this information, plus the general registers, must be saved when switching machine states. It can be referenced to reinitiate action at a later time on the program interrupted.

Branch and Jump Instructions

Two kinds of instruction permit a change in program sequence: conditional and unconditional. The purpose of such instructions is to permit the system to make some decision, so as to alter the flow of the program, and to continue execution at some point not in the original sequence. In unconditional branches the original program instruction provides for a branch or jump whenever the particular instruction occurs.


Conditional branches take the extra step of determining whether some specified condition has been satisfied. Either the op code or the operand 1 field normally defines the test and/or the condition to be tested for. If the specified test is satisfied, the branch is executed as described above. Otherwise no action is taken, and the next normally sequenced instruction is executed.
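The alternation of I and E cycles, the IC update, and both kinds of branch can be seen in a toy interpreter. The three-field instruction format, the op names, and the variables below are hypothetical, chosen only to illustrate the control flow described above.

```python
# (op, addr1, addr2) tuples stand in for the op-code and two address
# fields of the instruction register; the op set is invented.

program = {
    0: ("ADD", "X", "Y"),        # X <- X + Y
    1: ("BRANCH_ZERO", "X", 3),  # conditional: go to 3 if the test is satisfied
    2: ("BRANCH", None, 0),      # unconditional branch
    3: ("HALT", None, None),     # enter the WAIT state
}
data = {"X": 2, "Y": -1}

ic = 0                           # instruction counter
while True:
    op, a1, a2 = program[ic]     # I cycle: fetch via the IC, load the IR
    ic += 1                      # IC now points at the next sequential instruction
    if op == "ADD":              # E cycle: the decoder activates one line per op
        data[a1] += data[a2]
    elif op == "BRANCH":         # force a new address into the IC
        ic = a2
    elif op == "BRANCH_ZERO":    # branch only if the tested condition holds
        if data[a1] == 0:
            ic = a2
    elif op == "HALT":
        break

assert data["X"] == 0
```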

Advanced Architectural Features

The basic structure of the digital computer and its operation have been described in the previous paragraphs. This structure has proved to be flexible and adaptable to the solution of many different applications. There is, however, a continuing need to increase the performance of the computer and to make it easier to program so that even more applications can be handled. This has been achieved through a number of different approaches. One has been to develop sophisticated operating systems and to couple them closely to the hardware design of the computer. The net effect is that a programmer viewing the computer does not distinguish between the hardware and the operating system but perceives them as an integrated whole. A second approach has been the development of architectural features that permit overlapped processing operations within the computer. This is in contrast to earlier computers, which were strictly sequential and suffered reduced performance and low throughput because valuable computer resources could remain idle for relatively long periods of time. The newer architectural features include such concepts as data channels, storage-organization enhancements, and pipelining, described briefly in the following paragraphs. Another, totally different, approach to improved computer performance has been the development of computers with non-von Neumann architecture, described in the next section.

An increase in computer performance can be achieved by overlapping input/output (I/O) operations and processor operations. Channels have been introduced in large systems to permit concurrent reading, writing, and computing. The channel is in effect a special processor that acts as a data and control buffer between the computer and its peripheral devices. Figure 18.1.37 shows the organization of the computer when channels are introduced. Each channel can accommodate one or more I/O device control units. The channel is designed with a standard interface that permits a standard set of control, status, and data signals and sequences to be used to control I/O devices. This permits a number of different I/O devices to be attached to each channel by using I/O device control units that also meet the standard interface requirements. Each device control unit is usually designed to function with only one I/O device type, but one control unit may control several devices.

The channel functions independently of the CPU. It has its own program that controls its operations. The CPU controls channel activity by executing I/O instructions. These cause it to send control information to the channel and to initiate its operation. The channel then functions independently, having been given a channel program to execute. This program contains the commands to be executed by the channel as well as the addresses of storage locations to be used in the transfer of data between main storage and the I/O devices. The channel in turn issues orders to the device control unit, which in turn controls the selected I/O device.

FIGURE 18.1.37 Computer organization with channels, separate logical processors that permit simultaneous input, output, and processing.


When the I/O operation is completed, the channel interrupts the CPU by sending it a signal indicating that the channel is again free to perform further I/O operations. Several channels can be attached to the CPU and can operate concurrently. In PCs, I/O operations are controlled by adapter cards that fit into the motherboard.
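The division of labor between the CPU and the channel can be sketched as follows. The command format and names are invented for illustration, and the asynchronous operation and I/O interrupt of a real channel are only simulated here by a return value.

```python
# Toy model of a channel executing its own command list. A real channel
# would run concurrently with the CPU and raise an I/O interrupt.

main_store = [0] * 512

def run_channel(channel_program, device_data):
    """Execute channel commands independently of the CPU model."""
    for cmd, ms_addr, count in channel_program:
        if cmd == "READ":                      # data moves without CPU involvement
            main_store[ms_addr:ms_addr + count] = device_data[:count]
    return "IO_COMPLETE"                       # interrupt signal back to the CPU

# CPU side: an I/O instruction hands the channel a program, then computes on.
status = run_channel([("READ", 100, 4)], device_data=[7, 8, 9, 10])
assert status == "IO_COMPLETE" and main_store[100:104] == [7, 8, 9, 10]
```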

Cache Storage Cache storage was introduced to achieve a significant increase in the performance of the CPU at only a modest increase in cost. It is, in effect, a very high-speed storage unit that is added to the computer but is designed to operate in a unique way with the main storage unit. It is transparent to the program at the instruction level and can thus be added to a computer design without changing the instruction set or requiring modification to existing programs. Cache storage was first introduced commercially on the IBM System/360 model 85 in 1968.
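The behavior can be sketched with a toy direct-mapped model; the mapping policy and the sizes below are illustrative assumptions, not the organization of the Model 85 or any other machine. On a miss the whole containing block is brought in from main store, so subsequent references to nearby words hit at cache speed.

```python
# Minimal direct-mapped cache model: address -> block -> (line, tag).
# BLOCK and NLINES are arbitrary illustrative choices.

BLOCK = 16                                  # words per block
NLINES = 8                                  # cache lines

cache = {}                                  # line index -> (tag, block data)

def read(addr, main_store):
    block_no = addr // BLOCK
    line, tag = block_no % NLINES, block_no // NLINES
    if cache.get(line, (None,))[0] != tag:              # miss: page in the block
        base = block_no * BLOCK
        cache[line] = (tag, main_store[base:base + BLOCK])
    return cache[line][1][addr % BLOCK]                 # hit path: one fast access

store = list(range(1024))
assert read(37, store) == 37       # miss fills the block...
assert read(38, store) == 38       # ...so the neighboring word hits
```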

Virtual Storage

Properly using and managing the memory resources available in the computer has been a continuing problem. The programmer never seems to have enough high-speed main storage and has been forced to use fairly elaborate procedures, such as overlays, to make programs fit into main storage and run efficiently. Virtual-storage systems were introduced to permit the programmer to think of memory as one uniform single-level storage unit but to provide a dynamic address-translation unit that automatically moves program blocks or pages between auxiliary storage and high-speed storage on demand.

Pipelining

A further improvement in computer performance was achieved through the use of pipelining. This technique consists of decomposing repetitive processes within the computer into subprocesses that can be executed concurrently and in an overlapped fashion. Instruction execution in the control unit of the CPU lends itself to pipelining. As discussed earlier, instruction execution consists of several steps that can be executed relatively independently of each other: instruction fetch, decoding, fetching the operands, and then execution of the instruction. Separate units can be designed to perform each one of these steps. As each unit finishes its activity on an instruction, it passes it on to the next succeeding unit and begins to work on the next instruction in succession. Even though each instruction takes as long to execute overall as it does in a conventional design, the net effect of pipelining is to increase the overall performance of the computer. For example, under optimal conditions, once the pipeline is full, when one instruction finishes, the next instruction is only one unit behind it in the pipeline. In this four-unit example, the net effect would be to increase the speed of instruction execution by a factor of 4. This approach can be carried to the point where 20 or 30 instructions are at various stages of execution at one time. This type of processing is called pipeline processing.

No difficulties arise during uninterrupted processing. When an interrupt does occur, however, it is difficult to determine which instruction caused the interrupt, since the interrupt may arise in a subsystem some time after the IC has initiated action. In the meantime, the IC may have started a number of subtasks elsewhere by stepping through subsequent cycles. Operands within subunits may not be saved in an arbitrary intermediate state, since information is in the process of being generated for return to the main program. Because of the requirement that no further I cycles be started, the interrupt is signaled when the pipeline is empty. At the time the interrupt is signaled, the IC does not point at the instruction causing the interrupt but somewhat past it. This type of interrupt is called imprecise.
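The overlap arithmetic can be made concrete with a two-line timing model: with k single-cycle stages, n instructions need k + (n - 1) cycles once started, versus nk cycles executed strictly sequentially.

```python
# Back-of-envelope pipeline timing for k overlapped one-cycle stages.

def pipeline_cycles(n, k):   return k + (n - 1)   # pipe fills once, then 1/cycle
def sequential_cycles(n, k): return n * k         # no overlap at all

n, k = 1000, 4               # the four-unit example from the text
speedup = sequential_cycles(n, k) / pipeline_cycles(n, k)
print(f"speedup ~ {speedup:.2f}")   # approaches the factor of 4 for large n
```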

Advanced Organizations In addition to the von Neumann computer and its enhancements, a number of other computer organizations have been developed to provide alternative approaches to satisfying the needs for improved computational performance, increased throughput, and improved system reliability and availability. In addition, certain unique architectures have been proposed to solve specific problems or classes of problems.


TABLE 18.1.3 Classification Scheme for Computer Architectures

Acronym   Meaning                                              Instruction streams   Data streams   Examples
SISD      Single instruction stream, single data stream        1                     1              IBM 370, DEC VAX, Macintosh
SIMD      Single instruction stream, multiple data stream      1                     >1             ILLIAC IV, Connection Machine, NASA's MPP
MISD      Multiple instruction stream, single data stream      >1                    1              Not used
MIMD      Multiple instruction stream, multiple data stream    >1                    >1             Cray X/MP, Cedar, Butterfly

Computer organizations can be categorized according to the number of procedure (instruction) streams and the number of data streams processed: (1) a single-instruction-stream-single-data-stream (SISD) organization, which is the conventional computer; (2) a multiple-instruction-stream-multiple-data-stream (MIMD) organization, which includes multiprocessor or multicomputer systems; (3) a single-instruction-stream-multiple-data-stream (SIMD) organization, which uses a single control unit that executes a single instruction at a time, but the operation is applied across a number of processing units, each of which acts in a synchronous, concurrent fashion on its own data set (parallel and associative processors fall into this category); and (4) a multiple-instruction-stream-single-data-stream (MISD) organization (pipeline processors fall into this category). Table 18.1.3 summarizes the categories.

One way to achieve an improvement in performance and to improve reliability at the same time is to use multiprocessors. The American National Standard Vocabulary for Information Processing defines a multiprocessor as "a computer employing two or more processing units under integrated control." Enslow61a amplifies this definition by pointing out that a multiprocessor contains two or more processors of approximately comparable capabilities. Furthermore, all processors share access to common storage and to I/O channels, control units, and devices. Finally, the entire system is controlled by one operating system that provides interaction between processors and their programs. A number of different multiprocessor system organizations have been developed: (1) time-shared, or common-bus, systems that use a single communication path to connect all functional units; (2) crossbar-switch systems that use a crossbar switching matrix to interconnect the various system elements; (3) multiport storage systems, in which the switching and control logic is concentrated at the interface to the memory units; and (4) networks of computers interconnected via high-speed buses.

A number of large problems require high throughput rates on structured data, e.g., weather forecasting, nuclear-reactor calculations, pattern recognition, and ballistic-missile defense. Problems like these require high computation rates and may not be solved cost-effectively using a general-purpose (SISD) computer. Parallel processors, which are of the SIMD organization type, were designed to address problems of this nature. A parallel processor consists of a series of processing elements (cells), each having data memories and operand registers. The cells are interconnected. A central control unit accesses a program, interprets each program step, and broadcasts the same instruction to all the processing elements simultaneously.

Distributed Processing One of the newest concepts in computer organizations is that of distributed processing. The term distributed processing has been loosely applied to any computer system that has any degree of decentralization. The consensus seems to be that a distributed processing system consists of a number of processing elements (not necessarily identical), which are interconnected but which operate with a distributed, i.e., decentralized, control of all resources. With the advent of the less expensive micro- and miniprocessors, distributed processing is receiving much attention since it offers the potential for organizing these processors so that they can handle problems that would otherwise require more expensive supercomputers. Through resource sharing and decentralized


control, distributed processing also provides for reliability and extensibility, since processors can be removed from or added to the system without disrupting system operations. The types of distributed processing can be described by configurations in terms of the processing elements, paths, and switching elements: (1) loop, (2) complete interconnection, (3) central memory, (4) global bus, (5) star, (6) loop with central switch, (7) bus with central switch, (8) regular network, (9) irregular network, and (10) bus window. Distributed systems have been designed that fall into each of these categories. Hybrid forms use combinations of two or more of these architectural types. Distributed and parallel processing systems can be thought of as being loosely coupled (each has its own CPU and memory) or tightly coupled (each has its own CPU but they share the same memory). Master–slave and client–server systems are forms of distributed and cooperative processing. In master–slave operation, one computer controls another, whereas in client–server operation, the requesting computer is the client and the requested resource is the server.

Stack Computers

In the CPUs discussed thus far, the instructions store all results, so that the next time an operand is used it has to be fetched. For example, a program to add A, B, and C and put the result into E appears as

MOVE A to E                 1 fetch, 1 store
ADD E to B, store in E      2 fetches, 1 store
ADD E to C, store in E      2 fetches, 1 store

In languages such as PL/I or FORTRAN this program might be written as the single statement

E = A + B + C

This statement describes a sequence of actions, as in the case of the program, but the specific sequence is not described. Since addition is commutative and associative, a correct result is achieved by E = [(A + B) + C], E = [A + (B + C)], or E = [(A + C) + B], the steps occurring in any order. The computer, however, uses a specific program in achieving a result, so that the method of writing the equation must generate a specific sequence of actions. A method of writing an equation that specifies the order of operation is called Polish notation. For the above example of addition, one possible Polish string would be

AB + C + E =

In this string, the system would find A and B and, as directed by the plus sign following the two operands, add them. The result is then combined with C under the addition called for by the second plus sign. The E = symbol indicates that the result is to be stored in E. The plus sign appears after the A and B, and the specific string shown is called postfixed. An equivalent convention could place the operator first and would be called prefixed. Any complex expression can be translated into a Polish string. For example, in the PL/I language, the statement

M = (A + B)*(C + D*E) – F;

means evaluate the right-hand side of the equation and store the result in the main-store location corresponding to variable M (asterisks indicate multiplication). The Polish string translation for this statement is

AB + DE*C + *F – M =

In translating the types of expression permitted by higher-level languages, a machine can be programmed to analyze an arithmetic expression successively. In so doing, the outermost expressed or implied parentheses are aggregated first and successively broken down until no more quantities remain. The first such quantities analyzed are generally the last computed, so that in the development of a Polish string from an algebraic expression, a first-in, last-out situation prevails.
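Translation from infix to a postfix Polish string is commonly done with an operator stack (the shunting-yard method); the sketch below is one such translator, assuming single-letter operands. Its output for the PL/I example is an equivalent ordering of the string derived in the text (the operands of the commutative + appear in scan order).

```python
# An operator stack holds pending operators; parentheses bound how far
# it is unwound. PREC gives operator precedence.

PREC = {"+": 1, "-": 1, "*": 2}

def to_postfix(tokens):
    out, ops = [], []                       # ops is the push-down stack
    for t in tokens:
        if t.isalnum():                     # operands pass straight to the output
            out.append(t)
        elif t == "(":
            ops.append(t)
        elif t == ")":                      # unwind back to the matching parenthesis
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()
        else:                               # pop operators of equal or higher precedence
            while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[t]:
                out.append(ops.pop())
            ops.append(t)
    return out + ops[::-1]                  # flush remaining operators

# (A + B)*(C + D*E) - F  ->  A B + C D E * + * F -
# (an ordering equivalent to the text's AB + DE*C + *F -)
print(" ".join(to_postfix(list("(A+B)*(C+D*E)-F"))))
```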


Stacks

Evaluation of a Polish string in a machine is best performed using a stack (push-down list). A stack has the property that it holds numbers in order. A PUSH command places a value on the stack; i.e., it stores a number at the top of the stack and, in the process, lowers all previous items by one position. Numbers are retrieved from the stack by issuing a POP command. The number returned by the stack on a POP command is the most recently PUSHed one. The following example illustrates the behavior of a stack. The value in parentheses is the value placed on the stack for PUSH and returned by the stack for POP (assume the stack is initially empty):

PUSH (A)      stack contains A
PUSH (B)      stack contains B A
POP (B)       stack contains A
PUSH (C)      stack contains C A
POP (C)       stack contains A
POP (A)       stack contains nothing

Such a stack lends itself very well to the evaluation of Polish strings. The rules for evaluation are:

1. Scan the string from left to right.
2. If a variable (or constant) is encountered, fetch it from main store and place its value on the stack.
3. If an operator is encountered, POP the operands and PUSH the result.
4. Stop at the end of the string. If executed correctly, the stack is in the same state at the end of execution as it was at the start.
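The four rules translate almost line for line into code. In this sketch the "main store" is a dictionary of variable values, an assumption made for illustration.

```python
# Postfix evaluation with a push-down stack, following the four rules.

def eval_postfix(tokens, main_store):
    stack = []
    for t in tokens:                       # rule 1: scan left to right
        if t in "+-*":                     # rule 3: POP operands, PUSH result
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b}[t])
        else:                              # rule 2: fetch operand, PUSH its value
            stack.append(main_store[t])
    return stack.pop()                     # rule 4: single result; stack ends empty

store = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}
# A B + D E * C + * F -  =  (A + B)*(C + D*E) - F  =  63
assert eval_postfix("A B + D E * C + * F -".split(), store) == 63
```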

The advantage of using a stack is that intermediate results never need storing and therefore no intermediate variables are needed. In sequences of instructions where there are no branches, the operations can be stored in a stack. A program becomes a series of such stacks put together between branches. This approach is called stack processing. In stack processing, a program consists of many Polish strings that are executed one after another. In some cases the entire program may be considered to be one long string.

Stacks are implemented by using a series of parallel shift registers, one per bit of the character code. The input is placed into the leftmost set of register positions. PUSH moves an entry to the right, and POP moves it to the left. The length of the shift registers is finite and fixed. The stack, however, usually must appear to the user as though it were infinitely deep. The stack is thus implemented so that the most active locations are in the shift registers, and if the shift registers overflow, the number shifted out at the right on a PUSH is placed in main storage. There the order is maintained by hardware, microprogramming, or a normal system program.

Trends in Computer Organization

Several trends in computer architecture and organization may have a significant impact on future computer systems. The first of these is the data-flow computer, which is data-driven rather than control-driven. In the data-flow computer, an instruction is ready for execution when its operands have arrived; there is no concept of control flow, and there is no program counter. A data-flow program can feature concurrent processing since many instructions can be ready for execution at the same time.

In another area, capability systems are receiving increased attention because their inherent protection facilities make them ideal for implementing secure operating systems. A capability is a protected token (key) authorizing the use of the object named in the token. One approach to implementing capabilities is through a tagged architecture, where tag bits are added to each word in storage and to each register. This tag specifies whether or not the contents represent a capability.

Special-Purpose Processors

Certain classes of problems require unique processing capabilities not found in general-purpose computers. Special-purpose processors have been designed to solve these problems. In some cases they have been designed


as stand-alone processors, but often they are designed to be attached to a general-purpose computer that acts as the host. One such class of special-purpose processors is the associative processor. It uses an associative store. Unlike the storage units described earlier, which require explicit addresses, an associative store retrieves data from memory locations based upon their content. The associative store does its searching in parallel over its entire storage in approximately the same time as is required to access a single word in a conventional storage unit.

In digital signal processing (DSP), many repetitive mathematical operations, e.g., fast Fourier transforms, must be performed, requiring large numbers of multiplications and summations. DSP algorithms are used extensively in graphics processing and modem transmission. The array processor has been designed for these types of operations. It has a high-speed arithmetic processor and its own control unit and can operate on a number of operands in parallel. It attaches to a host CPU, from which it receives its initiation commands and data and to which it returns the finished results of its computation.

The hybrid processor uses a host digital CPU to which is attached an analog computer. These systems operate in a digital-analog mode and provide the advantages of both methods of computation. Hybrid processors are being replaced by very high-speed digital processors.
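The parallel compare performed by an associative store can be sketched as a masked match over all words. The word width, contents, and mask convention below are illustrative assumptions; the "simultaneous" hardware compare is modeled here by a single pass over the words.

```python
# Toy content-addressable search: flag every word matching the search
# argument in the bit positions selected by the mask.

def associative_match(words, pattern, mask):
    """Return indices of words matching `pattern` under `mask`."""
    return [i for i, w in enumerate(words) if (w ^ pattern) & mask == 0]

words = [0b1010_1100, 0b1010_0011, 0b0110_1100]
# search on the high nibble only; flagged words can then be addressed
# on subsequent memory cycles
print(associative_match(words, pattern=0b1010_0000, mask=0b1111_0000))  # [0, 1]
```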

HARDWARE DESIGN Design and Packaging of Computer Systems Even a small computer may have as many as 3 million integrated circuits on a single chip, whereas a large system may have up to 100 million by the year 2000. Nanosecond circuit speeds are common in high-performance computer systems. Since light travels approximately 1/3 m in 1 ns, system configurations must be kept small to take advantage of the speed potential of available circuits. Thus the emphasis is on the use of microcircuit fabrication and packaging techniques. The layout of the system must minimize the length and complexity of these interconnections and must be realized, without error, from detailed manufacturing instructions. To permit these requirements to be met a basic circuit package must be available. The upper limit to the size of such a basic unit, e.g., an integrated circuit, is set by the number of crystal defects per unit area of silicon. If these defects are distributed at random, the selection of too large a chip size results in some inoperative circuits on a majority of chips. There is thus an economic balance between the number of circuits that can be fabricated in an integrated circuit and the yield of the manufacturing process. Another limit on the size of the basic package is set by the number of interconnections between the integrated circuits. Reduced VLSI-chip power requirements and shrinking device dimensions have increased the number of circuits on a chip.


CHAPTER 18.2

COMPUTER STORAGE

BASIC CONCEPTS

The main memory attached to a processor represents the most crucial resource of the computing system. In most instances, the storage system determines the bulk of any general-purpose computer architecture. Once the word size and instruction set have been specified, the rest of computer architecture and design deals mainly with optimization of the attached memory and storage hierarchy.

Early computers were designed by first choosing the main-memory hardware. This not only specified much of the remaining architecture, e.g., serial or parallel machine, but also dictated the processor cycle time, which was chosen equal to the memory cycle time. As computer technology evolved, the logic circuits and hence the processor cycle time (roughly 10 logic-gate delays with appropriate circuit and package loads) improved dramatically. This improved speed demanded more main-memory capacity to keep the processor busy, but the need for increasingly larger capacity at higher speed placed a difficult and usually impossible demand on memory technology. In the early 1960s, a gap appeared between the main-memory and processor cycle times, and the gap grew with time. Fundamentally, it is desirable that the main-memory cycle time be approximately equal to the processor cycle time, so this gap could not continually widen without serious consequences. In the late 1960s, this gap became intolerable and was bridged by the "cache" concept, introduced by IBM in the System/360 Model 85. The cache concept proved to be so useful and important that by the late 1970s it had become quite common in small, medium, and large machine architectures. The cache is a relatively small high-speed random-access memory that is paged out of main memory and holds the most recently and frequently used instructions and data. The same fundamental concepts that provide the basis for cache design apply equally well to "virtual memory" systems; only the methods of implementation are different.

In terms of implementation methods, there are five types of storage systems used in computers.

1. Random-access memory is one for which any location (word, bit, byte, record) of relatively small size has a unique, physically wired-in addressing mechanism and is retrieved in one memory-cycle time interval. The time to retrieve from any given location is made to be the same for all locations.

2. Direct-access storage is a system for which any location (word, record, and so on) is not physically wired in and addressing is accomplished by a combination of direct access to reach a general vicinity plus sequential searching, counting, or waiting to find the final location. Access time depends on the physical location of the record at any given time; thus access time can vary considerably, from record to record, and for a given record accessed at different times. Since addressing is not wired in, the storage medium must contain a certain amount of information to assist in the location of the desired data. This is referred to as stored addressing information.

3. Sequential-access storage designates a system for which the stored words or records do not have a unique address and are stored and retrieved entirely sequentially. Stored addressing information in the form of



simple interrecord gaps is used to separate records and assist in retrieval. Access time varies with the record being accessed, as with direct access, but sequential accessing may require a search of every record in the storage medium before the correct one is located.

4. Associative (content-addressable) memory is a random-access type of memory that, in addition to having a conventional wired-in addressing mechanism, also has wired-in logic that makes possible a comparison of desired bit locations for a specified match for all words simultaneously during one memory-cycle time. Thus, the specific address of a desired word need not be known, since a portion of its contents can be used to access the word. All words that match the specified bit locations are flagged and can then be addressed on subsequent memory cycles.

5. Read-only memory (ROM) is a memory that has information permanently stored during the manufacturing process; it can only be read and is never destroyed. There are several variations of ROM. Postable or programmable ROM (PROM) is one for which the stored information need not be written in during the manufacturing process but can be written at any time, even while the system is in use; i.e., it can be posted at any time. However, once written, the medium cannot be erased and rewritten. Another variation is a fast-read, slow-write memory, for which writing is an order of magnitude slower than reading. In one such case, the writing is done much as in random-access memory but very slowly, to permit use of low-cost devices. Several types of ROM chips are available. The erasable programmable ROM (EPROM) allows the use of ultraviolet (UV) light to erase its contents. This UV-EPROM can take up to 20 min to erase; a ROM burner may then be used to implant new data. An electrically erasable programmable ROM (EEPROM) allows the use of electrical energy to clear its memory. The flash EPROM is a high-speed erasable EPROM.

The various computer-storage types are related to each other through access time. An approximate rule of thumb for access-time comparisons is as follows (where T = access time):

Tc = 10^-1 Tm = 10^-6 Td = 10^-9 Tt

where Tc = cache access time, Tm = main-memory access time, Td = disc access time, and Tt = tape access time.
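Anchoring the rule of thumb to a concrete (assumed) cache time of 10 ns gives the familiar orders of magnitude:

```python
# The access-time ratios made numeric. Tc = 10 ns is an assumed round
# number; the other levels follow from the rule of thumb above.

Tc = 10e-9                 # cache: 10 ns (assumption)
Tm = Tc / 1e-1             # main memory: 100 ns
Td = Tc / 1e-6             # disc: 10 ms
Tt = Tc / 1e-9             # tape: 10 s
print(Tm, Td, Tt)          # 1e-07 0.01 10.0
```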

STORAGE-SYSTEM PARAMETERS

In any storage system the most important parameters are the capacity of a given module, the access time to any piece of stored information, the data rate at which the stored information can be read out (once found), the cycle time (how frequently the system can be accessed for new information), and the cost to implement all these functions. Capacity is simply the maximum number of bits (b), bytes (B), or words that can be assembled in one basic self-contained operating module.

Access time can vary depending on the type of storage. For random-access memory the access time is the time from the instant a request appears in an address register until the desired information appears in an output register, where it can subsequently be further processed. For nonrandom-access storage, the access time is the time from the instant an instruction is decoded asking for information until the desired information is found but not read. Thus, access time is a different quantity for random and nonrandom-access storage. In fact, it is the access time that distinguishes the two, as is evident from the definitions above. Access time is made constant in random-access memory, whereas in nonrandom-access storage access time may vary substantially, depending on the location of the information being sought and the current position of the storage system relative to that information.

Data rate is the rate (usually bits per second, bytes per second, or words per second) at which data can be read out of a storage device. Data transfer time for reading or writing equals the quantity of information being transferred divided by the data rate; for example, transferring 8,000 bytes at 1 Mbyte/s takes 8 ms. Data rate is usually associated with nonrandom-access storage, where large pieces of information are stored and read serially. Since an entire word is read out of random-access memory in parallel, data rate has no significance for such memories. Cycle time is the rate at which a memory can be accessed, i.e., the number of accesses per unit time, and is applicable primarily to random-access storage.


FUNDAMENTAL SYSTEM REQUIREMENTS FOR STORAGE AND RETRIEVAL

In order to be able to store and subsequently find and retrieve information, a storage system must meet the following four basic requirements:

1. A medium for storing energy
2. An energy source for writing the information, i.e., write transducers on word and bit lines
3. Energy sources and sensors to read, i.e., read and sense transducers
4. Information-addressing capability, i.e., an address-selection mechanism for reading and writing

The fourth requirement implicitly includes some coincidence mechanism within the system to bring the necessary energy to the proper position on the medium for writing, and a coincidence mechanism for associating the sensed information with the proper location during reading. In random-access memory, it is provided by the coincidence of electric pulses within the storage cell, whereas in nonrandom-access storage, it is commonly provided by the coincidence of an electric signal with mechanical position. In many cases, the write energy source serves as the read energy source as well, thus leaving only sense transducers for the third requirement. Nevertheless, a read energy source is still a basic requirement. The differences between storage systems lie only in how these four requirements are implemented and, more specifically, in the number of transducers required to achieve these necessary functions. Here a transducer denotes any device (such as a magnetic head, laser, or transistor circuit) that generates the necessary energies for reading and writing, senses stored energy and generates a sense signal, or provides the decoding for address selection.

Since random-access memory is so crucial to processor architecture, we discuss this type first and continue in the order defined above. The organization of random-access memory systems is intimately dependent on the number of functional terminals inherent in the memory cells. The cell and overall array organization are so interwoven that a discussion of one is difficult without the other. We discuss first the fundamental building block, the memory cell, and subsequently build the array structure from this.

RANDOM-ACCESS MEMORY CELLS

To be useful in random-access memory, the cell must have at least two independent functional terminals, consisting of a word line and a common bit/sense line, as shown in Fig. 18.2.1a. For writing into the cell, the coincidence of a pulse on the word-select line with the desired data on the bit-select line places the cell into one of two stable binary states. For reading, only a pulse on the word-select line is applied, and a sense signal indicating the binary state of the cell is obtained on the sense line.

A more versatile but more complex cell is one having three functional terminals, as in Fig. 18.2.1b. A coincidence of pulses on both the x and y lines is necessary to select the cell for both reading and writing. The appearance of data on the bit/sense line puts the cell in the proper binary state. If no data are applied, a sense signal indicating the binary state of the cell is obtained. The use of coincidence for reading as well as writing allows this cell to be used in complex array organizations.

Magnetic ferrite cores, which played a key role in the early history of computers, were first introduced as three-terminal cells. In fact, the first ferrite-core cells used four wires through each core (separate bit and sense lines) to achieve a three-functional-terminal cell as in Fig. 18.2.1b. Refinements gradually allowed the bit and sense lines to be physically combined to give a three-wire cell and eventually even a two-wire, two-terminal cell. The introduction of integrated-circuit cells in the early 1970s for random-access memory led to the disuse of ferrite cores.

There are two general types of integrated-circuit cells, static and dynamic. Static cells remain in the state in which they are set until they are reset or the power is removed, and they are read nondestructively; i.e., the information is not lost when read. A dynamic cell gradually loses its stored information and must be refreshed periodically. In addition, the cell state is destroyed when read, and the data must be rewritten after each read cycle. However, dynamic cells can be made much smaller than static cells, which gives a greater density of bits per semiconductor chip. The resulting lower cost more than compensates for the additional complexity.


FIGURE 18.2.1 Reading and writing operations for random-access memory cells having two and three functional terminals: (a) cell with two functional terminals, and (b) cell with three functional terminals.

STATIC CELLS All static cells use the basic principles of the Eccles-Jordan flip-flop circuit to store binary information in a cross-coupled transistor circuit like that shown in Fig. 18.2.2. Either junction (bipolar) or field-effect transistors (FET) can be used in the same configuration; only the voltages, currents, and load resistors will be different. To store a 0, transistor T0 is turned on and T1 turned off. A stored 1 is just the opposite condition, T0 off and T1 on. To achieve these states, it is necessary to control the node voltages at A and B. For instance, if node A voltage VA is made sufficiently low, the base of T1 will be low enough to turn T1 off, causing node B to rise in voltage. This turns the base and hence the collector of T0 on and holds node A at a low voltage so that a 0 is stored in the flip-flop. If the voltage at node B is made sufficiently low, the opposite state occurs, giving a stored 1. For reading, it is only necessary to sample the voltages at nodes A or B to see which is high or low. This gives a nondestructive read cell since the state is not changed, only sampled. Hence, nodes A and B are the access ports to the cell for both reading and writing. The basic difference between all static cells is in how nodes A and B are accessed. Note that although the flip-flop of Fig. 18.2.2 has two access nodes A and B, it is functionally only a one-terminal cell since nodes

FIGURE 18.2.2 Transistor flip-flop circuits: (a) stored 0, T0 conducts, VA = 0, VB = 1 V; (b) stored 1, T1 conducts, VA = 1 V, VB = 0.


FIGURE 18.2.3 MOSFET two-terminal storage cell.

To make a cell suitable for a random-access array, at least one more functional terminal must be added. This can be achieved by the addition of another FET at each node, A and B, as shown in Fig. 18.2.3. Although the two transistors provide a total of four physical connections to the cell (T2 provides one gate g2 and one drain d2 for node A, and T3 provides g3 and d3 for node B), the circuit operates in a symmetrical, balanced mode. Since these four terminals are not independent, only two functional terminals are present. The operation of the cell can be understood by tracing through the pulsing sequence of Fig. 18.2.3. The peripheral circuits used to obtain these pulse sequences are not shown.

An equivalent two-terminal static cell can be achieved by using junction transistors with some changes, but the operating principles remain basically the same. A two-terminal static cell that was very popular in early integrated-circuit memories used multiemitter transistors. The cell required high power, however, a condition that greatly limits chip density. Higher density and lower power can be obtained with the Schottky diode flip-flop cell of Fig. 18.2.4, a very popular cell for high-speed, low-power junction-transistor memories. Resistors Rc are part of the internal collector contact resistance of the devices and are much smaller than RL. The pulses on the word and bit/sense lines are used to forward- or back-bias the diodes D0 and D1, thus controlling or sensing the voltages at nodes A and B, as can be seen by tracing through the pulse sequence shown.


FIGURE 18.2.4 Schottky diode storage cell.

DYNAMIC CELLS

Since the static cells described above operate in a balanced, differential mode, it would seem reasonable that a nondifferential mode would be able to achieve at least a twofold reduction in component count per cell. This is indeed the case, and by sacrificing other properties of static cells, such as their inherent nondestructive-read capability, a very significant reduction in cell size is possible.

The most widely used such cell is the one-FET dynamic cell, shown in a nonintegrated form in Fig. 18.2.5. The essential principle is simply the storing of charge on capacitor Cs for a 1 and no charge for a 0. A single capacitor by itself is sufficient to accomplish this, but an array of such devices requires a means of selecting the desired capacitors for reading and writing and of isolating nonselected capacitors from the array lines. If isolation is not provided, the charge stored on half-selected capacitors may inadvertently be removed, making the scheme inoperable. The isolation and selection means is provided by the simple one-FET device in series with the capacitor, as shown.

For operation in an array, the terminal c can be either at ground or +Vc, depending on the technology and details of the cell. Assume that c is at +Vc, as indicated. To write either state, the word line is pulsed high to turn the FET on. If the bit/sense line is at ground, Vc will charge Cs to a stored 1 state. However, if the bit/sense line is at +Vc, any charge on Cs will be removed (or no charge is stored if none was there originally), giving a stored 0. For reading, the word line is pulsed high to turn the FET on with the sense line at a normally high voltage.


FIGURE 18.2.5 One-FET-device dynamic storage cell: (a) general equivalent circuit; (b) pulsing sequence.

If there was charge on the capacitor, it discharges through the bit/sense line to give a signal as shown; if there was no stored charge, no signal is obtained. Note that the reading is destructive, since a stored 1 is discharged to a 0, and it requires regeneration after each read operation in an array. Note also that the FET must carry current in both directions; i.e., the current charging Cs during writing is in the opposite direction of the current during reading.

This cell has one further disadvantage: in an integrated structure, the charge on Cs will unavoidably leak off in a time typically measured in milliseconds. Hence the cell requires periodic refreshing, which necessitates additional peripheral circuits and complications. This feature gives rise to the term dynamic cell. Despite these disadvantages, this technique allows a very substantial improvement in cell density and cost at very adequate cycle times, and it has become very popular.
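The whole cycle (write, leakage, destructive read, regeneration) can be sketched behaviorally; the leakage time constant and sense threshold below are arbitrary illustrative values:

    import math

    class DynamicCell:
        """Behavioral model of the one-FET cell of Fig. 18.2.5 (charge on Cs = 1)."""
        def __init__(self):
            self.charge = 0.0                     # normalized charge on Cs

        def write(self, bit):
            self.charge = 1.0 if bit else 0.0     # word line high; bit line sets Cs

        def leak(self, t_ms, tau_ms=50.0):
            self.charge *= math.exp(-t_ms / tau_ms)  # stored charge decays away

        def read(self):
            bit = 1 if self.charge > 0.5 else 0   # sense the discharge signal
            self.charge = 0.0                     # destructive read: Cs is emptied
            return bit

    cell = DynamicCell()
    cell.write(1)
    cell.leak(t_ms=10)        # refreshed soon enough, so the 1 survives
    bit = cell.read()         # reading destroys the stored charge ...
    cell.write(bit)           # ... so the data must be regenerated
    assert bit == 1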

RANDOM-ACCESS MEMORY ORGANIZATION

A memory is organized into bytes or groups of bytes known as words, and each word is assigned an address, or location, in memory. Each word has the same number of bits, called the word length, and each access usually retrieves an entire word. The addresses in memory run consecutively, from address 0 to the largest address.


In any memory system, if E units are to be selected, an address length of N bits is required which satisfies

2^N = E

In most cases of interest, E is equal to W, the number of logical words in the system.
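A one-line check of this relation (rounding up when E is not an exact power of 2):

    import math

    def address_bits(E):
        """Smallest N with 2**N >= E, the address length needed to select E units."""
        return math.ceil(math.log2(E))

    assert address_bits(65536) == 16      # a 64K-word memory needs a 16-bit address
    assert address_bits(100000) == 17     # E need not be an exact power of 2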

DIGITAL MAGNETIC RECORDING

Magnetic recording is attractive for data processing since it is inexpensive, easily transportable, unaffected by normal environments, and reusable many times with no processing or developing steps. The essential parts of a simplified but complete magnetic recording system are shown in Fig. 18.2.6. They consist of a controller to perform all the logic functions as well as write-current generation and signal detection; serial-to-parallel conversion registers; a read-write head with an air gap to provide a magnetic field for writing and to sense the flux during reading; and finally the medium. The wired-in cells, array, and transducers of random-access memory have been replaced by one read-write transducer, which is shared by all stored bits, and a shared controller. Coincident selection is still required for reading and writing; it is obtained by the coincidence of electric signals in the read-write head with physical positioning of the medium under the head.

FIGURE 18.2.6 Schematic of simplified complete magnetic recording system.


The recording medium is very similar in principle to the ferrite material used in cores. The common material for digital magnetic recording is ferric oxide, Fe2O3. It has remained essentially unchanged for many years except for reductions in particle size, smoother surfaces, and thinner, more uniform coatings, all necessary for high density. This material remained the sole medium for discs and tapes until the late 1970s, when NiCo (nickel cobalt) was introduced. These newer materials have a higher coercive force and can be deposited thinner and smoother, allowing higher recording densities.

Operation of these media requires a reasonably rectangular magnetic hysteresis loop with two stable residual states, +Br and -Br, for storing binary information. The media must be capable of being switched a virtually unlimited number of times by a magnetic field, produced by the write head, that exceeds the coercive force. Stored information is sensed by moving the magnetized bits at constant velocity under the read head to provide a time-changing flux and hence an induced sense signal.

The essence of magnetic recording consists of being able to write very small binary bits, to place these bits as close together as possible, to obtain an unambiguous read-back voltage from these bits, and to convert this continuously varying voltage into discrete binary signals. The writing is done by the trailing edge of the write field. The minimum size of one stored bit is determined by the minimum transition length required within the medium to change from +Br to -Br without self-demagnetizing; the smaller the transition length, the larger the self-demagnetizing field. The minimum spacing at which adjacent bits can be placed with respect to a given bit is governed mainly by the distortion of the sense signal when adjacent bits are too close, referred to as bit crowding. This results from the overlapping of the fringe fields of adjacent bits: the total, overlapped magnetic field is picked up in the read head as an induced signal different from that produced by a single transition.

Conversion of the analog read-back signal to digital form requires accurate clocking; this means that clocking information must be built into the coded information, particularly at higher densities. Neglecting clocking and analog-to-digital conversion problems for the moment, the signals obtained during a read cycle are just a continuous series of 1s and 0s. A precise means of identifying the exact beginning and end of the desired string of data is necessary, and some means of identifying specific parts within the data string is often desirable. Since the only way to recognize particular pieces of stored information is through the sequence of pulse patterns, special sequences of patterns such as gaps, address markers, and numerous other coded patterns are inserted into the data. These can be recognized by the logic hardware built into the controller, which is a special-purpose computer attached to the storage unit. These special recorded patterns, along with other types of aids, are referred to as the stored addressing information and constitute at least a part of the addressing mechanism.

Coding schemes are chosen primarily to increase the linear bit density. The particular coding scheme used determines the frequency content of the write currents and read-back signals.
Different codes place different requirements on the mode of operation and frequency response of various parts of the system, such as the clocking technique, timing accuracy, head time constant, medium response, and others. Each of these can influence the recording density in different ways, but in the overall design the trade-offs are made in the direction of higher density. Thus special coding schemes are not fundamentally necessary but only meet practical needs.

For instance, it is possible to store bits by magnetizing the medium over a given region where, say, +Mr (magnetization) is a stored 1 and -Mr is a stored 0, as in Fig. 18.2.7a. The transition region between 1s and 0s is assumed to have M = 0 except for the small regions of north and south poles on the edges, as shown. As the medium is moved past the read head, a signal proportional to dM/dt, or dM/dx, is induced, so that the north poles induce, say, a positive signal and the south poles a negative signal, as shown (polarity arbitrary). This code is known as return to zero (RZ), since the magnetization returns to zero after each bit. Each bit has one north- and one south-pole region, so that two pulses per bit result. Not only are two pulses per bit redundant, but considerable space is wasted on the medium in the regions separating stored bits.

It is possible to push these bits closer together, as in Fig. 18.2.7b, so that the magnetization does not return to 0 when two successive bits are identical, as shown for the first two 1s; hence the name nonreturn to zero (NRZ). The result is then only one transition region and one signal pulse per bit. By adjusting the clocking pulses to coincide with the signal peaks, we have a coding scheme in which 0s are always negative signals and 1s are always positive (Fig. 18.2.7b). The difficulty is that only a change from 1 to 0 or 0 to 1 produces a pulse; a string of only 1s or only 0s produces no signals. This requires considerable logic and accurate clocking in the controller to avoid accumulated errors as well as to separate 1s from 0s.


FIGURE 18.2.7 Cross-sectional view of magnetic recording medium showing stored bits, signal patterns, and clocking for various codes.

One popular coding scheme is a slightly revised version of the above, namely, nonreturn to zero inverted (NRZI), in which all 1s are recorded as a transition (signal pulse) and all 0s as no transition, as shown in Fig. 18.2.7c. There is no ambiguity between 1s and 0s, but again a string of 0s produces no pulses. A double-clock scheme, with two clocks, each triggered by the peaks of alternate signals, is used to set the clock timing period to the following strobe point. NRZI is a common coding scheme for magnetic tapes used at medium density.
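The NRZI rule (a transition for every 1, none for a 0) is compact enough to state in a few lines; the level convention below is arbitrary:

    def nrzi_levels(bits, level=-1):
        """Magnetization level per bit cell: flip on a 1, hold on a 0 (Fig. 18.2.7c)."""
        out = []
        for b in bits:
            if b == 1:
                level = -level        # a 1 is recorded as a flux transition
            out.append(level)         # a 0 leaves the magnetization unchanged
        return out

    print(nrzi_levels([1, 1, 0, 0, 0, 1]))   # -> [1, -1, -1, -1, -1, 1]
    # The three 0s produce no transitions, hence no read-back pulses.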


FIGURE 18.2.7 (Continued)

For densities greater than 1600 b/in. the clocking and sensing become critical, and phase encoding (PE), shown in Fig. 18.2.7d, is often used. Since 1s give a positive signal and 0s give a negative signal, a signal is available for every bit. Phase encoding requires additional transitions within the medium, e.g., between successive 1s or successive 0s, as shown. Density, however, is usually limited by sensing, clocking, and other problems rather than by the medium capability.

For magnetic-disc recording, a double-frequency (DF) NRZI code is often used. This is obtained from NRZI by adding an additional transition just before each stored bit. The additional transition generates an additional pulse to serve as a clocking pulse. Hence a well-specified window is provided between bits to avoid clocking problems when a string of 0s is encountered.

Other popular and useful codes are frequency modulation (FM), modified frequency modulation (MFM), which is derived from the DF code, and the run-length-limited (RLL) code. Since the latter does not maintain a distinction between data and clock transitions, a special algorithm is required to retrieve the data. FM encodes 1 and 0 at different frequencies but entails a minimum of one pulse per digit for both: a pulse always appears at the beginning of a clock cycle, and if the data bit is a 1, there is an additional pulse. MFM eliminates the automatic pulse for each digit and so is more efficient. MFM retains the pulse for a 1; for encoding a 0, there is a pulse at the beginning of the period only if the preceding bit was also a 0. The encoding is illustrated in Fig. 18.2.7e. RLL is even more efficient; the run length is the number of bit positions without a flux transition between two consecutive transitions. Figure 18.2.7e shows the encoding for 11000101: FM has 12 flux transitions, while MFM requires only six (these counts are checked in the short sketch at the end of this discussion). In personal computers, MFM is used to achieve double-density recording, whereas FM is used for single density.

Writing the transition (north or south) in the medium is done by the trailing edge of the fringe field produced by the write head. Since writing is rather straightforward and is not a limiting factor, we dwell here on the more difficult problems of reading and clocking.

Read-back signals can best be understood in terms of the reciprocity theorem of mutual inductance. This theorem states that for any two coils in a linear medium, the mutual inductance from coil 1 to coil 2 is the same as that from coil 2 to coil 1. When applied to magnetic recording, the net result is that the signal observed across the read-head winding as a function of time, induced by a step-function magnetization transition, has a shape identical to the curve of Hx versus x for the same position of the medium below the head gap (Fig. 18.2.6). It is necessary only to replace x by vt, where v = velocity, to translate the x scale of Hx to the time scale of Vsig versus t. The Hx-versus-x curve, with a multiplication factor, is often referred to as the sensitivity function.
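As promised above, the FM and MFM transition counts for 11000101 can be checked with a short sketch of the two rules (our own restatement of them):

    def fm_transitions(bits):
        """FM: a clock transition every bit cell plus a data transition per 1."""
        return len(bits) + sum(bits)

    def mfm_transitions(bits, prev=1):
        """MFM: a transition per 1; a clock transition only between two 0s."""
        n = 0
        for b in bits:
            if b == 1:
                n += 1                # data transition in the middle of the cell
            elif prev == 0:
                n += 1                # clock transition at the cell boundary
            prev = b
        return n

    bits = [1, 1, 0, 0, 0, 1, 0, 1]   # the pattern of Fig. 18.2.7e
    assert fm_transitions(bits) == 12
    assert mfm_transitions(bits) == 6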


FIGURE 18.2.8 Bit crowding on read-back showing amplitude reduction and peak shift.

The writing of a transition is done by only one small portion of the Hx-versus-x curve, whereas the sense signal is determined by the entire shape of Hx versus x; that is, the signal is spread out. This fact gives rise to bit crowding, which makes the read-back process, rather than the writing process, the limiting factor on density. To understand bit crowding, suppose there are two step-function transitions of north and south poles separated by some distance L, as in Fig. 18.2.8. When these transitions are far apart, their individual sense signals, shown by the dashed lines, appear at the read winding. However, as L becomes small, the signals begin to overlap and, in fact, subtract from each other (linear superposition applies because the air gap makes the read head linear), giving both a reduction in peak amplitude and a time shift in the peak position, as shown. This represents, to a large extent, the actual situation in practice: the transitions can be written closer together than they can be read.

Clocking or strobing of the serial data as they come from the head, to convert them into digital characters, is another fundamental problem. If perfect clock circuits with no drift, and hence no accumulated error, could be made, the clocking problem would disappear. But all circuits have tolerances, and as the bit density increases, the time between bits becomes comparable to the drift in clock cycle times. Since the drift can be different during reading and writing, serious detection errors can result. For high density, it is necessary to have some clocking information contained within the stored patterns, as in the PE and double-frequency NRZI codes discussed previously.
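The amplitude loss and peak shift of Fig. 18.2.8 are easy to reproduce numerically. In the sketch below the isolated-transition pulse is modeled as a Lorentzian, an assumption of convenience rather than the true head response:

    def pulse(x, w=1.0):
        """Assumed isolated read-back pulse of half-width w (a Lorentzian)."""
        return 1.0 / (1.0 + (x / w) ** 2)

    def readback(x, L):
        """Linear superposition of N and S transitions a distance L apart."""
        return pulse(x + L / 2) - pulse(x - L / 2)

    for L in (6.0, 2.0, 1.0):
        xs = [i * 0.01 for i in range(-800, 801)]
        ys = [readback(x, L) for x in xs]
        peak = max(ys)
        shift = -L / 2 - xs[ys.index(peak)]   # outward displacement of the peak
        print(f"L={L}: peak amplitude={peak:.2f}, peak shift={shift:.2f}")
    # As L shrinks, the pulses subtract: the peaks drop in amplitude and move apart.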

MAGNETIC TAPE

The most common use of magnetic tape systems is to provide backup for hard disk drives. Since either seven or nine tracks are written across the width of the tape, either seven or nine read-write heads are required to store one complete character or byte at a time. The bit spacing along a track is approximately the reciprocal of the linear density, or 0.00125 in. for an 800-b/in. system. The actual transition lengths are generally about half this bit-cell spacing. In many systems there are separate read and write gaps in tandem, to check the reliability of the recording by reading immediately after writing. The tape is mechanically moved back and forth in contact with the heads, all under the direction of a controller. Tape transports in digital systems must be able to start and stop quickly and achieve high tape speed rapidly. The streaming tape drive used in PCs does not start and stop quickly; it is used to back up disk systems.

The stored addressing information on tapes is relatively simple, consisting of specially coded bits and tape marks in addition to interrecord gaps (IRGs). The latter are blank spaces on the tape that provide room to accelerate



and decelerate between records, since reading and writing can be done only at constant velocity. The common gap sizes are 0.6 and 0.75 in. Typically, a tape recorded at 800 b/in. with eight tracks plus parity, storing records of 1K B each separated by 0.6-in. gaps, can hold over 10^8 b of data. Even though this represents a large capacity, the gap spaces consume nearly 50 percent of the tape surface, a rather extravagant amount. In order to increase efficiency, records are often combined into groups known as blocks. Since the system can stop and start only at an interrecord gap, the entire block is read into main memory for further processing during one read operation. The highest-density tapes can hold nearly eight times the above amount.

Half-inch reel tapes are beginning to be replaced by shorter reels mounted in self-contained cartridges. One tape drive can hold multiple cartridges, which can be mechanically selected and mounted. Typical cartridge tapes are 165 m long and record 18 tracks on 1/2-in. tape at about 25K flux transitions per inch (972 flux transitions per millimeter).
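A rough check of these figures (the 2400-ft reel length is our assumption; one byte is recorded per frame across the nine tracks):

    density_bpi  = 800            # bytes per inch of tape (one byte per frame)
    record_bytes = 1024           # 1K-B records
    gap_in       = 0.6            # interrecord gap, inches
    tape_in      = 2400 * 12      # a full-size 2400-ft reel (assumed)

    record_in  = record_bytes / density_bpi       # 1.28 in. of tape per record
    records    = int(tape_in // (record_in + gap_in))
    total_bits = records * record_bytes * 8
    gap_frac   = gap_in / (record_in + gap_in)

    print(f"{records} records, {total_bits:.3g} bits, {gap_frac:.0%} of tape in gaps")
    # -> about 1.3e8 bits, i.e., over 10^8 b; blocking several records together
    #    shrinks the fraction of the surface lost to gaps.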

DIRECT-ACCESS STORAGE SYSTEMS—DISCS

The major type of direct-access storage system is the disc. The system's recording head usually consists of one gap, which is used for both reading and writing. The head is "flown" on an air cushion above the disc surface at separations in the neighborhood of 5 to 100 µin., depending on the system. A well-controlled separation is vital to reliable recording. For discs, the medium consists of very thin coatings, about 10 µin. or less, of the same magnetic material used on tapes but applied to polished aluminum discs. Several discs are usually mounted on one shaft, all rotated in unison, and each surface is serviced by one head. The arms and heads are moved mechanically along a radial line; each fixed position sweeps out a track on each surface, the entire group of heads sweeping out a cylinder. A typical bit cell is a rectangle 0.005 in. wide by 0.0005 in. long for a 2000-b/in. linear density. The transition length is about half this size.

The fundamental difference between various disc systems centers on the stored addressing information and addressing mechanisms built into the system. Some manufacturers provide a rather complex track format permitting the files to be organized in many different ways, using keys, identifying numbers, stored data, or other techniques for finding a particular word. Thus the user can "program" the tracks and records to retrieve a particular word "on the fly." This provides a very versatile system, but only at additional cost, since considerable function must be built into the controller. Other systems use a very simple track format, consisting mainly of gaps and sector marks, that does not permit the user to include programmable information about the data. This scheme is more suitable for well-organized data, such as scientific data; it can still be used in other applications with more user involvement.

Floppy discs are widely used as inexpensive high-density, medium-speed peripherals. Floppy discs often consist of the same flexible medium (hence the name floppy) as magnetic tape but cut in the form of a disc. Such discs straighten out when spun. The read-write mechanism is usually identical to that above except that the head is in contact with the medium, as in tape. This causes wear that is more significant than in ordinary discs, requiring frequent replacement of the disc and occasional replacement of the head, particularly when heavily used. Floppy discs use tracks in concentric circles, with movable heads and a track-following servo, and the systems record on both sides. The major deviations from flying-head discs are much smaller track density and data rate; the linear bit densities are quite comparable to those of tape. Typical parameters are 77 to 150 tracks per disc surface, 1600 to 6800 b/in., and rotation from 90 to 3600 r/min.

Optical discs provide more storage than magnetic discs and are in common use. One optical disc can contain the same information as 20 or more floppy discs.

VIRTUAL-MEMORY SYSTEMS

Virtual memory is a term usually applied to the concept of paging main memory out of a disc. This concept makes the normal-sized main memory appear to the user as large as the virtual-address space (represented by a register holding the virtual address) while still appearing to run at essentially the speed of the actual memory. Virtual memories are particularly useful



in multiprogrammed systems. On the average, virtual memory provides for better management of the memory resource, with little space wasted by fragmentation, which can otherwise be quite severe.

To understand fragmentation, suppose that a number of programs to be processed require small, medium, and large amounts of memory on a multiprogrammed machine. Suppose further that a small and a large program are both resident in main memory and that the small one has been completed. The operating system attempts to swap into memory a medium-sized program, to be processed, but it cannot fit in the space freed by the small program. If none of the other waiting programs is small enough to fit, this memory space is wasted until the large program has been completed. It can be seen that with many different-sized programs, fitting several of them into available memory space becomes difficult and leads to much unusable memory at any one time, i.e., fragmentation. Virtual memory avoids this by breaking all programs into pages of equal size and dynamically paging them into main memory on demand.

The identical concepts used in a virtual memory are applicable to a cache paged out of main memory; only the details of implementation are different, as shown later. We discuss first the basic concepts as applied to a main memory paged out of a disc and indicate, wherever possible, the differences applicable to a cache.

All virtual memories start with a virtual address that is larger than the address of the available main memory. Such being the case, the desired information may not be resident in the memory, and it is necessary to find out whether, in fact, it is present. If the information is resident, it is necessary to determine where it resides, because the physical address cannot bear a one-to-one correspondence to the virtual address. In fact, in the most general case there is no relationship between the two, so that an address-translation scheme is necessary to find where the information does reside. If the requested page is not resident in memory, a page fault results. This requires a separate process to find the page on the disc, remove some page from memory, and bring the new page into this open spot, called a page frame. Which page to remove from main storage is determined by a page-replacement algorithm that replaces some page "not recently used." The address-translation and page-replacement functions are shown conceptually in Fig. 18.2.9.
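A minimal demand-paging sketch, with an invented "not recently used" replacement choice, shows the moving parts just described:

    class PagedMemory:
        """Toy demand-paged memory: page table plus crude NRU replacement."""
        def __init__(self, frames):
            self.frames = frames          # number of page frames in main memory
            self.page_table = {}          # virtual page number -> frame number
            self.used = {}                # frame -> recently-used flag

        def access(self, vpage):
            if vpage in self.page_table:              # translation succeeds
                self.used[self.page_table[vpage]] = True
                return "hit"
            if len(self.page_table) < self.frames:    # a free frame exists
                frame = len(self.page_table)
            else:                                     # page fault with memory full:
                idle = [p for p, f in self.page_table.items() if not self.used.get(f)]
                if not idle:                          # everything recently used,
                    self.used.clear()                 # so age the flags and retry
                    idle = list(self.page_table)
                frame = self.page_table.pop(idle[0])  # page the victim out to disc
            self.page_table[vpage] = frame            # page the new one in
            self.used[frame] = True
            return "fault"

    mem = PagedMemory(frames=2)
    print([mem.access(p) for p in (0, 1, 0, 2, 1)])
    # -> ['fault', 'fault', 'hit', 'fault', 'hit']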

FIGURE 18.2.9 Block diagram of address-translation and page-replacement process.


Thus there are at least three fundamental requirements: (1) a mapping function to specify how pages from the disc are to be mapped into physical locations in memory; (2) an address-translation function to determine if and where a virtual page is located in main memory; and (3) a replacement algorithm to determine which page in memory is to be removed when the "if" part of the translation answers no, i.e., when a page fault occurs. These are the three fundamental requirements needed to implement any virtual-memory system, either a virtual main memory as described here or a cache.

MAPPING FUNCTION AND ADDRESS TRANSLATION

The mapping function is a logical construct, whereas the address-translation function is the physical implementation of the mapping function. Mapping functions cover a range from direct mapping to fully associative mapping, with a continuum of set-associative mapping functions in between. A very simple way to understand maps is to consider the example of building one's own personal telephone directory "paged" out of larger telephone books. Assume that the personal directory is of fixed size, say 4(26) = 104 names, addresses, and associated telephone numbers.

A direct mapping from the large books to the personal directory is an alphabetical listing of the 104 names. Given any name, we can go directly to the directory entry, if present. Such an address translation could be hard-wired if desired. There are two difficulties with direct mapping: (1) it is very difficult to change, and (2) suppose we allow one entry for Jones; if later we wish to include another Jones, there is no room. If both Joneses are needed, there is a conflict unless we restructure the entire directory. Because of such conflicts, direct maps are seldom used.

At the other end of the spectrum is a fully associative directory, in which 104 names in any combination are placed in any positions of the directory. This directory is very easily changed, because a name not frequently used can simply be removed and a new name entered in its place without regard to the logical (alphabetical) relation of the two names. For instance, if the directory is full and we wish to make a new entry Zeyer, we first find a name not used much recently; if Abas is in position 50 of the directory, we remove Abas and replace the entry with Zeyer. The major difficulty, obviously, is in searching the directory. If we wish to know the number for Smith, we must associatively search the entire directory (worst case) to find the desired information. This is very time-consuming and impractical in most cases; imagine an ordinary telephone directory organized associatively.

There are several ways to resolve the fundamental conflict between ease of search and ease of change. The fully associative directory can be augmented with a separate, directly organized and accessed table that contains a list of all names. The only other piece of data in the table is a number indicating the entry each name currently occupies in the associative directory. If a directory entry is changed, the new entry number must be placed in this table. If we wish to access a given name, a direct access to the table gives the entry number; a subsequent direct access to this entry number gives the desired address and telephone number. The penalty is the two accesses plus the storage and maintenance of the translation table. Nevertheless, this is exactly the scheme used in all virtual main memories paged out of disc, drum, or tape (a sketch follows below). The table is typically broken into a hierarchy of two tables, called the segment table and the page table, to facilitate the sharing of segments among users and to allow the tables themselves to be paged as units of, say, 4K B. These tables are built, manipulated, and maintained by supervisory programs and generally are invisible to the user. Although the tables can consume large amounts of main memory and of system overhead, the saving greatly exceeds the loss. An example of such a virtual memory with a fully associative mapping using a two-level table-translation scheme is illustrated in Fig. 18.2.10.
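A few lines make the two-access scheme concrete (the directory size and names are taken from the example above; the code itself is only an illustration):

    directory = [None] * 104            # the fully associative directory entries
    table = {}                          # direct-access table: name -> entry number

    def put(name, number, entry_no):
        old = directory[entry_no]
        if old:
            table.pop(old[0], None)     # the displaced name is no longer resident
        directory[entry_no] = (name, number)
        table[name] = entry_no          # keep the translation table current

    def lookup(name):
        entry_no = table.get(name)      # first direct access: the translation table
        if entry_no is None:
            return None                 # name not resident
        return directory[entry_no][1]   # second direct access: the entry itself

    put("Abas", "555-0100", 50)
    put("Zeyer", "555-0199", 50)        # replace the little-used entry in position 50
    assert lookup("Zeyer") == "555-0199" and lookup("Abas") is None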
Each user has a separate segment and page table (the two are, in principle, one table) stored in main memory along with the user's data and programs. When an access is required to a virtual address, say NvNr, as in Fig. 18.2.10, a sequence of several accesses is required. The user ID register bits m give a direct address to the origin of that user's segment table in main memory. The higher-order segment-index (SI) bits of the virtual address (typically 4 to 8 b) specify the index (depth) into this table for the required entry. This segment-table entry contains a flag specifying whether the entry is valid and a where address specifying the origin of that user's page table in memory, as shown. The lower-order page-index (PI) bits of the virtual address specify the index into the page table, as shown. The page-table entry so accessed contains an if bit to indicate whether the entry is valid (i.e., whether the page is present in main memory) and a where address that gives the real main-memory address of the desired page.
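The if/where walk through the two tables might be sketched as follows; the field widths and table contents are invented for the example:

    PAGE_BITS = 12                      # 4K-B pages: the Nr field needs no translation

    # Per-user segment table: SI -> (valid, page table); page table: PI -> (present, frame).
    segment_table = {0: (True, {0: (True, 7), 1: (False, None)})}

    def translate(si, pi, offset):
        valid, page_table = segment_table.get(si, (False, None))
        if not valid:
            raise LookupError("segment fault")       # no such segment for this user
        present, frame = page_table.get(pi, (False, None))
        if not present:
            raise LookupError("page fault")          # page must be fetched from disc
        return (frame << PAGE_BITS) | offset         # real address = frame ++ Nr

    assert translate(0, 0, 0x1A3) == (7 << PAGE_BITS) | 0x1A3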


FIGURE 18.2.10 Virtual-storage-address translation using a two-level table (segment and page) for each user.

The lower-order Nr bits of the address, typically 11 or 12 b for 2K- or 4K-B pages, are real, representing the word or byte within the page, and hence do not require translation.

In cache memories, speed is so important that a fully associative mapping with a table-translation scheme is impractical. Instead, a set-associative mapping and address translation is implemented directly in hardware. Although it is not generally realized, set-associative mapping and translation are commonly used in everyday life; they are a combination of a partially direct mapping and selection with a partially associative mapping and translation. Examples include the 4-way set-associative organization in Fig. 18.2.11 and the most common type of personal telephone directory, shown in Fig. 18.2.12, where a small index knob is moved to the first letter of the name being sought. Suppose there is one known position on the directory for each letter of the alphabet and we organize the directory with a set associativity of four. This means that for each letter there are exactly four entries or names possible, and these four can be in any order, as shown by the insert of Fig. 18.2.12. To find the telephone number for a given name such as Becker, we first do a direct selection on the letter B, followed by an associative search over the four entries. Thus it is apparent that a set-associative access is a combination of the two limiting cases, namely, part direct and part associative. Many different combinations are possible, with various advantages and disadvantages.

This set-associative directory (Fig. 18.2.12), with four names or entries per set, is exactly the same fundamental type used in many cache-memory systems. The directory is implemented in a random-access memory array as shown in Fig. 18.2.11, where there are 128(4) entries in total, requiring nine virtual page-address bits. Each set of four is part of one physical word (horizontal) of the random-access array, so there are 128 such words, requiring seven address bits.


FIGURE 18.2.11 Cache directory using 4-way set associative organization.

FIGURE 18.2.12 Fundamentals of a general set-associative directory showing direct and associative parts of the addressing process.


The total virtual page address of nv = 9 b must be used in the address translation to determine if and where the cache page resides. As before, the lower-order bits nr, which represent the byte within the page, need not be translated. Seven virtual bits are used to select directly one of the 128 sets, as shown; this is analogous to moving the index knob in Fig. 18.2.12, and it reads out all four names of the set. The NAME part is 2 b long, representing one of the four entries of the set. All four are compared simultaneously with the corresponding 2 b of the virtual address. If one of these gives a "yes" on a compare-equal, the "real" address of the page in the cache, which resides in the directory alongside the correct NAME, is gated to the "real" cache-address register.
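The directory lookup of Fig. 18.2.11 thus reduces to a direct selection plus four simultaneous compares; a sketch with invented entry contents:

    SETS, WAYS, NAME_BITS = 128, 4, 2      # 128 sets of 4: 9 virtual page-address bits

    # directory[set][way] = (valid, NAME, real cache page) -- contents illustrative.
    directory = [[(False, 0, 0)] * WAYS for _ in range(SETS)]
    directory[5][2] = (True, 0b10, 0x1F)   # one resident page, placed by hand

    def cache_lookup(vpage):
        set_ix = vpage & (SETS - 1)        # direct part: 7 bits select one set
        name = (vpage >> 7) & ((1 << NAME_BITS) - 1)   # associative part: 2-b NAME
        for valid, entry_name, real in directory[set_ix]:
            if valid and entry_name == name:           # the four compares at once
                return real                # gate the real cache address onward
        return None                        # miss: the page is not in the cache

    assert cache_lookup((0b10 << 7) | 5) == 0x1F
    assert cache_lookup(5) is None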


CHAPTER 18.3

INPUT/OUTPUT

INPUT-OUTPUT EQUIPMENT

Input-output (I/O) equipment includes cathode-ray tubes and other display devices; printers; keyboards; character, handwriting, voice, and other recognition devices; optical scanners and facsimile equipment; speech synthesizers; process-control sensors and effectors; various pointing devices such as "mice," wands, joysticks, and touch screens; card readers and punches; paper tapes; magnetic storage devices such as tape cassette units, floppy and hard disk drives, drums, and various other types of tape drives; and several different types of optical storage devices, mainly disks.

Such equipment presents a wide range of characteristics that must be taken into account at the interface with the central processing unit (CPU) and its associated main memory. A magnetic tape unit usually operates serially by byte, writing or reading information of varying record length. Different types of printers deal with information on a character, line, or page basis, in both coded (character) and noncoded modes. Most cathode-ray tube display devices build up the image in a bit-by-bit raster, but others handle data in the form of vectors or even directly coded as characters. Different types of I/O gear operate over widely different speed ranges, from a few bytes per second to millions of bytes per second. Many I/O devices do little more than pass bits to a processor, while others have substantial internal processing power of their own. In the many systems making use of telecommunications, an ever-growing array of I/O equipment can be attached to a variety of telecommunications links, ranging from twisted-pair copper wire to long-haul satellite and optical fiber links. Virtually any I/O device can be attached both locally and remotely to other elements of a system.

Data are entered by human operators, read from storage devices, or collected automatically from sensors or instruments. Keyboards, wands, and character-recognition equipment are typical input devices, while printers and displays are the most common output devices. Storage devices that can function in a read-write (R/W) mode naturally can fill both functions.

I/O CONFIGURATIONS

Input-output devices can be attached to other elements of the system in a wide variety of ways. When the attachment is local, the attachment may be an I/O bus or a channel. Figures 18.3.1 and 18.3.2 show typical bus-attachment configurations of small systems such as personal computers. A program gains access to the I/O gear through the logic circuitry of the ALU. For example, to transfer information to I/O gear, a program might extract the requested information from storage, edit and format it, add the address of the I/O device to the message, and deliver it to the bus, from which the proper I/O device can read it and deliver it to the requestor. In this simple



configuration, time must be spent by the CPU waiting for delivery of the information to the I/O gear and, if so programmed, for reception of an acknowledgment of the receipt. If several types of I/O gear are used, the program or, more commonly, the operating system must take into account variations in the format, control, and speed of each type.

FIGURE 18.3.1 Machine organization with an I/O bus.

A common method of improving the performance of the I/O subsystem is I/O buffering. A buffer may be part of the CPU or a segment of main memory, which accumulates information at machine speeds so that subsequent information transfers to the I/O gear can take place in an orderly fashion without holding up the CPU. In other cases, the I/O gear itself will be buffered, thus enabling, for example, synchronous input devices to operate at maximum throughput without having to repeatedly interrupt the CPU. A buffered system is desirable in all but the simplest configurations and is all but essential in any case in which the speed of the I/O device is not matched to the data rate of the receiving communications link or I/O bus.
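The buffering idea, reduced to its essentials (a Python deque standing in for the buffer storage):

    from collections import deque

    buffer = deque()                   # the I/O buffer, e.g., a main-memory segment

    def cpu_write(data):
        buffer.append(data)            # the CPU deposits output at machine speed

    def device_drain():
        while buffer:                  # the slow device empties the buffer later,
            print("device <-", buffer.popleft())   # without holding up the CPU

    for line in ("record 1", "record 2"):
        cpu_write(line)                # returns immediately; the CPU continues
    device_drain()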

I/O MEMORY–CHANNEL METHODS

The arrangement shown in Fig. 18.3.1 involves information transfer from the I/O gear into the ALU and thence to main store. With a modularized main store, I/O data can instead be entered into, and extracted from, main storage directly. Direct access to storage implies control interrelationships at the interface between the I/O and the CPU to ensure coordination of accesses between the two units. The basic configuration used for direct access of I/O into storage is shown in Fig. 18.3.2. The connecting unit between the main store and the I/O gear is called a channel. A channel is not merely a data path but a logical device that incorporates control circuits to fulfill the relatively complex functions of timing, editing, data preparation, I/O control, and the like.

TERMINAL SYSTEMS

Terminal systems use key-entry devices with a display and/or printer for direct access to a computer on a time-shared basis. The programming system in the CPU is arranged to scan a number of such terminals through a set of ports. Each character, line, or block of data from each terminal is entered into a storage area in the computer until an appropriate action signal is received. Then, by means of an internal program, the computer generates such responses as error signals, requests for data, or program outputs. Terminal systems extend the power of a large computer to many users and give each user the perception of using an independent system. Terminals remotely connected to host CPUs fit the same overall description, except that communication is mediated by various network architectures and protocols (q.v.).

FIGURE 18.3.2 Machine organization with an I/O bus and I/O buffer.


A terminal that contains a high level of logical capability is often called an intelligent terminal; a personal computer with a communications adapter or modem increasingly fits this description. If there are many terminals and a centralized computer or computer complex in a system, it is sometimes more economical to use simpler terminals and place the logic in the computer. More commonly, it has been the practice to attach a number of dumb terminals to an intelligent control unit, which, in turn, communicates with a host processor. Economic factors such as communication-line cost and hardware cost, as well as such intangibles as system reliability and data security, are among the factors that influence the choice. The proliferation of communicating personal computers has formed the basis of local and wide area networks (LANs and WANs). These networks provide both connectivity between sites and local processing power.

PROCESS-CONTROL ENTRY DEVICES

In process control, e.g., in chemical plants and petroleum refineries, inputs to the computer generally come from sensors (see Sec. 8) that measure such physical quantities as temperature, pressure, rate of flow, or density. These units operate with a suitable analog-to-digital converter that forms direct inputs to the computer. The CPU may in turn communicate with such devices as valves, heaters, refrigerators, and pumps, on a time-shared basis, to feed the requisite control information back to the process-control system.

MAGNETIC-INK CHARACTER-RECOGNITION EQUIPMENT

A number of systems have been developed to read documents by machine; magnetic-ink character recognition is one. Magnetic characters are deposited on paper or other carrier materials in patterns designed to be recognized both by machines and by operators. A change in reluctance, associated with the presence of magnetic ink, is sensed by a magnetic read head. The form of the characters is selected so that each yields a characteristic signature.

OPTICAL SCANNING

Information to be subjected to data processing comes from a variety of sources, and often it is not feasible to use magnetic-ink characters. To avoid the need for retranscription of such information by key-entry methods, devices to read documents optically have been developed. Character recognition by optical means occurs in a number of sequential steps. First, the characters to be recognized must be located and initial starting points on the appropriate characters found. Second, unless the characters have been intentionally constrained to lie in well-defined locations, it is necessary to segment the characters, that is, to determine where one character leaves off and the next one begins. Third, the characters must be scanned to generate a sequence of bits that represents each character. Finally, the resulting bit pattern must be compared with prestored reference patterns in order to identify the pattern in question.

The scanning can be performed in a number of ways. In one technique the pattern to be recognized is transported past a linear photodetector array, and the output is sampled at periodic intervals to determine the bit pattern. A second way is to scan a laser beam across the page using a rotating mirror or holographic mirror device, with the light scattered from the document being detected by an assembly of photodetectors. Other approaches that use various combinations of arrays of sources and detectors with mechanical light-deflection devices are also used. As an alternative to comparing the scanned bit pattern with stored references, a correlation function between the input and stored patterns can be computed; optical spatial filtering is occasionally used as well.

Another common method of data entry is through the encoding of characters into a digital format, permitting a scan by a wand or, alternatively, by moving the pattern past a "stationary" scanning laser pattern. Typical of this encoding scheme is the bar code known as the universal product code (UPC), which was standardized in the supermarket industry by agreement on the code selected by the Uniform Grocery Product Code Council, Inc.
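The final matching step can be sketched with tiny invented 3-by-3 reference patterns, using Hamming distance as the comparison measure:

    REFERENCES = {                     # one 9-dot reference pattern per character
        "I": 0b010_010_010,
        "L": 0b100_100_111,
        "T": 0b111_010_010,
    }

    def recognize(scanned):
        """Return the reference character whose pattern differs in the fewest dots."""
        def distance(a, b):
            return bin(a ^ b).count("1")
        return min(REFERENCES, key=lambda c: distance(REFERENCES[c], scanned))

    assert recognize(0b111_010_010) == "T"   # exact match
    assert recognize(0b111_010_110) == "T"   # one noisy dot is still closest to T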


BATCH-PROCESSING ENTRY

One computer can be used to enter information into another. For example, it is often advantageous for a small computer system to communicate with a larger one. The smaller system may receive a high-level-language problem, edit and format it, and then transmit it to the larger system for translation. The larger system, in turn, may generate machine language that is executed at the remote location when it is transmitted back. Remote systems may receive data, operate on them, and generate local output, with portions transmitted to a second unit for filing or incorporation into summary journals. In other cases, two CPUs may operate on data in a generally independent fashion but may be so interconnected that in the event of failure one system can assume the functions of the other; in still other cases, systems may be interconnected to share the work load. Computer systems that operate to share CPU functions are called multiprocessing systems.

PRINTERS

Much of the output of computers takes the form of printed documents, and a number of printers have been developed to produce them. The two basic types of printer are impact printers, which use the mechanical motion of type slugs or dot-forming hammers driven against a carbon film or ink-loaded fabric ribbon to mark the paper, and nonimpact printers, which use various physical and chemical phenomena to produce the characters.

Printers are additionally characterized by the sequence in which characters are placed on the page. Serial printers print one character at a time by means of a moving print head; after a line of characters has been printed, the paper is indexed to the next print line. Line, or parallel, printers print all the characters on a print line at (nearly) the same time, after which the paper is indexed. Finally, page printers are limited by their technology to marking an entire piece of paper before the output is accessible to a user; most such printers actually generate a page one dot at a time using some type of raster-scanning means.

Printers may be either fully-formed-character printers or matrix (all-points-addressable) printers. Matrix printers build up characters and graphics images dot by dot; some matrix-printing mechanisms allow the dots to be placed anywhere on a page, while others are constrained to printing on a fixed grid. Dot size (resolution) ranges from 0.012 in. (0.3 mm) in many wire-matrix printers to 0.002 in. (0.05 mm) in some nonimpact printers. Dot spacing (addressability) ranges from as few as 60 dots per inch (24 dots per centimeter) to more than 800 dots per inch. The speed of serial printers typically ranges from 10 to over 400 characters per second, line-printer speeds range from 200 to about 4000 lines per minute, and page printers print from 4 to several hundred pages per minute.

IMPACT PRINTING TECHNOLOGIES

Figure 18.3.3 is a sketch of a band printer, the most common high-speed impact line printer. Band printers are used where price and throughput (price/performance) are the dominant criteria. In this printer, the characters are engraved on a steel band, which is driven continuously past a bank of hammers (usually 132) at speeds of the order of 500 cm/s. A ribbon is suspended between the band and the paper, behind which the hammer bank is located; this is known as back printing. When a desired character is located at a specific print position, that hammer is actuated without stopping the band. Several characters may be printed simultaneously, although activating too many hammers at one time must be avoided, since that would slow or stop the band and overload the hammer power supply. Timing marks (and an associated sensor) provide precise positional information for the band.

Figure 18.3.4 is a schematic sketch of a print head for a wire-matrix printer. These printers are most commonly used with personal computers. The print head contains nine hammers, each of which is a metal wire. The wires are activated in accordance with information received from a character generator, so that a desired character can be printed as an array of dots. The wires strike a ribbon, and the print head is positioned on a carrier. Printing is done while the carrier is in continuous motion, and it is necessary to know the position to an


FIGURE 18.3.3 An engraved band print mechanism.

FIGURE 18.3.4 Schematic drawing of wire-matrix print head and print mechanism.


accuracy of a fraction of a dot spacing, that is, to much better than 0.1 mm. To do this, an optical grating is placed on the printer frame, and the motion of the carrier is detected by the combination of a simple light source and a photodetector. Other sensing and control schemes are also used.

Quite attractive printing can be generated with a wire-matrix printer if the dots are more closely spaced. This can be done in the serial printer shown here by printing the vertical rows of dots in an overlapping fashion and then indexing the page vertically by only one-half or one-third of the dot spacing in the basic matrix, so that the dots also overlap in the other direction. Many variations of this print mechanism are available, among them the use of 18 or 24 wires and the use of multicolor ribbons to enable color printing.

NONIMPACT PRINTERS

Nonimpact printers use various mechanisms to imprint characters on a page. There are two major classes of such devices: those using special paper and those using ordinary paper. In one type of special-paper device, the paper is coated with chemicals that form a dark or colored dye when heated; characters are formed by contact with selectively heated wires or resistors that form a matrix character. Another type of special paper, known as electro-erosion paper, is coated with a thin aluminum film that can be evaporated when subjected to an intense electric spark obtained from a suitably actuated array of wires. Another method of nonimpact printing on special paper, called electrography or, more often, electrostatic printing, uses a dielectric-coated paper that is directly charged by ion generation. In this case an electrostatic latent image is generated by an array of suitably addressed styli. The charged paper can be toned and fixed as in electrophotography, also known as xerography. In the xerographic process, the image is developed by exposing the latent image to a toner, which may be very small solid powder particles or a suspension of solid particles or a dye in a neutral liquid. The toner particles, suitably charged, adhere either to the charged or to the neutral portions of the image. This pattern is then fixed by heat, by pressure, or by evaporation of the liquid carrier.

Most plain-paper nonimpact page printers combine laser and electrophotographic techniques to produce high-speed, high-quality computer printout. The printing process uses a light-sensitive photoconductive material (such as selenium or various organic compounds) wrapped around a rotating drum. The photoconductor is electrically charged and then exposed to light images of alphanumeric or graphic (image) information. These images selectively discharge the photoconductor where there is light and leave it charged where there is no light. A powdered black toner material is then distributed over the photoconductor, where it adheres to the unexposed areas and not to the exposed areas, thus forming a dry-powder image. This image is then electrostatically transferred to the paper, where it is fixed by fusing it with heat or pressure. In some printers the toner is suspended in a liquid carrier, and the image is fixed by evaporation of the carrier liquid. A different choice of toner materials allows development of the unexposed rather than the exposed areas. The various elements of this process can all be quite complex, and many variations are used. (See Fig. 18.3.5.)

IMAGE FORMATION

Electrophotography, transfer electrophotography, and thermal methods require that optical images of characters be generated. In some types of image generation, the images of the characters to be printed are stored. Other devices use a linear sweep arrangement with an on-off switch, in which the character image is generated from a digital storage device that supplies bits to piece together images. In the latter system any type of material can be printed, not just the character sets stored; the unit is said to have a noncoded-information (NCI) capability or an all-points-addressable (APA) capability.

Patterns are generated either by scanning a single source of energy, usually a laser in printers and an electron beam in display devices, and selectively modulating the output, or else by controlling an array of transducers. The array can be the actuators in a wire matrix printer, light-emitting diodes, ink-jet nozzles (see below), an array of ion sources, thermal elements, or magnetic or electrical styli.


FIGURE 18.3.5 Xerographic printer mechanism.

The output terminals for digital image processing are high-resolution monochrome or color monitors for real-time displays. For off-line processing, a high-resolution printer is required. The inputs to digital image-processing systems are generally devices such as the vidicon, flying-spot scanner, or color facsimile scanner.

INK JETS

Another method of direct character formation, usually used on untreated paper, employs ink droplets. An example of this technique is the continuous-droplet ink-jet printer, which has been developed for medium- to high-quality serial character printing and is also usable for very high speed line printing. When a stream of ink emerges from a nozzle vibrated at a suitable rate, droplets tend to form in a uniform, serial manner.

FIGURE 18.3.6 Ink-jet printing. Ink under pressure is emitted from a vibrating nozzle, producing droplets that are charged by a signal applied to the charging electrodes. After charging, each drop is deflected by a fixed field, the amount of deflection depending on the charge previously induced by the charging electrodes.


FIGURE 18.3.7 Nozzle-per-spot binary ink-jet printer.

Figure 18.3.6 shows droplets emerging from a nozzle and being electrostatically charged by induction as they break off from the ink stream. In subsequent flight through an electrostatic field, the droplets are displaced according to the charge they received from the charging electrode. Droplets generated at high rates (of the order of 100,000 droplets per second) are guided in one dimension and deposited upon untreated paper. The second dimension is furnished by moving the nozzle relative to the paper. Since the stream of droplets is continuous, it is necessary to dispose of most of the ink before it reaches the paper. This is usually done by selectively charging only those drops that should reach the paper; uncharged drops are intercepted by a gutter and recirculated.

An array of nozzles can be used in combination with paper displacement to deposit characters. The simplest form of such a printer has a separate nozzle for each dot position extending across the page (Fig. 18.3.7). Uncharged droplets proceed directly to the paper, while unwanted drops are charged so that they are deflected into a gutter for recirculation. Moving the paper past the array of nozzles provides full two-dimensional coverage of the paper. Such a binary nozzle-per-spot printer can print tens of thousands of lines per minute. Ink-jet systems have also been developed using very fine matrices for character generation, producing high document quality.

VISUAL-DISPLAY DEVICES

Visual-display devices associated with a computer system range from console lights that indicate the internal state of the system to cathode-ray-tube displays that can be used for interactive problem solving. In the cathode-ray tube, a raster scan, in conjunction with suitable bit storage, generates the output image. For certain graphics applications, it is preferable to write the vectors making up the image directly, rather than by means of raster deflection. Such a vector mode provides higher-quality output but can suffer from refresh limitations when a large number of vectors must be displayed. Such displays are usually used with a keyboard for information entry, so that the operator and computer can operate in an interactive mode. Graphic input devices such as mice, joysticks, touch panels, and the like are also often used.

A relatively low-cost, low-power, high-performance display is the liquid-crystal display (LCD). Liquid crystals differ from most other displays in that they depend on external light sources for visibility; i.e., they employ a light-valve principle.


This type of display is composed of two parallel glass plates with conductive lines on their inner surfaces and a liquid-crystal compound (e.g., of the nematic variety) sandwiched between them. In the dynamic-scattering display, the clear organic material becomes opaque and reflective when subjected to an electric field. As in other displays, the characters are built up from segments or dots. Color LCDs are available; in one version, a dot triad is used, with thin-film color filters deposited directly on the glass. Again, the complexity and cost are such that where low power and a thin form factor are not critical, the technology has been uncompetitive with CRTs.


CHAPTER 18.4

SOFTWARE

NATURE OF THE PROBLEM

Even though hardware costs have declined dramatically over a 30-year period, the overall cost of developing and implementing new data-processing systems and applications has not decreased. Because developing software is a predominantly labor-intensive effort, overall costs have been increasing. Furthermore, the problems being solved by software are becoming more and more complex, which creates a real challenge to achieve intellectual and management control over the software-development process.

The successful development of software requires discipline and rigor coupled with appropriate management control arising from adequate visibility into the development process itself. This has led to the rise of software engineering, defined as the application of scientific knowledge to the design and construction of computer programs and the associated documentation, and to the widespread use of standardized, commercially available software packages. In addition, a set of software tools has been developed to assist in systems analysis and design. These tools, often called computer-assisted systems engineering, or CASE, tools, mechanize the graphic and textual descriptions of processes, test interrelationships, and maintain cross-referenced data dictionaries.

THE SOFTWARE LIFE-CYCLE PROCESS

In the earlier history of software the primary focus was on its development, but it has become evident that many programs are not one-shot consumables but are tools intended to be used repetitively over an extended time. As a result, the entire software life cycle must be considered. The software life cycle is the period of time over which the software is defined, developed, and used.

Figure 18.4.1 shows the traditional model of the software life-cycle process and its five major phases. It begins with the definition phase, which is the key to everything that follows. During the definition phase, the requirements to be satisfied by the system are developed, and the system specifications, both hardware and software, are written. These specifications describe what the software product must accomplish. At the same time, test requirements should also be developed as a requisite for systems acceptance testing. The design phase is concerned with the design of a software structure that can meet the requirements; the design describes how the software product is to function. During the development phase, the software product is itself produced, implemented in a programming language, tested to a limited degree, and integrated. During the test phase, the product is extensively tested to show that it does in fact satisfy the user's requirements. The operational phase includes the shipment and installation of the data-processing system in the user's facility. The system is then employed by the user, who usually embarks on a maintenance effort, modifying the system to improve its performance and to satisfy new requirements. This effort continues for the remainder of the life of the system.



FIGURE 18.4.1 Traditional model of the software life-cycle process showing its five major phases.

PROGRAMMING

When a stored-program digital computer operates, its storage contains two types of information: the data being processed and the program instructions controlling its operations. Both types of information are stored in binary form. The control unit accesses storage to acquire instructions; the ALU makes reference to storage to gain access to data and to modify it. The set of instructions describing the various operations the computer is designed to execute is referred to as a machine language, and the act of constructing programs using the appropriate sequences of these computer instructions is called machine-language programming.

It is possible but expensive to write programs directly in machine language, and maintenance and modification of such programs are virtually impossible. Programming languages have been created to make code more accessible to its writers. A programming language consists of two major parts: the language itself and a translator. The language is described by a set of symbols (the alphabet) and a grammar that tells how to assemble the symbols into correct strings. The translator is a machine-language program whose main function is to translate a program written in the programming language (the source code) into machine language (object code) that can be executed in the computer.

Before describing some of the major programming languages currently in use, we consider two important programming concepts, alternation and iteration, and also see by example some of the difficulties associated with machine-language programming.

ALTERNATION AND ITERATION

These techniques are illustrated here using a computer whose storage consists of 10,000 words, each containing 4 bytes numbered 1 to 4. The instruction format is

Byte:       1          2      3-4
Contents:   op code    0      address

Op code   Name    Description
01        LOAD    Loads value from addressed word into data register
02        COMP    Compares value of addressed word with data-register value
03        ADD     Adds value of addressed word to data register
08        STORE   Copies value of data register into addressed storage word
20        BRLO    Branches if data-register value from last previously executed COMP was less than comparand


The computer used in this simplified example contains a separate, nonaddressable data register that holds one word of data. Further, each instruction is accessed at an address 1 greater than that of the previously executed instruction, unless that instruction was a BRLO instruction with a low COMP condition, in which case the address part of the BRLO instruction gives the address at which the next instruction is to be accessed. Consider the following program instructions (beginning at address 0100) to select the lower value of two items (in words 0950 and 0951) and place the selected value in a specific place (word 0800):

Address   Instruction   Effect
0100      01000950      Place first-item value in data register
0101      02000951      Compare second-item value with data-register value
0102      20000104      Branch to next instruction at address 0104 if data-register value was lower
0103      01000951      Place second-item value in data register
0104      08000800      Store lower value in result (word 0800)
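The semantics of this small machine can be captured in a few dozen lines of a high-level language. The following C sketch is purely illustrative: the simulator, its names such as mem and reg, and the convention that a zero word halts execution are inventions for this discussion, not part of the machine described above. It executes the lower-of-two program just listed.

    #include <stdio.h>

    /* Illustrative simulator for the hypothetical 5-instruction machine.
       Each instruction is an 8-digit decimal number: a 2-digit op code,
       a zero byte, and a 4-digit address. */
    enum op { LOAD = 1, COMP = 2, ADD = 3, STORE = 8, BRLO = 20 };

    int main(void) {
        static long mem[10000];          /* the 10,000-word storage */
        long reg = 0;                    /* the nonaddressable data register */
        int  low = 0;                    /* condition set by the last COMP */
        int  pc  = 100;                  /* execution begins at address 0100 */

        mem[950] = 7;  mem[951] = 4;     /* the two items to be compared */
        mem[100] = 1000950L;             /* LOAD  0950 */
        mem[101] = 2000951L;             /* COMP  0951 */
        mem[102] = 20000104L;            /* BRLO  0104 */
        mem[103] = 1000951L;             /* LOAD  0951 */
        mem[104] = 8000800L;             /* STORE 0800 */
        mem[105] = 0L;                   /* a zero word halts this simulator */

        while (mem[pc] != 0) {
            int op   = (int)(mem[pc] / 1000000L);  /* 2-digit op code */
            int addr = (int)(mem[pc] % 10000L);    /* 4-digit address */
            int next = pc + 1;                     /* normal sequencing */
            switch (op) {
            case LOAD:  reg = mem[addr];          break;
            case COMP:  low = (reg < mem[addr]);  break;
            case ADD:   reg += mem[addr];         break;
            case STORE: mem[addr] = reg;          break;
            case BRLO:  if (low) next = addr;     break;
            }
            pc = next;
        }
        printf("lower value = %ld\n", mem[800]);
        return 0;
    }

Running the sketch prints "lower value = 4," matching a hand trace of the five instructions above.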

FLOWCHARTS

One way to depict the logical structure of a program graphically is by the use of flowcharts. Flowcharts are limited in what they can convey about a computer program, and with the advent of modern program design languages they are becoming less widely used. However, they are used here to portray these simple programs graphically. The program of the preceding example is depicted by the flowchart shown in Fig. 18.4.2. The flowchart contains boxes representing processes (rectangular boxes) and decisions (alternations; diamond-shaped boxes). The arrows connecting the boxes represent the paths and sequences of instruction execution.

An alternation represents an instruction (or a sequence of instructions) with more than one possible successor, depending on the result of some processing test (commonly a conditional branch). In the example, instruction 0103 is or is not executed depending on the values of the two items. If the example is extended to require finding the least of four item values, the flowchart is that shown in Fig. 18.4.3. If the example is further extended to find the largest value of 1000 items (in locations 0336 through 0790 inclusive in hexadecimal), the flowchart and the corresponding program become very large if analogous extensions of the flowcharts are used. The alternative is to use the technique known as the program loop. A program loop for this latter example is:

FIGURE 18.4.2 Flowchart of a simple program. The boxes represent processes; the diamonds represent decisions.


Address   Instruction   Effect
0100      01000950      Move first item as initial value of result (ITEMHI)
0101      08000800
0102      01000900      Initialize loop to begin with item 2
0103      08000104
0104      (00000000)    Loop: Nth item to data register
0105      02000800      Compare with prior ITEMHI value
0106      20000108      Branch to 0108 if Nth item value low
0107      08000800      Store Nth item value as ITEMHI
0108      01000104      Increment value of N by 1
0109      03000901
010A      08000104
010B      02000902      Compare against N = 1001
010C      20000104      Branch for looping if N < 1001
010D      end
0900      01000951      Load item 2; initial instruction
0901      00000001      Address increment of 1
0902      010003E9      Limit test; load 1001st item

FIGURE 18.4.3 Flowchart of a repetitive task.

FIGURE 18.4.4 Flowchart showing a program loop.


The corresponding flowchart appears in Fig. 18.4.4. The loop proper (instructions 0104 to 010C) is executed 999 times. The instruction at 0104 accesses the Nth item and is indexed each time the program flows through the loop, so that on successive executions successive words of the item table are obtained. After each loop execution, a test is made to determine whether processing is complete or a branch should be made back to the beginning of the loop to repeat it. The loop proper is preceded by several instructions that initialize the loop, presetting ITEMHI and the instruction 0104 value for the first time through. A loop customarily has a process part, an induction part that makes changes for the next loop iteration, and an exit test (or termination test) that determines whether an additional iteration is required.
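In a high-level language the same loop discipline appears explicitly rather than through self-modified addresses. A minimal C sketch of the 1000-item maximum search (the items are assumed to sit in an ordinary array, and the sample values are invented), with the process, induction, and exit-test parts marked in comments:

    #include <stdio.h>

    #define NITEMS 1000

    int main(void) {
        long item[NITEMS];
        long itemhi;
        int n;

        for (n = 0; n < NITEMS; n++)       /* fill with sample data */
            item[n] = (n * 37L) % 1000L;

        itemhi = item[0];                  /* initialization: first item is ITEMHI */
        for (n = 1; n < NITEMS; n++) {     /* induction and exit test in the for header */
            if (item[n] > itemhi)          /* process part: keep the larger value */
                itemhi = item[n];
        }
        printf("largest item = %ld\n", itemhi);
        return 0;
    }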

ASSEMBLY LANGUAGES

The previous example illustrates the difficulty of preparing and understanding even simple machine-language programs. One help is the ability to use a symbolic (or mnemonic) representation of the operations and addresses used in the program. The actual translation of these symbols into specific computer operations and addresses is a more or less routine clerical procedure. Since computers are well suited to performing such routine operations, it was quite natural that the first automatic programming aids, assembly languages and their associated assembly programs, were developed to take advantage of that fact. Assembly languages permit the critical addressing interrelations in a program to be described independently of the storage arrangement, and the assembler can produce therefrom a set of machine instructions suitable for the specific storage layout of the computer in use.

An assembly-language program for the 1000-value program of Fig. 18.4.4 is shown in Fig. 18.4.5. The program format illustrated is typical. Each line has four parts: location, operation, operand(s), and comments.

FIGURE 18.4.5 An assembly program. The program statements are in a one-to-one correspondence with machine instructions; hence the procedure is fully supplied by the programmer in terms of the particular machine-instruction set of the system. The assembly language relieves the programmer of housekeeping chores, such as specific address assignments, and makes possible user-oriented symbols in place of numeric or binary codes.


The location part permits the programmer to specify a symbolic name to be associated with the address of the instruction (or datum) defined on that line. The operation part contains a mnemonic designation of the instruction operation code. Alternatively, the line may be designated to be a datum constant, a reservation of data space, or a designation of an assembly pseudo operation (a specification to control the assembly process itself). Pseudo operations in the example are ORG for origin and END to designate the end of the program. The operand field(s) give the additional information needed to specify the machine instruction, e.g., the name of a constant, the size of the data reservation, or a name associated with a pseudo operation. The comment part serves for documentation only; it does not affect the assembly-program operation.

After a program is written in assembly language, it is processed by an assembler. The assembly program reads the symbolic assembly-language input and produces (1) a machine-instruction program with constants, usually in a form convenient for subsequent program loading, and (2) an assembly listing that shows in typed or printed form each line of the symbolic assembly-language input, together with any associated machine instructions or constants produced therefrom. The assembly pseudo operation ORG specifies that the instructions and/or constant entries for succeeding lines are to be prepared for loading at successive addresses, beginning at the specified load origin (the value of the operand field of the ORG entry). Thus the 13 symbolic instructions following the initial ORG line in Fig. 18.4.5 are prepared for loading at addresses 0100 through 010C inclusive, with the following symbolic associations established:

Location symbol   (Local) address
START             0100
LOOP ST           0104
LOOP INC          0108

Four instructions of this group of 13 contain the symbol LOOP ST in the operand field, and the corresponding machine instructions will contain 0104 in their address parts. The operation of a typical assembly program therefore consists of (1) collecting all location symbols and determining their values (addresses), called building the symbol table, and (2) building the machine instructions and/or constants by substituting op codes for the OP mnemonics and location-symbol values for their positions in the operand field. The symbol table must be formed first since, as the first instruction in the example shows, a machine instruction may refer to a location symbol that appears in the location field near the program end. Thus most assembly programs process the program twice: the first pass builds the symbol table, and the second pass builds the machine-language program. Note in the example the use of the operation RESRV to reserve space (skipping in the load-address sequence) for variable data.

Assembly language is specific to a particular computer instruction repertoire. Hence, the basic unit of assembly language describes a single machine instruction (the so-called one-for-one assembly process). Most assembly languages also have a macroinstruction facility, which permits the programmer to define macros that can generate desired sequences of assembly-language statements to perform specific functions. These macro definitions can be placed in macro libraries, where they are available to all programmers in the facility.

The term procedure (also subroutine and subprogram) refers to a group of instructions that perform some particular function used repeatedly in essentially the same context. The quantities that vary between contexts may be regarded as parameters (or arguments) of the procedure. The method of adaptation of the procedure determines whether it is an open or a closed procedure. An open subroutine is adapted to its parameter values during code preparation (assembly or compilation), in advance of execution, and a separate copy of the subroutine code is made for each different execution context. A closed subroutine is written to adapt itself during execution to its parameter values; hence, a single copy suffices for several execution contexts in the same program. The open subroutine executes faster, since tailoring to its parameter values occurs before execution begins. The closed subroutine not only saves storage space, since one copy serves multiple uses, but is more flexible, in that parameter values derived from the execution itself can be used.

A closed subroutine must be written to determine its parameter values in a standard way (including the return point after completion). The conventions for finding the values and/or addresses of values are called the subroutine linkage conventions. Quite commonly, a single address is placed in a particular register, and this address in turn points to a consecutively addressed list of addresses and/or values to be used. Subroutines commonly use (or call) other closed subroutines, so that there are usually a number of levels of subroutine control available at any point during execution. That is, one routine is currently executing, and others are waiting at various points in a partially executed condition.


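The open/closed distinction maps naturally onto two familiar C constructs: a preprocessor macro is copied into every use site before execution begins (an open subroutine), while a function is a single shared body reached through the call linkage (a closed subroutine). A brief illustrative sketch, with invented names:

    #include <stdio.h>

    /* Open subroutine: the preprocessor copies this body into each use
       site, adapting it to its arguments before execution begins. */
    #define SQUARE_OPEN(x) ((x) * (x))

    /* Closed subroutine: one copy of the code; the argument is bound at
       run time through the function-call linkage, and control returns to
       the point of call when the body completes. */
    static long square_closed(long x) {
        return x * x;
    }

    int main(void) {
        long a = 12;
        printf("open:   %ld\n", (long)SQUARE_OPEN(a));
        printf("closed: %ld\n", square_closed(a));
        return 0;
    }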

HIGH-LEVEL PROGRAMMING LANGUAGES

On general-purpose digital computers, high-level programming languages have largely superseded assembly languages as the predominant method of describing application programs. Such programming languages are said to be high-level and machine-independent. High-level means that each program function is such that several or many machine instructions must be executed to perform that function. Machine-independent means that the language functions are intended to be applicable to a wide range of machine-instruction repertoires, with the translator producing for each machine a specific machine representation of the program and its data. The high-level-language translator is known as a compiler, i.e., a program that converts an input program written in a particular high-level language (the source program) into the machine language of a particular machine type (the object program) once, in advance of execution. (A translator that instead processes the source program each time it is run is called an interpreter.)

HIGH-LEVEL PROCEDURAL LANGUAGES

Most high-level programming languages are said to be procedural. The programmer writing in a high-level procedural language thinks in terms of the precise sequence of operations, and the program description is in terms of sequentially executed procedural statements. Most high-level procedural languages have statements for documentation, procedural execution, data declaration, and various compiler and execution control specifications.

The program in Fig. 18.4.6, written in the FORTRAN high-level language, describes the program function given in assembly language in Fig. 18.4.5. The first six lines are for documentation only, as indicated by the C in the first column. The DIMENSION statement defines ITEM to consist of 1000 values. The assignment statement ITEMHI = ITEM(1) is read as "set the value of ITEMHI to the value of the first ITEM." The next statement is a loop-control statement meaning "do the following statements, through the statement labeled 1, for the variable N assuming every value from 2 through 1000." The statement labeled 1 causes a test to be made to see whether the Nth ITEM is greater than (.GT.) the value of ITEMHI and, if so, to set the ITEMHI value equal to the value of the Nth item.

FIGURE 18.4.6 An example of a FORTRAN program, corresponding to the flowchart of Fig. 18.4.4 and assembly program of Fig. 18.4.5.


FORTRAN

The high-level programming languages most commonly used in engineering and scientific computation are C++, FORTRAN, ALGOL, BASIC, APL, PL/I, and PASCAL. FORTRAN, the first to appear, was developed during 1954 to 1957 by a group headed by Backus at IBM. Based on algebraic notation, it allows two types of numbers: integers (positive and negative) and floating point. Variables are given character names of up to six positions. All variables beginning with the letters I, J, K, L, M, or N are integers; otherwise they are floating point. Integer constants are written in normal fashion: 1, 0, -4, and so on. Floating-point constants must contain a decimal point: 3.1, -0.1, 2.0, 0.0, and so on. A floating-point constant may also be written with a decimal exponent; for example, 6.02 × 10^24 is written 6.02E24. This standard notation was adopted to accommodate the limited capability of computer input-output equipment.

READ and WRITE statements permit values of variables to be read into or written from the ALU, from or to input, output, or intermediate storage devices. The latter may operate merely by transcribing values or may be accompanied by conversions or editing specified in a separate FORMAT statement. Some idea of the range of operations provided in FORTRAN is shown by the following value-assignment statement:

ROOT = (-(B/2.0) + SQRT((B/2.0)**2 - A*C))/A

This is the formula for a root of a quadratic equation with coefficients A, B, and C. The asterisk indicates multiplication, / stands for division, and ** for exponentiation.

The notation name(expression), name(expression, expression), and so forth, is used in FORTRAN with two distinct meanings, depending on whether or not the specific name appears in a DIMENSION statement. If so, the expressions are subscript values; otherwise the name is considered to be a function name, and the expressions are the values of the arguments of the function. SQRT((B/2.0)**2 - A*C) in the preceding assignment statement requires the expression (B/2.0)**2 - A*C to be evaluated, and then the function (here the square root) of that value is determined. Square root and various other common trigonometric and logarithmic functions and their respective inverses are standardized in FORTRAN, typically as closed subroutines.

The same notation may be employed for a function defined by a FORTRAN programmer in the FORTRAN language. Such a function is defined by writing a separate FORTRAN program headed by the statement FUNCTION name (arg 1, arg 2, etc.), where each arg represents the name that stands for the actual argument value at each evaluation of that function. Similarly, any action or set of actions described by a closed FORTRAN subroutine is called for by "CALL subroutine name (args)," together with a defining FORTRAN subroutine headed by "SUBROUTINE subroutine name (args)."
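The value-assignment statement above carries over almost directly to other languages. As a point of comparison, a minimal C rendering (illustrative only; the coefficient values are invented), with the library function sqrt standing in for SQRT and explicit multiplication for the ** operator:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* One root of a*x*x + b*x + c = 0, written exactly as in the
           FORTRAN statement: (-(B/2.0) + SQRT((B/2.0)**2 - A*C))/A. */
        double a = 1.0, b = -5.0, c = 6.0;
        double root = (-(b / 2.0) + sqrt((b / 2.0) * (b / 2.0) - a * c)) / a;
        printf("root = %g\n", root);   /* prints 3 for x*x - 5*x + 6 */
        return 0;
    }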

BASIC

BASIC is a high-level programming language based on algebraic notation that was developed for solving problems at a terminal; it is particularly suitable for short programs and instructional purposes. The user normally remains at the terminal after entering a program in BASIC while it compiles, executes, and types the output, a process that typically requires only a few seconds. Widely used on PCs, BASIC is usually bundled with PC operating systems. It is available in both interpretive and compiled versions and may include an extensive set of programming capabilities. Visual Basic is currently used on display-oriented operating systems such as Windows.

APL

APL (A Programming Language) is a high-level language that is often used because it is easy to learn and has an excellent interactive programming system supporting it. Its primitive objects are arrays (lists, tables, and so forth). It has a simple syntax, and its semantic rules are few.


The usefulness of the primitive functions is further enhanced by operators that modify their behavior in a systematic manner. The sequence control is simple because one statement type embraces all types of branches, and the termination of the execution of any function always returns control to the point of use. External communication is established by variables shared between APL and other systems.

PASCAL

An early high-level programming language is PASCAL, developed by Niklaus Wirth. It has had widespread acceptance and use since its introduction in the early 1970s. The language was developed for two specific purposes: (1) to make available a language with which to teach programming as a systematic discipline and (2) to develop a language that supports reliable and efficient implementations.

PASCAL provides a rich set of both control statements and data-structuring facilities. Six control statements are provided: BEGIN-END, IF-THEN-ELSE, WHILE-DO, REPEAT-UNTIL, FOR-DO, and CASE-END. Similar control statements can be found in virtually all high-level languages. In addition to the standard scalar data types, PASCAL provides the ability to extend the language via user-defined scalar data types. In the area of higher-level structured data types, PASCAL extends the array facility of ALGOL 60 to include the record, set, file, and pointer data types. In addition, PASCAL contains a number of other features that make it useful for programming and teaching purposes. In spite of this richness, PASCAL is a systematic language and modest in size, attributes that account for its popularity.

ADA PROGRAMMING LANGUAGE

(Ada is a registered trademark of the Department of Defense.) Ada is named after Lord Byron's daughter, Ada Lovelace. The language was developed by the U.S. Department of Defense to be a single successor to a number of high-level languages in use by the armed forces of the United States; it was finalized in 1980. Ada was designed to be a strongly typed language, incorporating features from modern programming-language theory and software-engineering practice. It is a block-structured language providing mechanisms for data abstraction and modularization. It supports concurrent processing and provides user control over scheduling and interrupt handling.

C PROGRAMMING LANGUAGE

Research continues on new languages that support the concepts growing out of modern software technology. One such language is the C programming language (a registered trademark of AT&T). C is a general-purpose programming language designed to feature modern control flow and data structures and a rich set of operators, yet provide an economy of expression. Although it was not specialized for any one area of application, it has been found especially useful for implementing operating systems and is widely used in communications and other areas. C++ is a later, object-oriented extension of the language. Java, a later language derived from the C family, was developed by Sun Microsystems.
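A short fragment illustrates the economy of expression and the combination of control flow with data structuring mentioned above; this is an invented example, not drawn from the handbook's figures:

    #include <stdio.h>

    /* A self-referential structure and a pointer-based traversal: the kind
       of data structuring and terse operator usage characteristic of C. */
    struct node {
        const char *name;
        struct node *next;
    };

    static int list_length(const struct node *p) {
        int n = 0;
        for (; p != NULL; p = p->next)   /* pointer walk in one for header */
            n++;
        return n;
    }

    int main(void) {
        struct node c = { "object code", NULL };
        struct node b = { "assembler",   &c   };
        struct node a = { "source code", &b   };
        printf("%d nodes, first is \"%s\"\n", list_length(&a), a.name);
        return 0;
    }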

OBJECT-ORIENTED PROGRAMMING LANGUAGES

A second area of programming-language development has been the creation of object-oriented languages. These are used for message-object programming, which incorporates the concept of objects that communicate by messages. An object includes data, a set of procedures (methods) that operate on the data, and a mechanism for translating messages.


These languages should contribute to improved reusability of software, a highly sought-after goal, increasing programming productivity and reducing software costs. They are based on the concept that objects, once defined, are thenceforth available for reuse without reprogramming. Programs can then be viewed as mechanisms that employ the appropriate object at the appropriate time to accomplish the task at hand.

COBOL AND RPG

High-level programming languages used for business data-processing applications emphasize the description and handling of files for business record keeping. Two widely used programming languages for business applications are COBOL (common business-oriented language) and RPG (report program generator). Compilers for these languages, together with generalized sorting programs, form the fundamental automatic programming aids of many computer installations used primarily for business data processing. COBOL and RPG have comparable file, record, and field-within-record descriptive capabilities, but their specifications of processing and sequence control derive from basically different concepts.

OPERATING SYSTEMS

There are many reasons for developing and using an operating system for a digital computer. One of the main reasons is to optimize the scheduling and use of computer resources, so as to increase the number of jobs that can be run in a given period. Creation of a multiprogramming environment means that the resources and facilities of the computing system can be shared by a number of different programs, each written as if it were the only program in the system.

Another major objective of an operating system is to provide the full capability of the computing system to the user while minimizing the complexity and the depth of knowledge of the computer system required. This is accomplished by establishing standard techniques for handling system functions, such as program calling and data management, and by providing a convenient and effective interface to the user. In effect, the user is able to deal with the operating system as an entity rather than having to deal with each of the computer's features. As indicated in Fig. 18.4.7, each user can think conceptually in terms of a unit consisting of both the hardware and the programs and procedures that make up the operating system.

FIGURE 18.4.7 The user's view of the operating system as an extension of the computing system yet an integral part of it.

GENERAL ORGANIZATION OF AN OPERATING SYSTEM

There are many ways to structure operating systems, but for the purpose of this discussion the organization shown in Fig. 18.4.8 is typical. The operating system is composed of two major sets of programs: control (or supervisory) programs and processing programs. Control programs supervise the execution of the support programs (including the user application programs); control the location, storage, and retrieval of data; handle interrupts; and schedule jobs and the resources needed in processing. Processing programs consist of language translators, service programs, and user-written application programs, all of which are used by the programmer in support of program development.


FIGURE 18.4.8 A typical operating system and its constituent parts.

The work to be processed by the computer can be viewed as a stack of jobs to be run under the management of the control program. A job is a unit of computational work that is independent of all other jobs concurrently in the system. A single job may consist of one or a number of steps.

TYPES OF OPERATING SYSTEMS

There are several basic types of operating systems, including multiprogramming, time-sharing, real-time, and multiprocessing systems. A multiprocessing system must schedule and control the execution of jobs that are distributed across two or more coupled processors. These processors may share a common storage, in which case they are said to be tightly (or directly) coupled, or they may have their own private storage and communicate by other means, such as sending messages over networks, in which case they are said to be loosely coupled.

Operating systems were generally developed for a specific CPU architecture or for a family of CPUs. For example, MS-DOS and Windows apply to x86-based systems. However, one operating system, the UNIX system (a registered trademark of AT&T), has been transported to a number of different manufacturers' systems and is in very wide use today. UNIX was developed as a unified, interactive, multiuser system. It consists of a kernel that schedules tasks and manages data, a shell that executes user commands (one at a time or in a series called a pipe), and a series of utility programs.
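The shell's pipe facility can also be reached from a program. Assuming a POSIX system, the standard library call popen hands a command line to the shell and returns its output as a stream; the following C sketch (the particular command is an arbitrary example) runs a two-stage pipe and reads the result:

    #include <stdio.h>

    int main(void) {
        /* Run a two-command pipe, as the UNIX shell would, and read its
           output. popen() is the standard C interface on POSIX systems. */
        FILE *fp = popen("ls | wc -l", "r");
        char line[128];

        if (fp == NULL) {
            perror("popen");
            return 1;
        }
        while (fgets(line, sizeof line, fp) != NULL)
            fputs(line, stdout);        /* the count of directory entries */
        pclose(fp);
        return 0;
    }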

TASK-MANAGEMENT FUNCTION

This function, sometimes called the supervisor, controls the operation of the system as it executes units of work known as tasks or processes. (The performance of a task is requested by a job step.) The distinction between a task and a program should be noted: a program is a static entity, a sequence of instructions, while a task is a dynamic entity, the work done in execution of the program. Task management initiates and controls the execution of tasks.


If necessary, it controls their synchronization. It allocates system resources to tasks and monitors their use; in particular, it is concerned with the dynamic allocation and control of main storage space. Task management handles all interrupts occurring in the computer, which can arise from five different sources: (1) supervisor-call interrupts occur when a task needs a service from the task-management function, such as initiating an I/O operation; (2) program interrupts occur when unusual conditions are encountered in the execution of a task; (3) I/O interrupts indicate that an I/O operation is complete or that some unusual condition has occurred; (4) machine-check interrupts are initiated by the detection of hardware errors; and (5) external interrupts are initiated by the timer, by the operator's console, or by other external devices.

DATA MANAGEMENT

This function provides the I/O control-system services needed by the operating system and the user application programs. It frees the programmer from the tedious and error-prone details of I/O programming and permits standardization of these services. It constructs and maintains various file-organization structures, including the construction and use of index tables. It allocates space on disc (auxiliary) storage. It maintains a directory showing the locations of data sets (files) within the system. It also protects data sets against unauthorized access.

OPERATING SYSTEM SECURITY

One of the major concerns in the design of operating systems is to make certain that they are reliable and that they provide for the protection and integrity of the data and programs stored within the system. Work is under way to develop secure operating systems. These systems use the concept of a security kernel, a minimal set of operating-system programs that are formally specified and designed so that they can be proved to implement the desired security policy correctly. This assures the correctness of all access-controlling operations throughout the system.

SOFTWARE-DEVELOPMENT SUPPORT

There have been great strides in software-engineering technology. Out of the research and development efforts in universities, industry, and government have emerged a number of significant ideas and concepts that can have a major and long-lasting influence on the way software is developed and managed. These concepts are just now starting to find their way into the software-development process but should become more widely used in the future. They are briefly reviewed below.

REQUIREMENTS AND SPECIFICATIONS

This has been one of the problem areas through the years. Analysis and design errors are by far the most costly and crucial types of errors, and a number of attempts are being made to develop methods for recording and analyzing software requirements and developing specifications. Most requirements and specifications documents are still recorded in English narrative form, which introduces problems of inconsistency, ambiguity, and incompleteness. These problems are addressed with CASE tools and structured programming.

SOFTWARE DESIGN

The work of Dijkstra, Hoare, and Mills (1968, 1976) had a major influence on software-design methodology by introducing a number of concepts that led to the development of structured programming. Structured programming is a methodology based on mathematical rigor.


FIGURE 18.4.9 Basic set of structured control primitives.

Structured programming uses the concept of top-down design and implementation: the program is described at a high level of abstraction, and this abstraction is then expanded (refined) into more detailed representations through a series of steps until sufficient detail is present for implementation of the design in a programming language. This process is called stepwise refinement. The design is represented by a small, finite set of primitives, such as those shown in Fig. 18.4.9. These three primitives are adequate for design purposes, but for convenience several others have been introduced, namely, the indexed alternation (case), the do-until, and the indexed sequence (for-do) structure.

It is also recognized that the organization and representation of software systems are clearer if certain data and the operations permitted on those data are organized into data abstractions, in which the internal details of the organization of the data are hidden from the user. The result of applying this methodology is the organization of a sequential software process into a hierarchical structure through the use of stepwise refinement. The software-system structure is then defined at three levels: the system, the module, and the procedure. The system (or job) describes the highest level of program execution. The system is decomposed into modules. A module is composed of one or more procedures and data that persist between successive invocations of the module. The procedure is the lowest level of system decomposition, the executable unit of the stored program.

Another important aspect of the design process is its documentation. There is a critical need to record the design as it is being developed, from its highest level all the way to its lowest level of detail, before its implementation in a programming language, in a form that can communicate software designs in rigorous logical terms not only between specialists in software development but also between specialists and nonspecialists.

Important elements of the software-development process are reviews, walk-throughs, and inspections, which can be applied to the products of the software-development process to ensure that they are complete, accurate, and consistent. They are applied to such areas as the design (design inspections), the software source code (code inspections), documentation, test designs, and test-results analysis. The goals of the software review and inspection process are to ensure that standards are met, to check on the quality of the product, and to detect and correct errors at the earliest possible point in the software life cycle. Another important value of the review process is that it permits progress against development-plan milestones to be measured more objectively and rework for error correction to be monitored more closely.
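The three basic primitives correspond one-for-one to the statement forms of a structured language. A small C fragment, invented for illustration, shows sequence, alternation (if-then-else), and iteration (while-do) composing a complete, single-entry, single-exit process:

    #include <stdio.h>

    int main(void) {
        int total = 0;
        int n = 1;

        /* Sequence: statements executed strictly one after another. */
        printf("summing the integers 1 through 10\n");

        /* Iteration (while-do): repeat the process part while the test holds. */
        while (n <= 10) {
            total += n;   /* process part */
            n++;          /* induction toward the exit test */
        }

        /* Alternation (if-then-else): exactly one of two successors is taken. */
        if (total == 55)
            printf("total = %d, as expected\n", total);
        else
            printf("total = %d, unexpected\n", total);

        return 0;
    }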


TESTING

An important activity in the software-development cycle that is often ignored until too late in the process is testing. It is important to note what testing can and cannot do for the software product: quality cannot be tested into the software; it must be designed into it. Test planning begins with the requirements analysis at the beginning of the project. Requirements should be testable; i.e., they should be stated in a form that permits the final product to be tested to assure that it satisfies the requirements. Test planning and test designs should be developed in parallel with the design of the software.

EXPERT SYSTEMS

One important area of research in computer science has been artificial intelligence. The most successful application of artificial-intelligence techniques has been the development of expert systems, or knowledge-based systems, as they are often called. These are human-machine interactive systems with specialized problem-solving expertise that are used to solve complex problems in such specific areas as medicine, chemistry, mathematics, and engineering. This expertise consists of knowledge about the particular problem domain that the expert system is designed to support (e.g., diagnosis and therapy for infectious diseases) and of planning and problem-solving rules for the processes used to identify and solve the particular problem at hand.

The two main elements of an expert system are its knowledge base, which contains the domain knowledge for the problem area being addressed, and its inference engine, which contains the general problem-solving knowledge. A key task in constructing an expert system is knowledge acquisition: the extraction and formulation of knowledge (facts, concepts, rules) from existing sources, with special attention paid to the experience of experts in the particular problem domain being addressed.
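The split between a knowledge base and an inference engine can be made concrete with a toy forward-chaining sketch in C. The two rules and the starting fact are invented for the example; a real system would hold thousands of rules and a far richer matching scheme:

    #include <stdio.h>
    #include <string.h>

    #define MAXFACTS 16

    /* Knowledge base: IF-premise-THEN-conclusion rules for a toy domain. */
    struct rule { const char *premise, *conclusion; };

    static const struct rule kb[] = {
        { "fever",     "infection" },
        { "infection", "treatable" },
    };

    static const char *facts[MAXFACTS] = { "fever" };   /* initial fact */
    static int nfacts = 1;

    static int known(const char *f) {
        for (int i = 0; i < nfacts; i++)
            if (strcmp(facts[i], f) == 0) return 1;
        return 0;
    }

    int main(void) {
        /* Inference engine: forward chaining; fire rules until quiescent. */
        int changed = 1;
        while (changed) {
            changed = 0;
            for (size_t i = 0; i < sizeof kb / sizeof kb[0]; i++) {
                if (known(kb[i].premise) && !known(kb[i].conclusion)
                        && nfacts < MAXFACTS) {
                    facts[nfacts++] = kb[i].conclusion;
                    printf("derived: %s\n", kb[i].conclusion);
                    changed = 1;
                }
            }
        }
        return 0;
    }

Given the starting fact "fever," the engine derives "infection" and then "treatable," illustrating how the same engine would serve any rule set placed in the knowledge base.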


CHAPTER 18.5

DATABASE TECHNOLOGY

DATABASE OVERVIEW

Around 1964 a new term appeared in the computer literature to denote a new concept. The term was "database," and it was coined by workers in military information systems to denote collections of data shared by end users of time-sharing computer systems. The commercial data-processing world at the time was in the throes of "integrated data processing" and quickly appropriated "database" to denote the data collection that results from consolidating the data requirements of individual applications. Since that time, the term and the concept have become firmly entrenched in the computer world. Today, computer applications in which users access a database are called database applications. The database management system, or DBMS, has evolved to facilitate the development of database applications. The development of the DBMS, in turn, has given rise to new languages, algorithms, and software techniques, which together make up what might be called database technology. An overview of a typical DBMS is shown in Fig. 18.5.1.

Traditional data-processing applications used master files to maintain continuity between program runs. Master files "belonged to" applications, and the master files within an enterprise were often designed and maintained independently of one another. As a result, common data items often appeared in different master files, and the values of such items often did not agree. There was thus a requirement to consolidate the various master files into a single database, which could be centrally maintained and shared among various applications. Data consolidation was also required for the development of certain types of "management information" applications that were not feasible with fragmented master files. There was a further requirement to raise the level of the languages used to specify application procedures and to provide software for automatically transforming high-level specifications into equivalent low-level specifications; in the database context, this property of languages has come to be known as data independence. Finally, the consolidation of master files into databases had the undesirable side effect of increasing the potential for data loss and unauthorized data use, so the requirement for data consolidation carried with it a requirement for tools and techniques to control the use of databases and to protect against their loss.

A DBMS is characterized by its data-structure class, that is, the class of data structures that it makes available to users for the formulation of applications. Most DBMSs distinguish between structure instances and structure types, the latter being abstractions of sets of structure instances. A DBMS also provides an implementation of its data-structure class, which is conceptually a mapping of the structures of the class into the structures of a lower-level class. The structures of the former class are often referred to as logical structures, whereas those of the latter are called physical structures.

The data-structure classes of early systems were derived from punched-card technology and thus tended to be quite simple. A typical class was composed of files of records of a single type, with the record type being defined by an ordered set of fixed-length fields. Because of their regularity, such files are now referred to as flat files. Database technology has produced a variety of improved data-structuring methods, many of which have been embodied in DBMSs.
Although many specific data-structure classes have been produced (essentially one class per system), these classes have tended to cluster into a small number of families, the most important of which are the hierarchic, the network, the relational, and the semantic families. These families have evolved more or less in the order indicated, and all are represented in the data-structure classes of present-day DBMSs.


FIGURE 18.5.1 Overview of a typical database management system (DBMS).

Large databases in distributed computing systems are sometimes referred to by the terms data repository or data warehouse.

HIERARCHIC DATA STRUCTURES

The hierarchic data-structuring methods that began to appear in the early 1960s provided some relief for the entity-association problem. These methods were developed primarily to accommodate the variability that frequently occurs in the records of a file. For example, in the popular two-level hierarchic method, a record was divided into a header segment and a variable number of trailer segments of one or more types. The header segment represented attributes common to all entities of a set, while the trailer segments were used for the variably occurring attributes. The method was also capable of representing one-many associations between two sets of entities, by representing one set as header segments and the other as trailers, and thus provided a primitive tool for data consolidation. This two-level approach was later expanded to n-level structures. These structures have also been implemented extensively on direct-access storage devices (DASD), which afford numerous additional representation possibilities.

IMS was one of the first commercial systems to offer hierarchic data structuring and is often cited to illustrate the hierarchic structuring concept. The IMS equivalent of a file is the physical database, which consists of a set of hierarchically structured records of a single type. A record type is composed according to the following rules: the record type has a single root segment type; the root segment type may have any number of child segment types; each child of the root may also have any number of child segment types; and so on.
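The header/trailer arrangement is easy to picture in terms of ordinary C structures, with a fixed root (header) segment owning a chain of variably occurring trailer segments. The field names below are hypothetical, invented for illustration:

    #include <stdio.h>

    /* Trailer (child) segment: one variably occurring attribute group,
       chained so that a record can carry any number of them. */
    struct trailer {
        char attribute[24];
        struct trailer *next;
    };

    /* Header (root) segment: attributes common to every entity in the
       set, plus the chain of its trailers. */
    struct header {
        char key[12];
        struct trailer *trailers;
    };

    int main(void) {
        struct trailer t2 = { "degree: MSEE", NULL };
        struct trailer t1 = { "degree: BSEE", &t2  };
        struct header  h  = { "EMP-0042", &t1 };

        printf("record %s:\n", h.key);
        for (struct trailer *p = h.trailers; p != NULL; p = p->next)
            printf("  %s\n", p->attribute);
        return 0;
    }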

NETWORK DATA STRUCTURES

The first network structuring method to be developed for commercial data processing had its origins in the bill-of-materials application, which requires the representation of many-many associations between a set of parts and itself; e.g., a given part may simultaneously act as an assembly of other parts and as a component of other parts.


This concept was used as the basis of a database language developed by the Database Task Group (DBTG) of CODASYL in the late 1960s and early 1970s. This language introduced some new terminology and generalized some features, and it has been implemented in a number of DBMSs.

RELATIONAL DATA STRUCTURES

In the mid-1960s, a number of investigators began to grow dissatisfied with the hardware orientation of the then-extant data-structuring methods and, in particular, with the manner in which pointers and similar devices for implementing entity associations were being exposed to the users. These investigators sought a way of raising the perceived level of data structures and at the same time bringing them closer to the way in which people look at information. Their efforts resulted in an entity-set structuring method, wherein information is represented in a set of tables, with each table corresponding to a set of entities of a single type. The rows of a table correspond to the entities in the set, and the columns correspond to the attributes that characterize the entity-set type.

Tables can also be used to represent associations among entities. In this case, each row corresponds to an association, and the columns correspond to entity identifiers, i.e., entity attributes that can be used to uniquely identify entities. Additional columns may be used to record attributes of the association itself (as opposed to attributes of the associated entities). The key new concepts in the entity-set method were the simplicity of the structures it provided and the use of entity identifiers (rather than pointers or hardware-dictated structures) for representing entity associations. These concepts represented a major step forward in meeting the general goal of data independence.

Codd (1971) noted that an entity set could be viewed as a mathematical relation on a set of domains, where each domain corresponds to a different property of the entity set. Associations among entities could be similarly represented, with the domains in this case corresponding to entity identifiers. Codd defined a (data) relation to be a time-varying subset of the cartesian product of a set of domains, the members of which are n-tuples (or simply tuples). Codd proposed that relations be built exclusively on domains of elementary values (integers, character strings, and so forth). He called such relations normalized relations, and the process of converting relations to normalized form, normalization. Virtually all work done with relations has been with normalized relations.

Codd also postulated levels of normalization called normal forms. An unconstrained normalized relation is in first normal form (1NF). A relation in 1NF in which all nonkey domains are functionally dependent on (i.e., have their values determined by) the entire key is in second normal form (2NF), which solves the problem of parasitic entity representation. A relation in 2NF in which all nonkey domains are dependent only on the key is in third normal form (3NF), which solves the problem of masquerading entities.

As part of the development of the relational method, Codd postulated a relational algebra, i.e., a set of operations on relations that is closed in the sense of a traditional algebra, and thereby provided an important formal vehicle for carrying out a variety of research in data structures and systems. In addition to the conventional set operations, the relational algebra provides such operations as restriction, to select the tuples of a relation that satisfy a condition; projection, to delete selected domains of a relation; and join, to combine two relations into one. Codd also proposed a relational calculus, whose distinguishing feature is the method used to designate sets of tuples.
The method is patterned after the predicate calculus and makes use of free and bound variables and the universal and existential quantifiers.

Codd characterized his methodology as a data model, and thereby provided a concise term for an important but previously unarticulated database concept, namely, the combination of a class of data structures and the operations allowed on the structures of the class. (A similar concept, the abstract data type or data abstraction, has evolved elsewhere in software technology.) The term "model" has been applied retroactively to early data-structuring methods, so that, for example, we now speak of hierarchic models and network models, as well as the relational model. The term is now generally used to denote an abstract data-structure class, although there is a growing realization that it should embrace operations as well as structures. The relational model has been implemented in a number of DBMSs.
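As a rough illustration of the restriction, projection, and join operations described above, relations can be modeled as lists of tuples. This is a sketch only, not the interface of any particular DBMS:

```python
# Minimal sketch of three relational-algebra operations over relations
# modeled as lists of dictionaries (one dictionary per tuple).

def restrict(relation, predicate):
    """Restriction: keep only the tuples satisfying the predicate."""
    return [t for t in relation if predicate(t)]

def project(relation, attributes):
    """Projection: keep only the named domains, dropping duplicates."""
    seen, result = set(), []
    for t in relation:
        reduced = tuple((a, t[a]) for a in attributes)
        if reduced not in seen:
            seen.add(reduced)
            result.append(dict(reduced))
    return result

def join(r1, r2):
    """Natural join: combine tuples that agree on the common domains."""
    common = set(r1[0]) & set(r2[0]) if r1 and r2 else set()
    return [{**t1, **t2} for t1 in r1 for t2 in r2
            if all(t1[a] == t2[a] for a in common)]

employees = [{"emp": "smith", "dept": "d1"}, {"emp": "jones", "dept": "d2"}]
depts = [{"dept": "d1", "loc": "nyc"}, {"dept": "d2", "loc": "chi"}]
print(join(employees, depts))
print(project(restrict(employees, lambda t: t["dept"] == "d1"), ["emp"]))
```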

SEMANTIC DATA STRUCTURES

During the evolution of the hierarchic, network, and relational methods, it gradually became apparent that building a database was in fact equivalent to building a model of an enterprise, and that databases could be developed more or less independently of applications simply by studying the enterprise. The notion of a conceptual schema for the application-independent modeling of an enterprise, and of various external schemata derivable from the conceptual schema for expressing the data requirements of specific applications, is the outgrowth of this view.

Application-independent modeling has produced a spate of semantic data models and debate over which of these is best for modeling "reality." One of the most successful semantic models is the entity-relationship model, which provides data constructs at two levels: the conceptual level, whose constructs include entities, relationships (n-ary associations among entities), value sets, and attributes; and the representation level, in which conceptual constructs are mapped into tables.

DATA DEFINITION AND DATA-DEFINITION LANGUAGES

The history of computer applications has been marked by a steady increase in the level of the language used to implement applications. In database technology, this trend is manifested in the development of high-level data-definition languages and data-manipulation languages.

A data-definition language (DDL) provides the DBMS user with a way to declare the attributes of structure types within the database, and thus enables the system to perform implicitly many operations (e.g., name resolution, data-type checking) that would otherwise have to be invoked explicitly. A DDL typically provides for the definition of both logical and physical data attributes, as well as for the definition of different views of the (logical) data. The latter are useful in limiting or tailoring the way in which specific programs or end users look at the database.

A data-manipulation language (DML) provides the user with a way to express operations on the data-structure instances of a database, using names previously established through data definitions. Data-manipulation facilities are of two general types: host-language and self-contained. A host-language facility permits the manipulation of databases through programs written in conventional procedural languages such as COBOL or PL/I. It provides statements that the user may embed in a program at the points where database operations are to be performed. When such a statement is encountered, control is transferred to the database system, which performs the operation and returns the results (data and return codes) to the program in prearranged main-storage locations.

A self-contained facility permits the manipulation of the database through a high-level, nonprocedural language that is independent of any procedural language, i.e., a language that is self-contained. An important type of self-contained facility is the query facility, which enables "casual" users to access a database without the mediation of a professional programmer. Other types of self-contained facility are available for performing generalizable operations on data, such as sorting, report generation, and data translation.
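The host-language style can be sketched with Python's standard sqlite3 module: DDL and DML statements are embedded in an ordinary program, and results are returned to it. The table and columns are hypothetical:

```python
# Sketch of the host-language style of data manipulation: DDL and DML
# statements embedded in a conventional program, with results returned
# to the program in program variables.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition: declare the attributes of a structure type once,
# so the system can perform name resolution and type checking.
cur.execute("CREATE TABLE employee (emp_no INTEGER PRIMARY KEY,"
            " name TEXT, dept TEXT)")

# Data manipulation: operate on structure instances by name.
cur.execute("INSERT INTO employee VALUES (?, ?, ?)", (1042, "Smith", "D21"))
conn.commit()

for row in cur.execute("SELECT name FROM employee WHERE dept = ?", ("D21",)):
    print(row)   # results come back to the host program
```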

REPORT PROGRAM GENERATORS

The report program generator is a software package intended primarily for the production of reports from formatted files. Attributes of the source files and the desired reports are described by the user in a simple declarative language, and this description is then processed by a compiler to generate a program that, when run, produces the desired reports. A key concept of the report program generator is the use of a fixed structure for the generated program, consisting of input, calculation, and output phases. Such a structure limits the transformations that can be carried out with a single generated program, but it has nevertheless proved to be remarkably versatile. (Report program generators are routinely used for file maintenance as well as for report generation.) The fixed structure of the generated program imposes a discipline on the user, which enables the user to produce a running program much more quickly than could otherwise be done with conventional languages. Report program generators were especially popular in smaller installations where conventional programming talent was scarce, and in some installations they were the only "programming language" used.

The report program generators and the formatted file systems were the precursors of the contemporary DBMS query facility. A query processor is in effect a generalized routine that is particularized to a specific application (i.e., the user's query) by the parameters (data names, Boolean predicates, and so forth) appearing in the query. Query facilities are more advanced than most early generalized routines in that they provide online (as opposed to batch) access to databases (as opposed to individual files). The basic concept is unchanged, however, and the lessons learned in implementing the generalized routines, and especially in reconciling ease of use with acceptable performance, have been directly applicable to query-language processors.
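A minimal sketch of the fixed input/calculation/output structure, with a hypothetical file layout and report specification, might look like this:

```python
# Sketch of the report program generator's fixed program structure:
# an input phase, a calculation phase, and an output phase driven by a
# simple declarative description. The file layout is hypothetical.

records = [                      # input phase: the formatted source file
    {"dept": "D21", "salary": 52000},
    {"dept": "D21", "salary": 48000},
    {"dept": "D30", "salary": 61000},
]

report_spec = {"group_by": "dept", "total": "salary"}   # the declaration

totals = {}                      # calculation phase
for rec in records:
    key = rec[report_spec["group_by"]]
    totals[key] = totals.get(key, 0) + rec[report_spec["total"]]

for key in sorted(totals):       # output phase
    print(f"{key:10s} {totals[key]:>10d}")
```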


PROGRAM ISOLATION

Most DBMSs permit a database to be accessed concurrently by a number of users. If this access is not controlled, the consistency of the data can be compromised (e.g., lost updates) or the logic of programs can be affected (e.g., nonrepeatable read operations). With program isolation, records are locked for a program upon the updating of any item within the record and unlocked when the program reaches a synchpoint, i.e., a point at which the changes made by the program are committed to the database. Deadlocks can occur and are resolved by selecting one of the deadlocked programs and restarting it at its most recent synchpoint.
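A minimal sketch of the locking discipline just described (ignoring deadlock detection and waiting, which a real DBMS must provide) is:

```python
# Minimal sketch of program isolation: a record is locked for a program
# when the program first updates it, and all of the program's locks are
# released when it reaches a synchpoint.

locks = {}            # record id -> program id holding the lock

def update(program, record, new_value, database):
    holder = locks.get(record)
    if holder is not None and holder != program:
        raise RuntimeError(f"{program} must wait: {record} locked by {holder}")
    locks[record] = program          # lock on first update
    database[record] = new_value

def synchpoint(program):
    """Commit the program's changes and release its locks."""
    for record in [r for r, p in locks.items() if p == program]:
        del locks[record]

db = {"acct1": 100}
update("P1", "acct1", 90, db)
synchpoint("P1")                     # P2 may now lock acct1
```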

AUTHORIZATION

Consolidated data often constitute sensitive information that an enterprise may not want divulged to anyone other than authorized people, for reasons of national security, competitive advantage, or personal privacy. DBMSs therefore provide mechanisms for limiting data access to properly authorized persons. A system administrator can grant specific capabilities with respect to specific data objects to specific users. Grantable capabilities with respect to relations include the capability to read from the relation, to insert tuples, to delete tuples, to update specific fields, and to delete the relation. The holder of a capability may also be given authority to grant that capability to others, so that authorization tasks may be delegated to different individuals within an organization.
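A rough sketch of such a grant mechanism, with hypothetical users, objects, and capability names, is:

```python
# Sketch of an authorization scheme: a table of (user, object) ->
# capabilities, where a capability may carry the grant option.

grants = {}   # (user, obj) -> {capability: grantable?}

def grant(grantor, user, obj, capability, grantable=False):
    # The system administrator ("dba" here) can grant anything; other
    # users may pass on only capabilities they hold with grant option.
    if grantor != "dba" and not grants.get((grantor, obj), {}).get(capability):
        raise PermissionError(f"{grantor} may not grant {capability} on {obj}")
    grants.setdefault((user, obj), {})[capability] = grantable

def check(user, obj, capability):
    return capability in grants.get((user, obj), {})

grant("dba", "alice", "payroll", "read", grantable=True)
grant("alice", "bob", "payroll", "read")        # delegated authorization
print(check("bob", "payroll", "read"))          # True
print(check("bob", "payroll", "update"))        # False
```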


CHAPTER 18.6

ADVANCED COMPUTER TECHNOLOGY

BACKGROUND

Early computer systems were tightly integrated with locally attached I/O facilities, such as card readers and punches, tape readers and punches, tape drives, and discs. Data had to be moved physically to and from the site of the processing system. For certain systems, the turnaround time in such processes was considered excessive, and it was not long before steps were taken to interconnect remotely located I/O devices to a computer site via telephone and telegraph facilities.

Telecommunications today means any form of communication that exists to support the exchange of information between people and application programs in one of the three following combinations: person to person, person to process, or process to process. Thus, today, telecommunications encompasses telephony, facsimile transmission, television, and data communications. The impetus for this change has been continual advancement in digital technology, a common technological thread running through all the previously mentioned disciplines.

TERMINALS

Terminals are the mechanisms employed by individuals to interface to communications mechanisms and to represent them in their information exchange with other individuals or processes. Since the early advent of telecommunications, many terminal device types have come onto the market. The earliest and most popular devices were the keyboard printers made by Teletype Corporation and other manufacturers. These electromechanical machines operated at speeds from 5 to 11 characters per second.

Another popular type of terminal device is the cathode-ray-tube (CRT) visual display and keyboard. This dumb terminal operates at speeds from 1200 to 9600 b/s across common-carrier telephone facilities, or at megabit rates when directly attached to the local channel of a processor. Displays generally are alphanumeric or graphic, the preponderance of terminals being alphanumeric. Many such displays can be attached to a single control unit, or node, which can greatly reduce the overall cost of the electronic devices required to drive the multiple devices. The control unit can also reduce communications cost by concentrating the multiple-terminal data traffic from the locally connected devices into a single data stream and sharing a single channel between the controller and a host processor. Some controllers support the attachment of hard-copy printers and have sufficient logic to permit printout of the information on a display screen without communicating with a host computer. Displays are generally used in conversational or interactive modes of operation.

In recent years PCs have had a profound impact on the quantity of this type of display. Although the personal computer really is an intelligent workstation, most PCs can employ communications software subsystems to emulate nonintelligent visual display and keyboard devices. In this way, they can communicate with host software.


Many specialized transaction-type terminals are designed for specific industry applications and form another category of terminals. Special point-of-sale terminals for the retail industry, bank teller terminals and cash dispensers, and airline ticketing terminals are examples of transaction-type terminals.

A category of terminals that incorporates many of the features of the previous categories, the intelligent workstation, is really a remote mini- or microcomputer system that permits the distribution of application program functions to remote sites. Such intelligent terminals are capable of supporting a diversity of functions, from interactive processing to remote batched job entry. Some of these terminals contain such a high level of processing capability that they are, in fact, true host processors.

With the expanded scope of telecommunications, televisions and telephones are beginning to play an important role as terminals. Not that they were not terminals before; rather, they are now recognized as an important part of the terminal types to be found in the world of information. Terminals now support voice, data, and video communications within one single integrated package.

HOSTS

When people communicate with machines via their terminals, they are communicating with application programs. These programs offer services to the user, such as data processing (computation, word processing, editing, and so forth) and data storage or retrieval. In such applications as message switching and conferencing, hosts often act as intermediaries between two terminals. Thus they may contain application software that stores a sender's message until the designated receiver actively connects to the message application or to some program that has been made aware of the person's relationship to the host service.

Host applications also may have a need to cooperate with other host applications in distributed processors. Distributed processing and distributed database applications typically involve host-to-host communications. Host-to-host communications typically require much higher-speed communication channels than terminal-to-host traffic. Within recent years, host channel speeds have increased to tens of megabits per second, commensurate with the increased millions-of-instructions-per-second (MIPS) rate of processors.

Personal computers can also assume the role of a host. The difference between being a terminal and being a host is defined by the telecommunications software supporting the PC. Acceptance of this statement should be easy when one recognizes that PCs today come with the processing power and storage capacity attributable to what was once viewed as a sizable host processor.

COMMUNICATIONS SYSTEMS

For communications to occur between end users, two aspects must be present: (1) there must be interconnectivity between the communicating parties to permit the transference of the signals and bits that represent the information transfer, and (2) there must be commonality of representation and interpretation of the bits that represent the information.

In the early years of telecommunications, the world of connectivity was divided and classified as local and remote. Local implied limited distances from a host, usually with relatively high-speed connections (channel speeds). Terminals were connected to controllers that concentrated the data traffic. Remote connections depended on the services available from common carriers. These worlds were separate and distinct. Evolving standards obscure the earlier distinction between local and remote and introduce the new concept of networking. In the telecommunications world, networking is the interconnection and interoperability of systems for the purpose of providing communications service.

The types of communications services to be found in modern networks vary with the needs of subscribers and the service characteristics of the media available. The end users impose requirements in the form of service classifications. Examples of requirements are capacity, integrity, delay characterizations, acceptable cost, security, and connectivity. Each medium imposes constraints for which the systems may have to compensate. Thus a medium may have a limited capacity, it may be susceptible to error, it may introduce a propagation delay, it may lack security, and it may limit connectivity.

FIGURE 18.6.1a ISO-OSI model.

Most early communications networks were designed to meet specific subscriber needs, as was the case with the early telephone system, airline reservation systems, and the early warning system. Computer and transmission technology have changed this concept. The introduction of digital technology introduces a common denominator to the field of information communications that eases the sharing of the resources used in the creation of communications networks. However, the existence of millions of terminals and tens of thousands of processors designed to operate within the construct of their own private communications world, and the need for innovation to accommodate new and heretofore unanticipated subscriber needs, present a formidable challenge to the information communications industry.

In an attempt to bring about order, the International Standards Organization (ISO) has developed standards recommendations for an open system. An open system is one that conforms to the standards defined by ISO and thus facilitates connectivity and interoperability between systems manufactured by different vendors. Figure 18.6.1 depicts the ISO open system interconnect (OSI) model.

OSI REFERENCE MODEL

The OSI reference model describes a communication system in the seven hierarchic layers shown in Fig. 18.6.1. Each layer provides services to the layer above and invokes services from the layer below. Thus the end users of a communications system interconnect to the application layer, which provides the user interface and interprets user service requests. This layer is often thought of as a distributed operating system, because it supports the interconnectivity and communicability between end users that are distributed. The model is a general one and even permits two end users to communicate through the application layer interface of a common system, instead of through a shared local operating system. By hiding the difference between locally connected and remotely connected end users, the interconnected and interrelated application layer entities assume the role of a global operating system, as shown in Fig. 18.6.2.

FIGURE 18.6.2 Application layers as a global operating system.

In a single system, the operating system contains supervisory control logic for the resources, both logical and physical, that are allocated to provide services to the end user. In a distributed system, such as a communications system, the global supervisory service for all the layers resides in the application layer; hence the reason to view that layer as a global operating system.

Note that the model applies equally well to telephony. An end user, in requesting a connection service from the communications system, communicates with a supervisory controller. This controller is in the local exchange of a telephone operating company. The cooperating efforts of the distributed supervisory control components of the telephone systems are an analog of the distributed application layers of a data network, as shown in Fig. 18.6.1b.

FIGURE 18.6.1b ISO-OSI model with an intermediate node.

A user views the application layer as a global server. Requests for service are passed from the user to the communications system through an interface that typically has been defined by the serving system. Even though systems differ in their programs and hardware architectures, there is a growing recognition of the necessity for the introduction of standardization at this critical interface. Consider the difficulty in making a telephone call when traveling if every telephone company had chosen a different interface mechanism; this would require a knowledge of all the different methods, as is the case for persons who travel to foreign countries.

The services available from communications systems can be quite varied. In telephony, a basic transport and connection service is prevalent. However, as greater intelligence was placed within the systems of the telephone networks, the operating companies increased the types of services they provide. Systems are operational in the United States that integrate voice and data information through a common interface mechanism and across a shared medium. That medium is the existing twisted-pair wire that once carried only analog voice signals, or optical lines, which carry a much greater volume of data. Carrier networks offer electronic mail, packet switching, telemetry services, videotex, and many other services. The limit on what can or will be provided will be determined by either government regulation or market demand.

The layers that lie below the application layer exist to support communications between end users or between the communications services of application layers that exist in a distributed system, as is shown in Fig. 18.6.3. The application layer of system A can contain knowledge of the electronic mail server in system C and, at the behest of end user A (EUA), invoke that service. The application layer of system A can also support a connect service that enables it to create a communications path between system A and its equivalent partner (functional pair) in system B, for the purpose of supporting communications between EUA and EUB.

FIGURE 18.6.3 Example use of the application layer.

Each of the layers of the OSI model in Fig. 18.6.1 contributes some value to the communications service between communicating partners, be they end users or the distributed entities of communication systems. The application layer is a user of the presentation service layer, which is concerned with the differences that exist in the various processors and operating systems in which each of the distributed communications systems is implemented. The presentation service layer provides the service to overcome differences in coded representation, format, and the presentation of information. To use an analogy, if one system's machine talked and understood Greek and another system's machine talked and understood Latin, the presentation service layer would perform the necessary translation to permit the comprehension of the information exchanged between the two systems.

The presentation service layer is a user of the session layer, which manages the dialogue between two communicating partners. The session layer assures that the information exchange conforms to the rules necessary to satisfy the end users' needs. For example, if the exchange is to be by the two-way alternate mode, the session layer monitors and enforces this mode of exchange. It regulates the rate of flow to end users and assures that information is delivered in the same form and sequence in which it was transmitted, if that is a requirement. It is the higher layers' port into the transmission subsystem that encompasses the lower four layers of the OSI reference model in Fig. 18.6.1a.

The session layer is the user of the transport layer, which creates a logical pipe between the session layer of its system and that of any other system. The transport layer is responsible for selecting the appropriate lower-layer network to meet the service requirements of the session-layer entities and, where necessary, for enhancing lower-level network services by providing system end-to-end integrity, information resequencing, and system-to-system flow control to assure no underrun or overrun of the receiving system's resources.

The transport layer uses the network layer to create a logical path between two systems. Systems may be attached to each other in many different ways.
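The layering idea can be sketched as successive wrapping and unwrapping of the data unit; the header contents here are hypothetical placeholders, not any standard's formats:

```python
# Sketch of layering: on the way down, each layer wraps the unit it
# receives with its own header; on the way up, each layer strips the
# header addressed to it.

layers = ["application", "presentation", "session", "transport",
          "network", "link", "physical"]

def send(user_data):
    unit = user_data
    for layer in layers:                    # top of the stack downward
        unit = {"hdr": layer, "payload": unit}
    return unit                             # what goes onto the medium

def receive(unit):
    for layer in reversed(layers):          # bottom of the stack upward
        assert unit["hdr"] == layer
        unit = unit["payload"]
    return unit

frame = send("hello EUB")
print(receive(frame))                       # "hello EUB"
```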
The simplest way is through a single point-to-point connection. The network layer in such an example is very simple. However, witness the topology depicted in Fig. 18.6.4. To go from system A to system B involves passing through several different networks, each of which exhibits different characteristics of transmission service and each of which may differ radically in its software and hardware design, topology, and protocol (rules of operation). Such a network interconnection is rather typical. In fact, many interconnections can be far more complex. Consider the effect of the inclusion of an alternate packet-switch network PSB between local area network (LAN) A and PSA. The logic to decide which network or sequence of networks to employ must be dealt with within the network layer. This function is referred to as routing.

FIGURE 18.6.4 Creating a logical path between two systems.

However, to hide the presence of network complexity from the transport layer and thus provide the appearance of a single network, a unifying sublayer, called the internet layer, is inserted between the transport layer and the subnetworks, as shown in Fig. 18.6.5. Because networks differ with regard to the size of the data units they can handle, the network layer must deal with the breaking of information frames into the size required by the individual subnetworks within the path. The network layer must ultimately be capable of reassembling the information frames at the target system before passing them to that system's transport layer; a sketch of this fragmentation and reassembly appears at the end of this discussion.

FIGURE 18.6.5 Network layer structure.

The network layer must also address the problem of congestion caused by network resources being shared by large numbers of users attached to different systems. Because the end users of these different systems are independent of each other, it is possible that they may all attempt to access and use the same network resources simultaneously. Congestion can result in deadlock, and such an occurrence can bring a network to a halt. Thus network layers must offer means of deadlock avoidance, or of deadlock detection and correction.

The network layer is a user of the link-control layer, which is responsible for building a point-to-point connection between two system nodes that share a common communications circuit. Link control is aware only of its neighboring node(s) on the shared channel. Each new circuit requires a new link control, as illustrated in Fig. 18.6.6. It is possible to hide the existence of multiple circuits between two nodes by the inclusion of a multilink sublayer above the individual link-control layers. Link control performs such functions as error detection and correction, framing, flow control, sequencing, and channel-access control.

FIGURE 18.6.6 A new link control for each new channel.

Link control is the user of the physical layer. The physical layer is responsible for transforming the information frame into a form suitable for transmission onto the medium. Thus its major function is signaling, i.e., putting information onto the medium, removing it, and retransforming it into the code structure understandable by the link control.
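The fragmentation and reassembly function described above can be sketched as follows; the unit sizes are hypothetical:

```python
# Sketch of the internet-layer function described above: breaking an
# information frame into the unit size a subnetwork can carry, and
# reassembling it at the target system.

def fragment(frame, mtu):
    """Split a frame into (sequence_number, piece) fragments of <= mtu."""
    return [(seq, frame[i:i + mtu])
            for seq, i in enumerate(range(0, len(frame), mtu))]

def reassemble(fragments):
    """Rebuild the frame, tolerating out-of-order delivery."""
    return "".join(piece for _, piece in sorted(fragments))

frags = fragment("a frame too large for the subnetwork", mtu=8)
frags.reverse()                    # simulate out-of-order arrival
print(reassemble(frags))
```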

REAL SYSTEMS

It is important to understand that OSI describes a layered model that allocates the functions necessary in the process of communications to each of the layers. However, the actual implementation of any system to attain conformance with the model is left to individual system vendors. Thus, whether there are layers and where a function is actually executed within a single system are matters beyond the scope of ISO. The test of conformance is successful interoperability between systems.

However, one can use the model to place in proper context existing communications protocols, associated entities, services, and products. For example, modems are entities of the physical layer. A data service unit is another such entity, and a LAN adapter is still another. For the network layer, the list is still growing. There are many different subnetworks, each with its own routing schemes to effect the switching of information through the nodes of its unique communications subnetwork. Thus, when one looks at some of the network service providers, such as Telenet or Tymnet, different logical and physical entities performing the same services are observed.

Agreement exists among vendors and carriers that if systems are to interconnect, there is a need for a unifying network address structure to avoid ambiguity in identifying systems. There must also be a way of shielding the transport layer entities from the various differences within the possible networks that could be involved in achieving end-to-end communications. Such shielding is implemented in the protocols of the Internet and associated software, such as TCP/IP.

PACKET SWITCH

A packet switch is designed to provide the three lower-layer services of the OSI model (physical, link, and network) to its subscribers. The interface to access these services is X.25 (see Fig. 18.6.1b). When one observes the implementation of various vendor packet switches, it is difficult to see the layers. Most vendors tend to integrate their layers and thus do not clearly define the interfaces that would demonstrate the required layer-to-layer independence. However, because most packet-switched networks are in themselves closed systems, and each generally has its own unique routing algorithms and congestion and flow-control mechanisms, OSI is concerned only with conformance at the top-layer service interface (X.25) and at the gateway between different packet networks (X.75).


SECTION 19

CONTROL SYSTEMS

Control is used to modify the behavior of a system so that it behaves in a specific desirable way over time. For example, we may want the speed of a car on the highway to remain as close as possible to 60 miles per hour in spite of possible hills or adverse wind; or we may want an aircraft to follow a desired altitude, heading, and velocity profile independent of wind gusts; or we may want the temperature and pressure in a reactor vessel in a chemical process plant to be maintained at desired levels. All these are accomplished today by control methods, and the above are examples of what automatic control systems are designed to do, without human intervention. Control is used whenever quantities such as speed, altitude, temperature, or voltage must be made to behave in some desirable way over time. This section provides an introduction to control system design methods. D.C.

In This Section:

CHAPTER 19.1 CONTROL SYSTEM DESIGN 19.3
INTRODUCTION 19.3
MATHEMATICAL DESCRIPTIONS 19.4
ANALYSIS OF DYNAMICAL BEHAVIOR 19.10
CLASSICAL CONTROL DESIGN METHODS 19.14
ALTERNATIVE DESIGN METHODS 19.21
ADVANCED ANALYSIS AND DESIGN TECHNIQUES 19.26
APPENDIX: OPEN- AND CLOSED-LOOP STABILIZATION 19.27
REFERENCES 19.29
ON THE CD-ROM 19.29

On the CD-ROM: “A Brief Review of the Laplace Transform Useful in Control Systems,” by the authors of this section, examines its usefulness in control systems analysis and design.




CHAPTER 19.1

CONTROL SYSTEM DESIGN

Panos Antsaklis, Zhiqiang Gao

INTRODUCTION

To gain some insight into how an automatic control system operates, we shall briefly examine the speed control mechanism in a car. It is perhaps instructive to consider first how a typical driver may control the car speed over uneven terrain. The driver, by carefully observing the speedometer, and appropriately increasing or decreasing the fuel flow to the engine using the gas pedal, can maintain the speed quite accurately. Higher accuracy can perhaps be achieved by looking ahead to anticipate road inclines.

An automatic speed control system, also called cruise control, works by using the difference, or error, between the actual and desired speeds, together with knowledge of the car's response to fuel increases and decreases, to calculate via some algorithm an appropriate gas pedal position, so as to drive the speed error to zero. This decision process is called a control law, and it is implemented in the controller. The system configuration is shown in Fig. 19.1.1. The car dynamics of interest are captured in the plant. Information about the actual speed is fed back to the controller by sensors, and the control decisions are implemented via a device, the actuator, that changes the position of the gas pedal. The knowledge of the car's response to fuel increases and decreases is most often captured in a mathematical model.

Certainly in an automobile today there are many more automatic control systems, such as the antilock brake system (ABS), emission control, and traction control. The use of feedback control preceded control theory, outlined in the following sections, by over 2000 years. The first feedback device on record is the famous water clock of Ktesibios in Alexandria, Egypt, from the third century BC.

Proportional-Integral-Derivative Control

The proportional-integral-derivative (PID) controller, defined by

u = k_P\, e + k_I \int e + k_d\, \dot{e} \qquad (1)

is a particularly useful control approach that was invented over 80 years ago. Here kP, kI, and kd are controller parameters to be selected, often by trial and error or by the use of a lookup table in industry practice. The goal, as in the cruise control example, is to drive the error to zero in a desirable manner. All three terms in Eq. (1) have explicit physical meanings in that e is the current error, \int e is the accumulated error, and \dot{e} represents the trend. This, together with the basic understanding of the causal relationship between the control signal (u) and the output (y), forms the basis for engineers to "tune," or adjust, the controller parameters to meet the design specifications. This intuitive design, as it turns out, is sufficient for many control applications.

To this day, PID control is still the predominant method in industry and is found in over 95 percent of industrial applications. Its success can be attributed to the simplicity, efficiency, and effectiveness of this method.
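A minimal discrete-time sketch of the law in Eq. (1), with hypothetical gains and a crude first-order stand-in for the car dynamics, is:

```python
# Minimal discrete-time sketch of the PID law of Eq. (1): the control
# signal is a weighted sum of the current error, the accumulated error,
# and the error trend. Gains and plant model are hypothetical.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}
    def pid(error):
        state["integral"] += error * dt                       # accumulated error
        if state["prev_error"] is None:
            derivative = 0.0
        else:
            derivative = (error - state["prev_error"]) / dt   # trend
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return pid

# Drive a crude first-order "car" model toward a 60 mi/h set point.
dt = 0.1
pid, speed = make_pid(kp=0.5, ki=0.3, kd=0.05, dt=dt), 0.0
for _ in range(200):
    u = pid(60.0 - speed)                 # error = set point - output
    speed += dt * (-0.2 * speed + u)      # hypothetical car dynamics
print(round(speed, 1))                    # settles near 60
```

With these (arbitrarily chosen) gains the simulated speed settles near the 60 mi/h set point, illustrating how the integral term supplies the steady-state pedal position that the proportional term alone cannot.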

FIGURE 19.1.1 Feedback control configuration with cruise control as an example.

The Role of Control Theory

To design a controller that makes a system behave in a desirable manner, we need a way to predict the behavior of the quantities of interest over time, specifically how they change in response to different inputs. Mathematical models are most often used to predict future behavior, and control system design methodologies are based on such models. Understanding control theory requires engineers to be well versed in basic mathematical concepts and skills, such as solving differential equations and using the Laplace transform. The role of control theory is to help us gain insight on how and why feedback control systems work and how to systematically deal with various design and analysis issues. Specifically, the following issues are of both practical importance and theoretical interest:

1. Stability and stability margins of closed-loop systems.
2. How fast and how smoothly the error between the output and the set point is driven to zero.
3. How well the control system handles unexpected external disturbances, sensor noises, and internal dynamic changes.

In the following, modeling and analysis are first introduced, followed by an overview of the classical design methods for single-input single-output plants, design evaluation methods, and implementation issues. Alternative design methods are then briefly presented. Finally, for the sake of simplicity and brevity, the discussion is restricted to linear, time-invariant systems. Results may be found in the literature for the cases of linear, time-varying systems, and also for nonlinear systems, systems with delays, systems described by partial differential equations, and so on; these results, however, tend to be more restricted and case dependent.

MATHEMATICAL DESCRIPTIONS

Mathematical models of physical processes are the foundations of control theory. The existing analysis and synthesis tools are all based on certain types of mathematical descriptions of the systems to be controlled, also called plants. Most require that the plants be linear, causal, and time invariant. Three different mathematical models for such plants, namely, linear ordinary differential equations, state variable or state space descriptions, and transfer functions, are introduced below.

Linear Differential Equations

In control system design the most common mathematical models of the behavior of interest are, in the time domain, linear ordinary differential equations with constant coefficients, and in the frequency or transform domain, transfer functions obtained from time-domain descriptions via Laplace transforms. Mathematical models of dynamic processes are often derived using physical laws such as Newton's and Kirchhoff's. As an example consider first a simple mechanical system, a spring/mass/damper. It consists of a weight m on a spring with spring constant k, its motion damped by friction with coefficient f (Fig. 19.1.2).

FIGURE 19.1.2 Spring, mass, and damper system.

FIGURE 19.1.3 RLC circuit.

If y(t) is the displacement from the resting position and u(t) is the force applied, it can be shown using Newton's law that the motion is described by the following linear, ordinary differential equation with constant coefficients:

\ddot{y}(t) + \frac{f}{m}\,\dot{y}(t) + \frac{k}{m}\,y(t) = \frac{1}{m}\,u(t)

where \dot{y}(t) = dy(t)/dt, with initial conditions

y(t)\big|_{t=0} = y(0) = y_0 \qquad \text{and} \qquad \frac{dy(t)}{dt}\bigg|_{t=0} = \dot{y}(0) = y_1

Note that in the next subsection the trajectory y(t) is determined, in terms of the system parameters, the initial conditions, and the applied input force u(t), using a methodology based on the Laplace transform. The Laplace transform is briefly reviewed on the CD-ROM.

For a second example consider an electric RLC circuit with i(t) the input current of a current source, and v(t) the output voltage across a load resistance R (Fig. 19.1.3). Using Kirchhoff's laws one may derive

\ddot{v}(t) + \frac{R}{L}\,\dot{v}(t) + \frac{1}{LC}\,v(t) = \frac{R}{LC}\,i(t)

which describes the dependence of the output voltage v(t) on the input current i(t). Given i(t) for t ≥ 0, the initial values v(0) and \dot{v}(0) must also be given to uniquely define v(t) for t ≥ 0.

It is important to note the similarity between the two differential equations that describe the behavior of a mechanical and an electrical system, respectively. Although the interpretation of the variables is completely different, their relations are described by the same linear, second-order differential equation with constant coefficients. This fact is well understood and leads to the study of mechanical, thermal, and fluid systems via convenient electric circuits.

State Variable Descriptions

Instead of working with many different types of higher-order differential equations that describe the behavior of the system, it is possible to work with an equivalent set of standardized first-order vector differential equations that can be derived in a systematic way. To illustrate, consider the spring/mass/damper example. Let x1(t) = y(t) and x2(t) = \dot{y}(t) be new variables, called state variables. Then the system is equivalently described by the equations

\dot{x}_1(t) = x_2(t) \qquad \text{and} \qquad \dot{x}_2(t) = -\frac{k}{m}\,x_1(t) - \frac{f}{m}\,x_2(t) + \frac{1}{m}\,u(t)


with initial conditions x1(0) = y0 and x2(0) = y1. Since y(t) is of interest, the output equation y(t) = x1(t) is also added. These can be written as

\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -k/m & -f/m \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u(t), \qquad y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}

which are of the general form

\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t)

Here x(t) is a 2 × 1 vector (a column vector) with elements the two state variables x1(t) and x2(t). It is called the state vector. The variable u(t) is the input and y(t) is the output of the system. The first equation is a vector differential equation called the state equation. The second equation is an algebraic equation called the output equation. In the above example D = 0; D is called the direct link, as it directly connects the input to the output, as opposed to connecting through x(t) and the dynamics of the system.

The above description is the state variable or state space description of the system. The advantage is that system descriptions can be written in a standard form (the state space form) for which many mathematical results exist. We shall present a number of them in this section.

A state variable description of a system can sometimes be derived directly, and not through a higher-order differential equation. To illustrate, consider the circuit example presented above. Using Kirchhoff's current law,

C\,\frac{dv_c}{dt} = i - i_L

and from the voltage law,

L\,\frac{di_L}{dt} = -R\,i_L + v_c

If the state variables are selected to be x1 = vc and x2 = iL, then the equations may be written as

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & -1/C \\ 1/L & -R/L \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1/C \\ 0 \end{bmatrix} i, \qquad v = \begin{bmatrix} 0 & R \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

where v = R iL = R x2 is the output of interest. Note that the choice of state variables is not unique. In fact, if we start from the second-order differential equation and set x1 = v and x2 = \dot{v}, we derive an equivalent state variable description, namely,

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1/LC & -R/L \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ R/LC \end{bmatrix} i, \qquad v = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

Equivalent state variable descriptions are obtained by a change in the basis (coordinate system) of the vector state space. Any two equivalent representations

\dot{x} = Ax + Bu, \quad y = Cx + Du \qquad \text{and} \qquad \dot{\bar{x}} = \bar{A}\bar{x} + \bar{B}u, \quad y = \bar{C}\bar{x} + \bar{D}u


are related by \bar{A} = PAP^{-1}, \bar{B} = PB, \bar{C} = CP^{-1}, \bar{D} = D, and \bar{x} = Px, where P is a square and nonsingular matrix. Note that state variables can represent physical quantities that may be measured, for instance, x1 = vc (voltage) and x2 = iL (current) in the above example; or they can be mathematical quantities, which may not have a direct physical interpretation.

Linearization. The linear models studied here are very useful not only because they describe linear dynamical processes, but also because they can be approximations of nonlinear dynamical processes in the neighborhood of an operating point. The idea in linear approximations of nonlinear dynamics is analogous to using Taylor series approximations of functions to extract a linear approximation. A simple example is that of a simple pendulum, \dot{x}_1 = x_2, \dot{x}_2 = -k \sin x_1, where for small excursions from the equilibrium at zero, \sin x_1 is approximately equal to x_1 and the equations become linear, namely, \dot{x}_1 = x_2, \dot{x}_2 = -k x_1.

FIGURE 19.1.4 The transfer function model.

Transfer Functions

The transfer function of a linear, time-invariant system is the ratio of the Laplace transform of the output Y(s) to the Laplace transform of the corresponding input U(s) with all initial conditions assumed to be zero (Fig. 19.1.4).

From Differential Equations to Transfer Functions. Let the equation

\frac{d^2 y(t)}{dt^2} + a_1 \frac{dy(t)}{dt} + a_0 y(t) = b_0 u(t)

with some initial conditions

y(t)\big|_{t=0} = y(0) \qquad \text{and} \qquad \frac{dy(t)}{dt}\bigg|_{t=0} = \dot{y}(0)

describe a process of interest, for example, a spring/mass/damper system; see the previous subsection. Taking the Laplace transform of both sides we obtain

[s^2 Y(s) - s\,y(0) - \dot{y}(0)] + a_1 [s\,Y(s) - y(0)] + a_0 Y(s) = b_0 U(s)

where Y(s) = L{y(t)} and U(s) = L{u(t)}. Combining terms and solving with respect to Y(s) we obtain

Y(s) = \frac{b_0}{s^2 + a_1 s + a_0}\, U(s) + \frac{(s + a_1)\, y(0) + \dot{y}(0)}{s^2 + a_1 s + a_0}

Assuming the initial conditions are zero,

Y(s)/U(s) = G(s) = \frac{b_0}{s^2 + a_1 s + a_0}

where G(s) is the transfer function of the system defined above.

We are concerned with transfer functions G(s) that are rational functions, that is, ratios of polynomials in s, G(s) = n(s)/d(s). We are interested in proper G(s), for which \lim_{s \to \infty} G(s) < \infty. Proper G(s) have degree n(s) ≤ degree d(s).


In most cases degree n(s) < degree d(s), in which case G(s) is called strictly proper. Consider the transfer function

G(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0} \qquad \text{with } m \le n

Note that the system described by this G(s) (Y(s) = G(s)U(s)) is described in the time domain by the following differential equation:

y^{(n)}(t) + a_{n-1} y^{(n-1)}(t) + \cdots + a_1 y^{(1)}(t) + a_0 y(t) = b_m u^{(m)}(t) + \cdots + b_1 u^{(1)}(t) + b_0 u(t)

where y^{(n)}(t) denotes the nth derivative of y(t) with respect to time t. Taking the Laplace transform of both sides of this differential equation, assuming that all initial conditions are zero, one obtains the above transfer function G(s).

From State Space Descriptions to Transfer Functions. Consider \dot{x}(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) with initial conditions x(0); x(t) is in general an n-tuple, that is, a (column) vector with n elements. Taking the Laplace transform of both sides of the state equation,

s X(s) - x(0) = A X(s) + B U(s) \qquad \text{or} \qquad (s I_n - A) X(s) = B U(s) + x(0)

where I_n is the n × n identity matrix; it has 1 on all diagonal elements and 0 everywhere else, e.g.,

I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

Then

X(s) = (s I_n - A)^{-1} B U(s) + (s I_n - A)^{-1} x(0)

Taking now the Laplace transform of both sides of the output equation, we obtain Y(s) = C X(s) + D U(s). Substituting, we obtain

Y(s) = [C (s I_n - A)^{-1} B + D]\, U(s) + C (s I_n - A)^{-1} x(0)

The response y(t) is the inverse Laplace transform of Y(s). Note that the second term on the right-hand side of the expression depends on x(0), and it is zero when the initial conditions are zero, i.e., when x(0) = 0. The first term describes the dependence of Y(s) on U(s), and it is not difficult to see that the transfer function G(s) of the system is

G(s) = C (s I_n - A)^{-1} B + D

Example

Consider the spring/mass/damper example discussed previously with state variable description \dot{x} = Ax + Bu, y = Cx. If m = 1, f = 3, k = 2, then

A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}

and its transfer function G(s) (Y(s) = G(s)U(s)) is

G(s) = C (s I_2 - A)^{-1} B = \begin{bmatrix} 1 & 0 \end{bmatrix} \left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix} \right)^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s & -1 \\ 2 & s+3 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix}

= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{s^2 + 3s + 2} \begin{bmatrix} s+3 & 1 \\ -2 & s \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \frac{1}{s^2 + 3s + 2}

as before.
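The same computation can be checked numerically; the following sketch (using numpy) evaluates C(sI − A)^{-1}B at a few complex frequencies and compares it with 1/(s^2 + 3s + 2):

```python
# Numerical check of the example above: evaluate C(sI - A)^(-1)B at a
# few complex values of s and compare with the rational form.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def G_state_space(s):
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B).item()

def G_rational(s):
    return 1.0 / (s**2 + 3*s + 2)

for s in [1.0 + 0.0j, 2.0 + 1.0j, 0.5 - 3.0j]:
    assert np.isclose(G_state_space(s), G_rational(s))
print("state-space and rational forms agree")
```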


Using the state space description and properties of the Laplace transform, an explicit expression for y(t) in terms of u(t) and x(0) may be derived. To illustrate, consider the scalar case \dot{z} = az + bu with initial condition z(0). Using the Laplace transform,

Z(s) = \frac{1}{s-a}\, z(0) + \frac{b}{s-a}\, U(s)

from which

z(t) = L^{-1}\{Z(s)\} = e^{at} z(0) + \int_0^t e^{a(t-\tau)}\, b\, u(\tau)\, d\tau

Note that the second term is a convolution integral. Similarly, in the vector case, given

\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t)

it can be shown that

x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B\, u(\tau)\, d\tau

and

y(t) = C e^{At} x(0) + \int_0^t C e^{A(t-\tau)} B\, u(\tau)\, d\tau + Du(t)

Notice that e^{At} = L^{-1}\{(sI - A)^{-1}\}. The matrix exponential e^{At} is defined by the (convergent) series

e^{At} = I + At + \frac{A^2 t^2}{2!} + \cdots + \frac{A^k t^k}{k!} + \cdots = I + \sum_{k=1}^{\infty} \frac{t^k}{k!} A^k
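The series definition suggests a simple numerical sketch: truncate the sum to approximate e^{At} and use it to compute the zero-input response x(t) = e^{At}x(0). The number of terms retained here (20) is an arbitrary choice that suffices for small t:

```python
# Sketch: approximate e^(At) by truncating the series above, and use it
# in the zero-input response x(t) = e^(At) x(0). numpy only.
import numpy as np

def expm_series(A, t, terms=20):
    result, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k        # builds (At)^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # spring/mass/damper example
x0 = np.array([1.0, 0.0])                  # initial displacement, zero velocity
for t in (0.0, 0.5, 1.0):
    print(t, expm_series(A, t) @ x0)       # zero-input response x(t)
```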

Poles and Zeros. The n roots of the denominator polynomial d(s) of G(s) are the poles of G(s). The m roots of the numerator polynomial n(s) of G(s) are the (finite) zeros of G(s).

Example

(Fig. 19.1.5)

G(s) = \frac{s+2}{s^2 + 2s + 2} = \frac{s+2}{(s+1)^2 + 1} = \frac{s+2}{(s+1-j)(s+1+j)}

G(s) has one (finite) zero at −2 and two complex conjugate poles at −1 ± j. In general, a transfer function with m zeros and n poles can be written as

G(s) = k\, \frac{(s - z_1) \cdots (s - z_m)}{(s - p_1) \cdots (s - p_n)}

where k is the gain.

Frequency Response

The frequency response of a system is given by its transfer function G(s) evaluated at s = j\omega, that is, G(j\omega). The frequency response is a very useful means of characterizing a system, since typically it can be determined experimentally, and since control system specifications are frequently expressed in terms of the frequency response. When the poles of G(s) have negative real parts, the system turns out to be bounded-input/bounded-output (BIBO) stable. Under these conditions the frequency response G(j\omega) has a clear physical meaning, and this fact can be used to determine G(j\omega) experimentally. In particular, it can be shown that if the input u(t) = k \sin(\omega_o t) is applied to a system with a stable transfer function G(s) (Y(s) = G(s)U(s)), then the output y(t) at steady state (after all transients have died out) is given by

y_{ss}(t) = k\, |G(\omega_o)| \sin[\omega_o t + \theta(\omega_o)]

FIGURE 19.1.5 Complex conjugate poles of G(s).

where |G(\omega_o)| denotes the magnitude of G(j\omega_o) and \theta(\omega_o) = \arg G(j\omega_o) is the argument or phase of the complex quantity G(j\omega_o). Applying sinusoidal inputs with different frequencies \omega_o and measuring the magnitude and phase of the output at steady state, it is possible to determine the full frequency response of the system, G(j\omega_o) = |G(\omega_o)|\, e^{j\theta(\omega_o)}.
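A sketch of this computation for the transfer function G(s) = 1/(s^2 + 3s + 2) used earlier, evaluating G(jω) directly at a few frequencies:

```python
# Sketch: compute the frequency response G(jw) = |G(w)| e^(j theta(w))
# by direct evaluation of the transfer function at s = jw.
import numpy as np

def G(s):
    return 1.0 / (s**2 + 3*s + 2)

for w in (0.1, 1.0, 10.0):
    gjw = G(1j * w)
    magnitude = abs(gjw)
    phase = np.angle(gjw)                  # theta(w), in radians
    print(f"w = {w:5.1f}  |G| = {magnitude:.4f}  phase = {phase:+.4f} rad")
```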

ANALYSIS OF DYNAMICAL BEHAVIOR

System Response, Modes, and Stability

It was shown above how the response of a system to an input under some given initial conditions can be calculated from its differential equation description using Laplace transforms. Specifically, y(t) = L^{-1}{Y(s)}, where

$$Y(s) = \frac{n(s)}{d(s)}U(s) + \frac{m(s)}{d(s)}$$

with n(s)/d(s) = G(s), the system transfer function; the numerator m(s) of the second term depends on the initial conditions, and it is zero when all initial conditions are zero, i.e., when the system is initially at rest. In view now of the partial fraction expansion rules, Y(s) can be written as follows:

$$Y(s) = \frac{c_1}{s-p_1} + \cdots + \frac{c_{i1}}{s-p_i} + \frac{c_{i2}}{(s-p_i)^2} + \cdots + \frac{b_1 s + b_0}{s^2 + a_1 s + a_0} + \cdots + I(s)$$

This expression shows the real poles of G(s), namely p1, p2, …, and it allows for multiple poles pi; it also shows complex conjugate poles a ± jb written as second-order terms. I(s) denotes the terms due to the input U(s); they are fractions with the poles of U(s). Note that if G(s) and U(s) have common poles, they are combined to form multiple-pole terms. Taking now the inverse Laplace transform of Y(s):

$$y(t) = \mathcal{L}^{-1}\{Y(s)\} = c_1 e^{p_1 t} + \cdots + c_{i1}e^{p_i t} + (\cdot)\,t e^{p_i t} + \cdots + e^{at}[(\cdot)\sin bt + (\cdot)\cos bt] + \cdots + i(t)$$

where i(t) depends on the input. The terms of the form ct^k e^{p_i t} are the modes of the system. The system behavior is the aggregate of the behaviors of the modes. Each mode depends primarily on the location of the pole pi; the location of the zeros affects the size of its coefficient c. If the input u(t) is a bounded signal, i.e., |u(t)| < ∞ for all t, then all the poles of I(s) have real parts that are negative or zero, and this implies that i(t) is also bounded for all t. In that case, the response y(t) of the system will be bounded for any bounded u(t) if and only if all the poles of G(s) have strictly negative real parts. Note that poles of G(s) with real parts equal to zero are not allowed, since if U(s) also has poles at the same locations, y(t) will be unbounded. Take, for example, G(s) = 1/s and consider the bounded step input U(s) = 1/s; the response y(t) = t, which is not bounded. Having all the poles of G(s) located in the open left half of the s-plane is very desirable, and it corresponds to the system being stable. In fact, a system is bounded-input, bounded-output (BIBO) stable if and only if all poles of its transfer function have negative real parts. If at least one of the poles has positive


real parts, then the system is unstable. If a pole has zero real part, the term marginally stable is sometimes used. Note that in a BIBO stable system, if there is no forcing input but only initial conditions are allowed to excite the system, then y(t) will go to zero as t goes to infinity. This is a very desirable property for a system to have, because nonzero initial conditions always exist in most real systems. For example, disturbances such as interference may add charge to a capacitor in an electric circuit, or a sudden brief gust of wind may change the heading of an aircraft. In a stable system the effect of the disturbances will diminish, and the system will return to its previous desirable operating condition. For these reasons a control system should first and foremost be guaranteed to be stable, that is, it should always have poles with negative real parts. There are many design methods to stabilize a system, or to preserve its stability if it is initially stable, and several are discussed later in this section.

Response of First- and Second-Order Systems

Consider a system described by the first-order differential equation ẏ(t) + a0 y(t) = a0 u(t), and let y(0) = 0. In view of the previous subsection, the transfer function of the system is

$$G(s) = \frac{a_0}{s + a_0}$$

and the response to a unit step input q(t) (q(t) = 1 for t ≥ 0, q(t) = 0 for t < 0) may be found as follows:

$$y(t) = \mathcal{L}^{-1}\{Y(s)\} = \mathcal{L}^{-1}\{G(s)U(s)\} = \mathcal{L}^{-1}\left\{\frac{a_0}{s+a_0}\cdot\frac{1}{s}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{s} - \frac{1}{s+a_0}\right\} = \left[1 - e^{-a_0 t}\right]q(t)$$

Note that the pole of the system is p = −a0 (in Fig. 19.1.6 we have assumed that a0 > 0). As that pole moves to the left on the real axis, i.e., as a0 becomes larger, the system becomes faster. This can be seen from the fact that the steady-state value of the system response,

$$y_{ss} = \lim_{t\to\infty} y(t) = 1$$

is approached by the trajectory of y(t) faster as a0 becomes larger. To see this, note that the value 1 − e^{−1} is attained at time t = 1/a0, which is smaller as a0 becomes larger; τ = 1/a0 is the time constant of this first-order system (see below for further discussion of the time constant of a system).

We now derive the response of a second-order system to a unit step input (Fig. 19.1.7). Consider a system described by

$$\ddot y(t) + a_1\dot y(t) + a_0 y(t) = a_0 u(t)$$

which gives rise to the transfer function

FIGURE 19.1.6 Pole location of a first-order system.

$$G(s) = \frac{a_0}{s^2 + a_1 s + a_0}$$

Here the steady-state value of the response to a unit step is

$$y_{ss} = \lim_{s\to 0} sG(s)\frac{1}{s} = 1$$

Note that this normalization or scaling to 1 is in fact the reason for selecting the constant numerator to be a0. G(s) above does not have any finite zeros, only poles, as we want to study first the effect of the poles on the system behavior. We shall discuss the effect of adding a zero or an extra pole later.


FIGURE 19.1.7 Step response of a first-order plant.

It is customary, and useful as we will see, to write the above transfer function as

$$G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

where ζ is the damping ratio of the system and ωn is the (undamped) natural frequency of the system, i.e., the frequency of oscillations when the damping is zero. The poles of the system are

$$p_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}$$

When ζ > 1 the poles are real and distinct, and the unit step response approaches its steady-state value of 1 without overshoot. In this case the system is overdamped. The system is called critically damped when ζ = 1, in which case the poles are real, repeated, and located at −ζωn. The more interesting case is when the system is underdamped (ζ < 1). In this case the poles are complex conjugate and are given by

$$p_{1,2} = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} = \sigma \pm j\omega_d$$

The response to a unit step input in this case is

$$y(t) = \left[1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\sin(\omega_d t + \theta)\right]q(t)$$

where θ = cos^{-1}ζ = tan^{-1}(√(1 − ζ²)/ζ), ωd = ωn√(1 − ζ²), and q(t) is the step function. The response to an impulse input (u(t) = δ(t)), also called the impulse response h(t) of the system, is given in this case by

$$h(t) = \frac{\omega_n}{\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\sin\left(\omega_n\sqrt{1-\zeta^2}\;t\right)q(t)$$

The second-order system is parameterized by the two parameters ζ and ωn. Different choices for ζ and ωn lead to different pole locations and to different behavior of (the modes of) the system. Figure 19.1.8 shows the relation between the parameters and the pole location.


FIGURE 19.1.8 Relation between pole location and parameters.

Time Constant of a Mode and of a System. The time constant of a mode ce^{pt} of a system is the time value that makes |pt| = 1, i.e., τ = 1/|p|. For example, in the above first-order system we have seen that τ = 1/a0 = RC. A pair of complex conjugate poles p1,2 = σ ± jω gives rise to a term of the form Ce^{σt} sin(ωt + θ). In this case τ = 1/|σ|, i.e., τ is again the inverse of the distance of the pole from the imaginary axis. The time constant of a system is the time constant of its dominant modes.

Transient Response Performance Specifications for a Second-Order Underdamped System

For the system

$$G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

and a unit step input, explicit formulas for important measures of performance of its transient response can be derived. Note that the steady state is

$$y_{ss} = \lim_{s\to 0} sG(s)\frac{1}{s} = 1$$

The rise time tr shows how long it takes for the system's output to rise from 0 to 66 percent of its final value (equal to 1 here), and it can be shown to be tr = (π − θ)/ωd, where θ = cos^{-1}ζ and ωd = ωn√(1 − ζ²). The settling time ts is the time required for the output to settle within some percentage, typically 2 or 5 percent, of its final value; ts ≅ 4/ζωn is the 2 percent settling time (ts ≅ 3/ζωn is the 5 percent settling time). Before the underdamped system settles, it will overshoot its final value. The peak time tp measures the time it takes for the output to reach its first (and highest) peak value, and Mp measures the actual overshoot that occurs, in percentage terms of the final value, at time tp:

$$t_p = \frac{\pi}{\omega_d}, \qquad M_p = 100\,e^{-\zeta\pi/\sqrt{1-\zeta^2}}\ \%$$

It is important to notice that the overshoot depends only on ζ. Typically, tolerable overshoot values are between 2.5 and 25 percent, which correspond to damping ratios ζ between 0.8 and 0.4.
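These formulas are easy to verify numerically. The sketch below (an added illustration using SciPy; ζ = 0.4 and ωn = 10 rad/s are arbitrary test values, not from the text) simulates the step response and compares the simulated peak time and overshoot with tp = π/ωd and Mp:

```python
import numpy as np
from scipy import signal

zeta, wn = 0.4, 10.0                 # test values for an underdamped system
G = signal.TransferFunction([wn**2], [1, 2*zeta*wn, wn**2])
t, y = signal.step(G, T=np.linspace(0, 2, 20001))

wd = wn * np.sqrt(1 - zeta**2)
print(np.pi / wd, 100*np.exp(-zeta*np.pi/np.sqrt(1 - zeta**2)))  # tp ~0.343 s, Mp ~25.4 %
print(t[np.argmax(y)], 100*(y.max() - 1))    # simulated peak time and overshoot: same values
```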


Effect of Additional Poles and Zeros

The addition of an extra pole in the left half of the s-plane (LHP) tends to slow the system down; the rise time of the system, for example, will become larger. When the pole is far to the left of the imaginary axis, its effect tends to be small; the effect becomes more pronounced as the pole moves toward the imaginary axis. The addition of a zero in the LHP has the opposite effect, as it tends to speed the system up. Again, the effect of a zero far to the left of the imaginary axis tends to be small; it becomes more pronounced as the zero moves closer to the imaginary axis. The addition of a zero in the right half of the s-plane (RHP) has a delaying effect much more severe than that of an additional LHP pole. In fact, an RHP zero causes the response (say, to a step input) to start off in the wrong direction: it will first move down and become negative, for example, before it becomes positive again and moves toward its steady-state value. Systems with RHP zeros are called nonminimum phase systems (for reasons that will become clearer after the discussion of the frequency design methods) and are typically difficult to control. Systems with only LHP poles (stable) and LHP zeros are called minimum phase systems.

CLASSICAL CONTROL DESIGN METHODS

In this section, we focus on the problem of controlling a single-input, single-output (SISO) LTI plant. It is understood from the above sections that such a plant can be represented by a transfer function Gp(s). The closed-loop system is shown in Fig. 19.1.9. The goal of feedback control is to make the output of the plant, y, follow the reference input r as closely as possible. Classical design methods are those used to determine the controller transfer function Gc(s) so that the closed-loop system, represented by the transfer function

$$G_{CL}(s) = \frac{G_c(s)G_p(s)}{1 + G_c(s)G_p(s)}$$

has desired characteristics.

Design Specifications and Constraints

The design specifications are typically described in terms of the step response, i.e., r is the set point, described as a step-like function. These specifications are given in terms of the transient response and the steady-state error, assuming the feedback control system is stable. The transient response is characterized by the rise time, i.e., the time it takes for the output to reach 66 percent of its final value; the settling time, i.e., the time it takes for the output to settle within 2 percent of its final value; and the percent overshoot, which is how much the output exceeds the set point r, in percentage terms, during the period that y converges to r. The steady-state error refers to the difference, if any, between y and r as y reaches its steady-state value.

There are many constraints a control designer has to deal with in practice, as shown in Fig. 19.1.10. They can be described as follows:

1. Actuator Saturation: The input u to the plant is physically limited to a certain range, beyond which it "saturates," i.e., becomes a constant.
2. Disturbance Rejection and Sensor Noise Reduction: There are always disturbances and sensor noises in the plant to be dealt with.

FIGURE 19.1.9 Feedback control configuration.


FIGURE 19.1.10 Closed-loop simulator setup.

3. Dynamic Changes in the Plant: Physical systems are almost never truly linear or time-invariant.
4. Transient Profile: In practice, it is often not enough to just move y from one operating point to another. How it gets there is sometimes just as important. The transient profile is a mechanism to define the desired trajectory of y in transition, which is of great practical concern. The smoothness of y and its derivatives, the energy consumed, the maximum value, and the rate of change required of the control action are all influenced by the choice of transient profile.
5. Digital Control: Most controllers are implemented today in digital form, which makes the sampling rate and quantization errors limiting factors in the controller performance.

Control Design Strategy Overview

The control strategies are summarized here in ascending order of complexity and, hopefully, performance.

1. Open-Loop Control: If the plant transfer function is known and there is very little disturbance, a simple open-loop controller, where Gc(s) is an approximate inverse of Gp(s), as shown in Fig. 19.1.11, would satisfy most design requirements. Such a control strategy has been used as an economic means of controlling stepper motors, for example.

FIGURE 19.1.11 Open-loop control configuration.

2. Feedback Control with a Constant Gain: With significant disturbance and dynamic variations in the plant, feedback control, as shown in Fig. 19.1.9, is the only choice; see also the Appendix. Its simplest form is Gc(s) = k, or u = ke, where k is a constant. Such a proportional controller is very appealing because of its simplicity. The common problems with this controller are significant steady-state error and overshoot.

3. Proportional-Integral-Derivative Controller: To correct the above problems with the constant-gain controller, two additional terms are added:

$$u = k_p e + k_i \int e\,dt + k_d \dot e \qquad \text{or} \qquad G_c(s) = k_p + k_i/s + k_d s$$

This is the well-known PID controller, which is used by most engineers in industry today. The design can be quite intuitive: the proportional term usually plays the key role, with the integral term added to reduce or eliminate the steady-state error and the derivative term added to reduce the overshoot. The primary drawbacks of PID are that the integrator introduces phase lag that could lead to stability problems, and that the differentiator makes the controller sensitive to noise.
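As an added illustration of the control law above, the following minimal Python loop implements a discrete-time PID controller; the gains and the first-order test plant are hypothetical choices for demonstration, not values from the text:

```python
# Discrete PID loop: u = kp*e + ki*sum(e)*dt + kd*(e - e_prev)/dt.
# Gains and the test plant (dy/dt = -y + u) are illustrative only.
kp, ki, kd = 2.0, 1.0, 0.05
dt, r = 0.01, 1.0                 # sampling period and set point
y, integ, e_prev = 0.0, 0.0, 0.0
for k in range(2000):
    e = r - y
    integ += e * dt
    deriv = (e - e_prev) / dt if k > 0 else 0.0
    u = kp*e + ki*integ + kd*deriv          # the PID control law
    e_prev = e
    y += dt * (-y + u)                      # Euler step of the test plant
print(y)   # converges to the set point 1.0 with no steady-state error
```

Note how the integral term supplies the steady-state accuracy: with kp alone, this loop would settle at 2/3 instead of 1.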


FIGURE 19.1.12 Loop-shaping.

4. Root Locus Method: A significant portion of most current control textbooks is devoted to the question of how to place the poles of the closed-loop system in Fig. 19.1.9 at desired locations, assuming the desired locations are known. Root locus is a graphical technique for manipulating the closed-loop poles given the open-loop transfer function. This technique is most effective if disturbance rejection, plant dynamic variations, and sensor noise are not to be considered, because these properties cannot be easily linked to closed-loop pole locations.

5. Loop-Shaping Method: Loop-shaping [5] refers to the manipulation of the loop gain frequency response, L(jω) = Gp(jω)Gc(jω), as a control design tool. It is the only existing design method that can bring most of the design specifications and constraints, as discussed above, under one umbrella and systematically find a solution. This makes it a very useful tool in understanding, diagnosing, and solving practical control problems. The loop-shaping process consists of two steps:

a. Convert all design specifications to loop gain constraints, as shown in Fig. 19.1.12.
b. Find a controller Gc(s) to meet the specifications.

Loop-shaping as a concept and a design tool has helped practicing engineers greatly in improving PID loop performance and stability margins. For example, a PID implemented as a lead-lag compensator is commonly seen in industry today. This is where classical control theory provides the mathematical and design insights on why and how feedback control works. It has also laid the foundation for modern control theory.

FIGURE 19.1.13 A digital servo control design example. (The figure shows a motor coupled to the load through a belt and pulley.)

Example

Consider a motion control system as shown in Fig. 19.1.13. It consists of a digital controller, a dc motor drive (motor and power amplifier), and a load of 235 lb that is to be moved linearly by 12 in. in 0.3 s with an accuracy of 1 percent or better. A belt-and-pulley mechanism is used to convert the motor rotation into linear motion: a servo motor drives the load, coupled to it through the pulley. The design process involves:

1. Selection of components, including the motor, power amplifier, belt and pulley, and feedback devices (position sensor and/or speed sensor)
2. Modeling of the plant
3. Control design and simulation
4. Implementation and tuning


The first step results in a system with the following parameters:

1. Electrical:
• Winding resistance and inductance: Ra = 0.4 Ω, La = 8 mH (the transfer function from armature voltage to current is (1/Ra)/[(La/Ra)s + 1])
• Back emf constant: KE = 1.49 V/(rad/s)
• Power amplifier gain: Kpa = 80
• Current feedback gain: Kcf = 0.075 V/A

2. Mechanical:
• Torque constant: Kt = 13.2 in.-lb/A
• Motor inertia: Jm = 0.05 lb-in.·s²
• Pulley radius: Rp = 1.25 in.
• Load weight: W = 235 lb (including the assembly)
• Total inertia: Jt = Jm + Jl = 0.05 + (W/g)Rp² = 1.0 lb-in.·s²

With the maximum armature current set at 100 A, the maximum torque is KtIa,max = 13.2 × 100 = 1320 in.-lb, the maximum angular acceleration is 1320/Jt = 1320 rad/s², and the maximum linear acceleration is 1320 × Rp = 1650 in./s², or 4.27 g (1650/386). As it turns out, these are sufficient for this application.

The second step produces a simulation model (Fig. 19.1.14). A simplified transfer function of the plant, from the control input vc (in volts) to the linear position output xout (in inches), is

$$G_p(s) = \frac{206}{s(s+3)}$$

An open-loop controller is not suitable here because it cannot handle the torque disturbances and the inertia change in the load. Now consider the feedback control scheme of Fig. 19.1.9 with a constant controller, u = ke. The root locus plot in Fig. 19.1.15 indicates that, even at a high gain, the real part of the closed-loop poles does not exceed −1.5, which corresponds to a settling time of about 2.7 s. This is far slower than desired. In order to make the system respond faster, the closed-loop poles must be moved farther away from the jω axis. In particular, a settling time of 0.3 s or less corresponds to closed-loop poles with real parts smaller than −13.3. This is achieved by using a PD controller of the form Gc(s) = K(s + 3); any K ≥ 13.3/206 will result in a settling time of less than 0.3 s.

The above PD design is a simple solution commonly used in servo design. There are several issues, however, that cannot be completely resolved in this framework:

1. Low-frequency torque disturbance induces steady-state error that affects the accuracy.
2. The presence of a resonant mode within or close to the bandwidth of the servo loop may create undesirable vibrations.
3. Sensor noise may cause the control signal to be very noisy.
4. A change in the dynamics of the plant, for example, the inertia of the load, may require frequent tweaking of the controller parameters.
5. The step-like set-point change results in an initial surge in the control signal and could shorten the life span of the motor and other mechanical parts.

These are problems that most control textbooks do not adequately address, but they are of significant importance in practice. The first three problems can be tackled using the loop-shaping design technique introduced above. The tuning problem is an industrywide design issue and the focus of various research and development efforts. The last problem is addressed by employing a smooth transient as the set point, instead of a step-like set point. This is known as the "motion profile" in industry.
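The nominal PD design above can be checked numerically. In the following Python sketch (an added illustration using SciPy), the (s + 3) factor of the controller cancels the plant pole, so the loop gain reduces to 206K/s and the closed loop to a single pole at −206K; the simulation confirms the 0.3-s settling time for K = 13.3/206:

```python
import numpy as np
from scipy import signal

K = 13.3 / 206
# Gc(s)Gp(s) = K(s+3) * 206/[s(s+3)] = 206K/s, so the closed loop is
# T(s) = 206K/(s + 206K): a single pole at -206K = -13.3.
T_cl = signal.TransferFunction([206*K], [1, 206*K])
t, y = signal.step(T_cl, T=np.linspace(0, 0.6, 6001))
t_settle = t[np.abs(y - 1) > 0.02][-1]     # last time outside the 2% band
print(t_settle)                            # ~0.30 s, as designed
```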

FIGURE 19.1.14 Simulation model of the motion control system. (The block diagram includes the power amplifier (Kpa = 80, ±160 V), armature dynamics, back emf constant Ke = 1.49, torque constant Kt = 13.2, total inertia 1/Jt = 1.0, pulley radius Rp = 1.25, current feedback gain Kcf = 0.075, and a torque disturbance input.)


FIGURE 19.1.15 Root locus plot of the servo design problem.

Evaluation of Control Systems

Analysis of control systems provides crucial insights to control practitioners on why and how feedback control works. Although the use of PID precedes the birth of classical control theory of the 1950s by at least two decades, it is the latter that established the control engineering discipline. The core of classical control theory is the frequency-response-based analysis techniques, namely, Bode and Nyquist plots, stability margins, and so forth. In particular, by examining the loop gain frequency response of the system in Fig. 19.1.9, that is, L(jω) = Gc(jω)Gp(jω), and the sensitivity function 1/[1 + L(jω)], one can determine the following:

1. How fast the control system responds to the command or disturbance input (i.e., the bandwidth).
2. Whether the closed-loop system is stable (Nyquist Stability Theorem); if it is stable, how much dynamic variation it takes to make the system unstable (in terms of the gain and phase change in the plant). This leads to the definition of gain and phase margins. More broadly, it defines how robust the control system is.
3. How sensitive the performance (or closed-loop transfer function) is to changes in the parameters of the plant transfer function (described by the sensitivity function).
4. The frequency range and the amount of attenuation for the input and output disturbances shown in Fig. 19.1.10 (again described by the sensitivity function).

Evidently, these characteristics obtained via frequency-response analysis are invaluable to control engineers. The efforts to improve these characteristics led to the lead-lag compensator design and, eventually, the loop-shaping technique described above.

Example: The PD controller in Fig. 19.1.10 is known to be sensitive to sensor noise. A practical cure to this problem is to add a low-pass filter to the controller to attenuate high-frequency noise, that is,

$$G_c(s) = \frac{13.3(s+3)}{206\left(\dfrac{s}{133}+1\right)^2}$$


FIGURE 19.1.16 Bode plot evaluation of the control design.

The loop gain transfer function is now

$$L(s) = G_p(s)G_c(s) = \frac{13.3}{s\left(\dfrac{s}{133}+1\right)^2}$$

The bandwidth of the low-pass filter is chosen to be one decade higher than the loop gain bandwidth to maintain proper gain and phase margins. The Bode plot of the new loop gain, as shown in Fig. 19.1.16, indicates that (a) the feedback system has a bandwidth of 13.2 rad/s, which corresponds to the specified 0.3-s settling time, and (b) this design has adequate stability margins (gain margin is 26 dB and phase margin is 79°).

Digital Implementation

Once the controller is designed and simulated successfully, the next step is to digitize it so that it can be programmed into the processor in the digital control hardware. To do this:

1. Determine the sampling period Ts and the number of bits used in the analog-to-digital converter (ADC) and digital-to-analog converter (DAC).
2. Convert the continuous-time transfer function Gc(s) to its corresponding discrete-time transfer function Gcd(z) using, for example, Tustin's method, s = (2/T)(z − 1)/(z + 1).
3. From Gcd(z), derive the difference equation, u(k) = g(u(k − 1), u(k − 2), . . . , y(k), y(k − 1), . . .), where g is a linear algebraic function.

After the conversion, the sampled-data system, with the plant running in continuous time and the controller in discrete time, should be verified in simulation first before the actual implementation. The quantization error and sensor noise should also be included to make the simulation realistic. The minimum sampling frequency required for a given control system design has not been established analytically. The rule of thumb given in control textbooks is that fs = 1/Ts should be chosen approximately 30 to 60 times the bandwidth of the closed-loop system. A lower sampling frequency is possible after careful tuning, but aliasing, or signal distortion, will occur when the data to be sampled have significant energy above the


Nyquist frequency. For this reason, an antialiasing filter is often placed in front of the ADC to filter out the high-frequency content in the signal. Typical ADC and DAC chips have 8, 12, and 16 bits of resolution; this is the length of the binary number used to approximate an analog one. The selection of the resolution depends on the noise level in the sensor signal and the accuracy specification. For example, suppose the sensor noise level, say 0.1 percent, must be below the accuracy specification, say 0.5 percent. Allowing one bit for the sign, an 8-bit ADC with a resolution of 1/2⁷, or 0.8 percent, is not good enough; similarly, a 16-bit ADC with a resolution of 0.003 percent is unnecessary, because several bits would be "lost" in the sensor noise. Therefore, a 12-bit ADC, which has a resolution of about 0.05 percent, is appropriate for this case. This is an example of an "error budget," as it is known among designers, where components are selected economically so that the sources of inaccuracy are distributed evenly.

Converting Gc(s) to Gcd(z) is a matter of numerical integration. Many methods have been suggested; some are too simple and inaccurate (such as Euler's forward and backward methods), others are too complex. Tustin's method suggested above, also known as the trapezoidal method or bilinear transformation, is a good compromise. Once the discrete transfer function Gcd(z) is obtained, finding the corresponding difference equation that can be easily programmed in C is straightforward. For example, given a controller with input e(k) and output u(k), and the transfer function

$$G_{cd}(z) = \frac{z+2}{z+1} = \frac{1+2z^{-1}}{1+z^{-1}}$$

the corresponding input-output relationship is

$$u(k) = \frac{1+2q^{-1}}{1+q^{-1}}\,e(k)$$

or equivalently, (1 + q⁻¹)u(k) = (1 + 2q⁻¹)e(k), where q⁻¹ is the unit time delay operator, i.e., q⁻¹u(k) = u(k − 1). That is, u(k) = −u(k − 1) + e(k) + 2e(k − 1).

Finally, the presence of sensor noise usually requires that an antialiasing filter be used in front of the ADC to avoid distortion of the signal in the ADC. The phase lag from such a filter must not be significant at the crossover frequency (bandwidth), or it will reduce the stability margin or even destabilize the system. This puts yet another constraint on the controller design.
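The following Python sketch (an added illustration; the 1-ms sampling period is an assumed value, not from the text) shows both steps described above: SciPy's bilinear (Tustin) conversion applied to the noise-filtered PD controller designed earlier, and the difference equation of the simple Gcd(z) = (z + 2)/(z + 1) example implemented directly:

```python
import numpy as np
from scipy import signal

# Tustin (bilinear) discretization of Gc(s) = 13.3(s+3)/[206(s/133+1)^2].
num = 13.3/206 * np.array([1.0, 3.0])
den = np.array([1/133**2, 2/133, 1.0])
dt = 0.001                                  # assumed sampling period
num_d, den_d, _ = signal.cont2discrete((num, den), dt, method='bilinear')
print(num_d, den_d)                         # coefficients of Gcd(z)

# Difference equation of the text's example Gcd(z) = (z+2)/(z+1):
# u(k) = -u(k-1) + e(k) + 2 e(k-1)
u_prev, e_prev, out = 0.0, 0.0, []
for ek in [1.0, 0.0, 0.0, 0.0]:             # unit pulse input
    uk = -u_prev + ek + 2*e_prev
    out.append(uk)
    u_prev, e_prev = uk, ek
print(out)                                  # [1.0, 1.0, -1.0, 1.0]
```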

ALTERNATIVE DESIGN METHODS

Nonlinear PID

Using a nonlinear PID (NPID) is an alternative to PID for better performance. It maintains the simplicity and intuition of PID, but empowers it with nonlinear gains. An example of NPID is shown in Fig. 19.1.17.

FIGURE 19.1.17 Nonlinear PID for a power converter control problem.


The need for integral control is reduced by making the proportional gain larger when the error is small. The limited-authority integral control has its gain zeroed outside a small interval around the origin to reduce the phase lag. Finally, the differential gain is reduced for small errors to reduce sensitivity to sensor noise. See Ref. 8.

State Feedback and Observer-Based Design

If the state-space model of the plant

$$\dot x = Ax + Bu, \qquad y = Cx + Du$$

is available, the pole-placement design can be achieved via state feedback

$$u = r + Kx$$

where K is the gain vector to be determined so that the eigenvalues of the closed-loop system

$$\dot x = (A + BK)x + Br, \qquad y = Cx + Du$$

are at the desired locations, assuming they are known. Usually the state vector is not available through measurements, and the state observer is of the form

$$\dot{\hat x} = A\hat x + Bu + L(y - \hat y), \qquad \hat y = C\hat x + Du$$

where x̂ is the estimate of x and L is the observer gain vector to be determined.

The state feedback design approach has the same drawbacks as those of the Root Locus approach, but the use of the state observer does provide a means to extract information about the plant that is otherwise unavailable in the previous control schemes, which are based on input-output descriptions of the plant. This proves valuable in many applications. In addition, the state-space methodologies are also applicable to systems with many inputs and outputs.

Controllability and Observability. Controllability and observability are useful system properties, defined as follows. Consider an nth-order system described by

$$\dot x = Ax + Bu, \qquad y = Cx + Du$$

where A is an n × n matrix. The system is controllable if it is possible to transfer any state to any other state in finite time. This property is important, as it measures, for example, the ability of a satellite system to reorient itself to face another part of the earth's surface using the available thrusters, or to shift the temperature in an industrial oven to a specified temperature. Two equivalent tests for controllability are: the system (or the pair (A, B)) is controllable if and only if the controllability matrix C = [B, AB, …, A^{n−1}B] has full (row) rank n; equivalently, if and only if [s_iI − A, B] has full (row) rank n for all eigenvalues s_i of A.

The system is observable if, by observing the output and the input over a finite period of time, it is possible to deduce the value of the state vector of the system. If, for example, a circuit is observable, it may be possible to determine all the voltages across the capacitors and all the currents through the inductances by observing the input and output voltages. The system (or the pair (A, C)) is observable if and only if the observability matrix

$$\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$$


has full (column) rank n; equivalently, if and only if

$$\begin{bmatrix} s_iI - A \\ C \end{bmatrix}$$

has full (column) rank n for all eigenvalues s_i of A.

Consider now the transfer function

$$G(s) = C(sI - A)^{-1}B + D$$

Note that, by definition, in a transfer function all possible cancellations between numerator and denominator polynomials are assumed to have already taken place. In general, therefore, the poles of G(s) are some (or all) of the eigenvalues of A. It can be shown that when the system is both controllable and observable, no cancellations take place, and so in this case the poles of G(s) are exactly the eigenvalues of A.

Eigenvalue Assignment Design. Consider the equations ẋ = Ax + Bu, y = Cx + Du, and u = r + Kx. When the system is controllable, K can be selected to assign the closed-loop eigenvalues to any desired locations (real or complex conjugate) and thus significantly modify the behavior of the open-loop system. Many algorithms exist to determine such a K. In the case of a single input, there is a convenient formula called Ackermann's formula,

$$K = -[0, \ldots, 0, 1]\,\mathcal{C}^{-1}\alpha_d(A)$$

where C = [B, …, A^{n−1}B] is the n × n controllability matrix and the roots of αd(s) are the desired closed-loop eigenvalues.

Example. Let

1/2 1  A= ,  1 2

1 B=  1

and let the desired eigenvalues be −1 ± j. Here

$$\mathcal{C} = [B, AB] = \begin{bmatrix} 1 & 3/2 \\ 1 & 3 \end{bmatrix}$$

Note that A has eigenvalues at 0 and 5/2. We wish to determine K so that the eigenvalues of A + BK are at −1 ± j, which are the roots of αd(s) = s² + 2s + 2. Here

$$\alpha_d(A) = A^2 + 2A + 2I = \begin{bmatrix} 1/2 & 1 \\ 1 & 2 \end{bmatrix}^2 + 2\begin{bmatrix} 1/2 & 1 \\ 1 & 2 \end{bmatrix} + 2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 17/4 & 9/2 \\ 9/2 & 11 \end{bmatrix}$$

Then

$$K = -[0\ \ 1]\,\mathcal{C}^{-1}\alpha_d(A) = [-1/6\ \ -13/3]$$
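A numerical check of Ackermann's formula for this example (an added Python illustration using NumPy; not part of the original text):

```python
import numpy as np

A = np.array([[0.5, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [1.0]])

Ctrb = np.hstack([B, A @ B])                   # controllability matrix [B, AB]
alpha_d = A @ A + 2*A + 2*np.eye(2)            # alpha_d(A) for s^2 + 2s + 2
K = -np.array([[0.0, 1.0]]) @ np.linalg.inv(Ctrb) @ alpha_d
print(K)                                       # [[-1/6, -13/3]]
print(np.linalg.eigvals(A + B @ K))            # -1+1j and -1-1j, as desired
```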


Here 1/3 −10/3 A + BK =   5/6 −7/3  which has the desired eigenvalues. Linear Quadratic Regulator (LQR) Problem.

Consider

x = Ax + Bu,

y = Cx

We wish to determine u(t), t ≥ 0, which minimizes the quadratic cost J (u) =



∫ 0  x T (t )( M T QM ) x + uT (t ) Ru(t )  dt

for any initial state x(0). The weighting matrices Q and R are real and symmetric (Q = Q^T, R = R^T), Q and R are positive definite (Q > 0, R > 0), and M^TQM is positive semidefinite (M^TQM ≥ 0). Since R > 0, the term u^TRu is always positive for any u ≠ 0, by definition; minimizing its integral forces u(t) to remain small. M^TQM ≥ 0 implies that x^TM^TQMx is positive or zero; it can be zero for some x ≠ 0, which allows some of the states to be treated as "do not care" states. Minimizing the integral of x^TM^TQMx forces the states to become smaller as time progresses. It is convenient to take Q (and R in the multi-input case) to be diagonal with positive entries on the diagonal. The above performance index is designed so that the minimizing control input drives the states to the zero state, or as close as possible, without using excessive control action, in fact minimizing the control energy. When (A, B, Q^{1/2}M) is controllable and observable, the solution u*(t) of this optimal control problem is a state feedback control law, namely,

$$u^*(t) = K^*x(t) = -R^{-1}B^TP_c^*\,x(t)$$

where P_c^* is the unique symmetric positive definite solution of the algebraic Riccati equation

$$A^TP_c + P_cA - P_cBR^{-1}B^TP_c + M^TQM = 0$$

Example. Consider

0  0 1  x˙ =   x + 1 u, y = [1 0]x 0 0     And let J=

∫ 0 ( y2 (t ) + 4u 2 (t ) ) dt ∞

Here M = C, Q = 1, R = 4, and

$$M^TQM = C^TC = \begin{bmatrix} 1 \\ 0 \end{bmatrix}[1\ \ 0] = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$

Solving the Riccati equation, we obtain

$$P_c^* = \begin{bmatrix} 2 & 2 \\ 2 & 4 \end{bmatrix}$$

and

$$u^*(t) = K^*x(t) = -\frac{1}{4}[0\ \ 1]\begin{bmatrix} 2 & 2 \\ 2 & 4 \end{bmatrix}x(t) = -\frac{1}{2}[1\ \ 2]\,x(t)$$


Linear State Observers. Since the states of a system contain a great deal of useful information, knowledge of the state vector is desirable. Frequently, however, it may be either impossible or impractical to obtain measurements of all states. Therefore, it is important to be able to estimate the states from the available measurements, namely, of inputs and outputs. Let the system be

$$\dot x = Ax + Bu, \qquad y = Cx + Du$$

An asymptotic state estimator of the full state, also called a Luenberger observer, is given by

$$\dot{\hat x} = A\hat x + Bu + L(y - \hat y)$$

where L is selected so that all eigenvalues of A − LC are in the LHP (have negative real parts). Note that an L that arbitrarily assigns the eigenvalues of A − LC exists if and only if the system is observable. The observer may be written as

$$\dot{\hat x} = (A - LC)\hat x + [B - LD,\ L]\begin{bmatrix} u \\ y \end{bmatrix}$$

which clearly shows the role of u and y; they are the inputs to the observer. If the error is e(t) = x(t) − x̂(t), then e(t) = e^{(A−LC)t}e(0), which shows that e(t) → 0, i.e., x̂(t) → x(t), as t → ∞. To determine an appropriate L, note that (A − LC)^T = A^T + C^T(−L^T), which has the form A + BK of the state feedback problem addressed above (with A → A^T, B → C^T, K → −L^T). One could also use the following observer version of Ackermann's formula, namely,

$$L = \alpha_d(A)\,\mathcal{O}^{-1}[0, \ldots, 0, 1]^T$$

where O is the observability matrix given above.

The gain L in the above estimator may be determined so that it is optimal in an appropriate sense. In the following, some of the key equations of such an optimal estimator (Linear Quadratic Gaussian (LQG)), also known as the Kalman-Bucy filter, are briefly outlined. Consider

$$\dot x = Ax + Bu + \Gamma w, \qquad y = Cx + v$$

where w and v represent process and measurement noise terms. Both w and v are assumed to be white, zero-mean Gaussian stochastic processes, i.e., they are uncorrelated in time and have expected values E[w] = 0 and E[v] = 0. Let

$$E[ww^T] = W, \qquad E[vv^T] = V$$

denote the covariances, where W and V are real, symmetric, positive definite matrices. Assume also that the noise processes w and v are independent, i.e., E[wv^T] = 0, that the initial state x(0) is a Gaussian random variable of known mean, E[x(0)] = x0, and known covariance E[(x(0) − x0)(x(0) − x0)^T] = Pe0, and that x(0) is independent of w and v. Consider now the estimator

$$\dot{\hat x} = (A - LC)\hat x + Bu + Ly$$

and let (A, ΓW^{1/2}, C) be controllable and observable. Then the error covariance E[(x − x̂)(x − x̂)^T] is minimized when the filter gain L* = P_e^*C^TV^{-1}, where P_e^* denotes the symmetric, positive definite solution of the (dual to control) algebraic Riccati equation

$$P_eA^T + AP_e - P_eC^TV^{-1}CP_e + \Gamma W\Gamma^T = 0$$


The above Riccati equation is the dual to the Riccati equation for optimal control, and can be obtained from the optimal control equation by making use of the substitutions

$$A \to A^T, \quad B \to C^T, \quad M \to \Gamma^T, \quad R \to V, \quad Q \to W$$

In the state feedback control law u = Kx + r, when state measurements are not available, it is common to use the state estimate x̂ from a Luenberger observer. That is, given

$$\dot x = Ax + Bu, \qquad y = Cx + Du$$

the control law is u = Kx̂ + r, where x̂ is the state estimate from the observer

$$\dot{\hat x} = (A - LC)\hat x + [B - LD,\ L]\begin{bmatrix} u \\ y \end{bmatrix}$$

The closed-loop system is then of order 2n, since the plant and the observer are each of order n. It can be shown that in this case of linear output feedback control design, the design of the control law and of the gain K (using, for example, LQR) can be carried out independently of the design of the estimator and the filter gain L (using, for example, LQG). This is known as the separation property. It is remarkable that the overall transfer function of the compensated system that includes the state feedback and the observer is

$$T(s) = (C + DK)[sI - (A + BK)]^{-1}B + D$$

which is exactly the transfer function one would obtain if the state x were measured directly and the state observer were not present. This is of course assuming zero initial conditions (to obtain the transfer function); if nonzero initial conditions are present, then there is some deterioration of performance owing to observer dynamics, and the fact that at least initially the state estimate typically contains significant error.

ADVANCED ANALYSIS AND DESIGN TECHNIQUES

The foregoing section described some fundamental analysis and design methods in classical control theory, the development of which was primarily driven by engineering practice and needs. Over the last few decades, vast efforts in control research have led to the creation of modern mathematical control theory, or advanced control, or control science. This development started with optimal control theory in the 1950s and 1960s to study the optimality of control design; a brief glimpse of optimal control was given above. In optimal control theory, a cost function is to be minimized, and analytical or computational methods are used to derive optimal controllers. Examples include the minimum-fuel problem, the time-optimal (bang-bang) control problem, LQ, H2, and H∞, each corresponding to a different cost function. Other major branches in modern control theory include multi-input multi-output (MIMO) control systems methodologies, which attempt to extend well-known SISO design methods and concepts to MIMO problems; adaptive control, designed to extend the operating range of a controller by automatically adjusting the controller parameters based on the estimated dynamic changes in the plant; and the analysis and design of nonlinear control systems, and so forth.

A key problem is the robustness of the control system. The analysis and design methods in control theory are all based on the mathematical model of the plant, which is an approximate description of the physical process. Whether a control system can tolerate the uncertainties in the dynamics of the plant, or how much uncertainty it takes to make a system unstable, is studied in robust control theory, where H2, H∞, and other analysis and design methods originated. Even with recent progress, open problems remain when dealing with real-world applications. Some recent approaches, such as in Ref. 8, attempt to address some of these difficulties in a realistic way.


APPENDIX: OPEN- AND CLOSED-LOOP STABILIZATION

It is impossible to stabilize an unstable system using open-loop control, owing to system uncertainties. In general, closed-loop or feedback control is necessary to control a system (stabilize it if unstable and improve its performance) because of uncertainties that are always present. Feedback provides current information about the system, so the controller does not have to rely solely on incomplete system information contained in a nominal plant model. These uncertainties are system parameter uncertainties and also uncertainties induced on the system by its environment, including uncertainties in the initial condition of the system and uncertainties due to disturbances and noise. Consider the plant with transfer function

$$G(s) = \frac{1}{s - (1+\varepsilon)}$$

where the pole location, nominally at +1, is inaccurately known.

The corresponding description in the time domain using differential equations is ẏ(t) − (1 + ε)y(t) = u(t). Solving, using the Laplace transform, we obtain sY(s) − y(0) − (1 + ε)Y(s) = U(s), from which

$$Y(s) = \frac{y(0)}{s-(1+\varepsilon)} + \frac{1}{s-(1+\varepsilon)}U(s)$$

Consider now the controller with transfer function

$$G_c(s) = \frac{s-1}{s+2}$$

The corresponding description in the time domain using differential equations is u̇(t) + 2u(t) = ṙ(t) − r(t). Solving, using the Laplace transform, we obtain sU(s) − u(0) + 2U(s) = sR(s) − r(0) − R(s), from which

$$U(s) = \frac{u(0) - r(0)}{s+2} + \frac{s-1}{s+2}R(s)$$

Connect now the plant and the controller in series (open-loop control)


The overall transfer function is

$$T = GG_c = \frac{s-1}{[s-(1+\varepsilon)](s+2)}$$

Including the initial conditions,

$$Y(s) = \frac{s-1}{[s-(1+\varepsilon)](s+2)}R(s) + \frac{(s+2)y(0) + u(0) - r(0)}{[s-(1+\varepsilon)](s+2)}$$

It is now clear that open-loop control cannot be used to stabilize the plant:

1. First, because of the uncertainties in the plant parameters. Note that the plant pole is not exactly at +1 but at 1 + ε, and so the controller zero cannot cancel the plant pole exactly.
2. Second, even if we had knowledge of the exact pole location, that is, ε = 0, so that

$$Y(s) = \frac{1}{s+2}R(s) + \frac{(s+2)y(0) + u(0) - r(0)}{(s-1)(s+2)}$$

still we cannot stabilize the system because of the uncertainty in the initial conditions. We cannot, for example, select r(0) so as to cancel the unstable pole at +1 because y(0) may not be known exactly. We shall now stabilize the above plant using a simple feedback controller.

Consider a unity feedback control system with the controller being just a gain k to be determined. The closed-loop transfer function is

$$T(s) = \frac{kG(s)}{1 + kG(s)} = \frac{k}{s - (1+\varepsilon) + k}$$

Working in the time domain, ẏ − (1 + ε)y = u = k(r − y), from which ẏ + [k − (1 + ε)]y = kr. Using the Laplace transform we obtain sY(s) − y(0) + [k − (1 + ε)]Y(s) = kR(s), and

$$Y(s) = \frac{k}{s + k - (1+\varepsilon)}R(s) + \frac{y(0)}{s + k - (1+\varepsilon)}$$


It is now clear that if the controller gain is selected so that k > 1 + ε, then the closed-loop system will be stable. In fact, we could have worked with the nominal system to derive k > 1 for stability. For stability robustness, we take k somewhat larger than 1 to provide a safety margin and satisfy k > 1 + ε for the unknown small ε.
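The contrast with open-loop control is easy to see in simulation. The short Python sketch below (an added illustration; ε = 0.1 and k = 2 are arbitrary test values) applies the constant-gain feedback law to the uncertain plant with a nonzero initial condition:

```python
# Feedback stabilization of ydot = (1 + eps) y + u with u = k (r - y).
eps, k, r = 0.1, 2.0, 1.0        # plant uncertainty, feedback gain, set point
dt, y = 0.001, 0.5               # Euler step size and a nonzero initial condition
for _ in range(10000):
    u = k * (r - y)              # unity feedback with constant gain k
    y += dt * ((1 + eps)*y + u)  # plant dynamics
print(y)   # bounded; settles near k/(k - (1+eps)) = 2/0.9, since k > 1 + eps
```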

REFERENCES

1. Dorf, R. C., and R. H. Bishop, "Modern Control Systems," 9th ed., Prentice Hall, 2001.
2. Franklin, G. F., J. D. Powell, and A. Emami-Naeini, "Feedback Control of Dynamic Systems," 3rd ed., Addison-Wesley, 1994.
3. Kuo, B. C., "Automatic Control Systems," 7th ed., Prentice Hall, 1995.
4. Ogata, K., "Modern Control Engineering," 3rd ed., Prentice Hall, 1997.
5. Rohrs, C. E., J. L. Melsa, and D. G. Schultz, "Linear Control Systems," McGraw-Hill, 1993.
6. Antsaklis, P. J., and A. N. Michel, "Linear Systems," McGraw-Hill, 1997.
7. Goodwin, G. C., S. F. Graebe, and M. E. Salgado, "Control System Design," Prentice Hall, 2001.
8. Gao, Z., Y. Huang, and J. Han, "An Alternative Paradigm for Control System Design," presented at the 40th IEEE Conference on Decision and Control, Dec. 4–7, 2001.

ON THE CD-ROM: “A Brief Review of the Laplace Transform Useful in Control Systems,” by the authors of this section, examines its usefulness in control systems analysis and design.


SECTION 20

AUDIO SYSTEMS

Although much of this section contains fundamental information on basic audio technology, extensive information on newly evolving digital audio formats and recording and reproduction systems has been added since the last edition. As noted herein, DVD-Audio, for example, is based on the same DVD technology as DVD-Video discs and DVD-ROM computer discs. It has a theoretical sampling rate of 192 kHz with 24-bit processing and can store 4.7 gigabytes on a disc with a choice of two- or six-channel audio tracks or a mix of both. Super Audio CD (SACD) has the same storage capacity. It uses direct stream digital (DSD) with 2.8-MHz sampling in three possible disc types. The first two contain only DSD data (4.7 gigabytes of data on a single-layer disc and slightly less than 9 gigabytes on the dual-layer disc). The third version, the SACD hybrid, combines a single 4.7-gigabyte layer with a conventional CD layer that can be played back on conventional CD players.

MPEG audio coding variations continue to evolve. For example:

• MPEG-1 is a low-bit-rate audio format.
• MPEG-2 extends MPEG-1 toward the audio needs of digital video broadcasting.
• MPEG-2 Advanced Audio Coding (AAC) is an enhanced multichannel coding system.
• MP3 is the popular name for MPEG-1 Layer III.
• MPEG-4 adds object-based representation, content-based interactivity, and scalability.
• MPEG-7 defines a universal standardized mechanism for exchanging descriptive data.
• MPEG-21 defines a multimedia framework to enable transparent and augmented use of multimedia services across a wide range of networks and devices used by different communities.

R.J.

In This Section:

CHAPTER 20.1 SOUND UNITS AND FORMATS 20.3
STANDARD UNITS FOR SOUND SPECIFICATION 20.3
TYPICAL FORMATS FOR SOUND DATA 20.7
REFERENCES 20.8

CHAPTER 20.2 SPEECH AND MUSICAL SOUNDS 20.9
SPEECH SOUNDS 20.9
MUSICAL SOUNDS 20.12
REFERENCES 20.17

CHAPTER 20.3 MICROPHONES, LOUDSPEAKERS, AND EARPHONES 20.18
MICROPHONES 20.18
LOUDSPEAKERS 20.25
EARPHONES 20.34
REFERENCES 20.36


CHAPTER 20.4 DIGITAL AUDIO RECORDING AND REPRODUCTION 20.37
INTRODUCTION 20.37
DIGITAL ENCODING AND DECODING 20.37
TRANSMISSION AND RECEPTION OF THE DIGITAL AUDIO SIGNAL 20.40
DIGITAL AUDIO TAPE RECORDING AND PLAYBACK 20.42
DIGITAL AUDIO DISC RECORDING AND PLAYBACK 20.44
OTHER APPLICATIONS OF DIGITAL SIGNAL PROCESSING 20.49
REFERENCES 20.52

On the CD-ROM: The following are reproduced from the 4th edition of this handbook: "Ambient Noise and Its Control," by Daniel W. Martin; "Acoustical Environment Control," by Daniel W. Martin; "Mechanical Disc Reproduction Systems," by Daniel W. Martin; "Magnetic-Tape Analog Recording and Reproduction," by Daniel W. Martin.


CHAPTER 20.1

SOUND UNITS AND FORMATS

Daniel W. Martin, Ronald M. Aarts

STANDARD UNITS FOR SOUND SPECIFICATION1,2

Sound Pressure

Airborne sound waves are a physical disturbance pattern in the air, an elastic medium, traveling through the air at a speed that depends somewhat on air temperature (but not on static air pressure). The instantaneous magnitude of the wave at a specific point in space and time can be expressed in various ways, e.g., displacement, particle velocity, and pressure. However, the most widely used and measured property of sound waves is sound pressure, the fluctuation above and below atmospheric pressure that results from the wave. An atmosphere (atm) of pressure is typically about 10⁵ pascals (Pa) in the International System of units. Sound pressure is usually a very small part of atmospheric pressure. For example, the minimum audible sound pressure (threshold of hearing) at 2000 Hz is 20 μPa, or 2 × 10⁻¹⁰ atm.

Sound-Pressure Level

Sound pressures important to electronics engineering range from the weakest noises that can interfere with sound recording to the strongest sounds a loudspeaker diaphragm should be expected to radiate. This range is approximately 10⁶. Consequently, for convenience, sound pressures are commonly plotted on a logarithmic scale called sound-pressure level, expressed in decibels (dB). The decibel, a unit widely used for other purposes in electronics engineering, originated in audio engineering (in telephony) and is named for Alexander Graham Bell. Because it is logarithmic, it requires a reference value for comparison, just as it does in other branches of electronics engineering. The reference pressure for sounds in air, corresponding to 0 dB, has been defined as a sound pressure of 20 μPa (previously 0.0002 dyn/cm²). This is the reference sound pressure p0 used throughout this section of the handbook. Thus the sound-pressure level Lp in decibels corresponding to a sound pressure p is defined by

Lp = 20 log (p/p0) dB    (1)

The reference pressure p0 approximates the weakest audible sound pressure at 2000 Hz. Consequently most decibel values for sound levels are positive in sign. Figure 20.1.1 relates sound-pressure level in decibels to sound pressure in micropascals. Sound power and sound intensity (power flow per unit area of wavefront) are generally proportional to the square of the sound pressure. Doubling the sound pressure quadruples the intensity in the sound field, requiring four times the power from the sound source.
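For quick calculations, Eq. (1) and its inverse are one-liners. The following small Python helper (an added illustration, not part of the original text) converts between sound pressure in pascals and sound-pressure level in decibels:

```python
import math

P0 = 20e-6   # reference sound pressure, Pa (20 micropascals)

def spl_db(p_pa):
    """Sound-pressure level in dB for a pressure in pascals, Eq. (1)."""
    return 20 * math.log10(p_pa / P0)

def pressure_pa(lp_db):
    """Inverse of Eq. (1): pressure in pascals for a level in dB."""
    return P0 * 10 ** (lp_db / 20)

print(spl_db(20e-6))   # 0 dB at the reference (threshold) pressure
print(spl_db(1.0))     # ~94 dB for 1 Pa
```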


FIGURE 20.1.1 Relation between sound pressure and sound-pressure level.1

FIGURE 20.1.2 The auditory area.

Audible Frequency Range

The international abbreviation Hz (hertz) is now used (instead of the former cps) for audible frequencies as well as the rest of the frequency domain. The limits of audible frequency are only approximate, because tactile sensations below 20 Hz overlap aural sensations above this lower limit. Moreover, only young listeners can hear pure sounds near or above 20 kHz, the nominal upper limit. Frequencies beyond both limits, however, have significance to audio-electronics engineers. For example, near-infrasonic (below 20 Hz) sounds are needed for classical organ music but can appear as noise in turntable rumble. Near-ultrasonic (above 20 kHz) intermodulation in audio circuits can produce undesirable difference-frequency components, which are audible.

The audible sound-pressure level range can be combined with the audible frequency range to describe an auditory area, shown in Fig. 20.1.2. The lowest curve shows the weakest audible sound-pressure level for listening with both ears to a pure tone while facing the sound source in a free field. The minimum level depends greatly on the frequency of the sound. It also varies somewhat among listeners. The levels that quickly produce discomfort or pain for listeners are only approximate, as indicated by the shaded and crosshatched areas of Fig. 20.1.2. Extended exposure can produce temporary (or permanent) loss of auditory area at sound-pressure levels as low as 90 dB.


Wavelength effects are of great importance in the design of sound systems and rooms because wavelength varies over a 3-decade range, much wider than is typical elsewhere in electronics engineering. Audible sound waves vary in length from 1 cm to 15 m. The dimensions of the sound sources and receivers used in electroacoustics also vary greatly, e.g., from 1 cm to 3 m. Sound waves follow the principles of geometrical optics and acoustics when the wavelength is very small relative to object size, and they pass completely around obstacles much smaller than a wavelength. This wide range of physical effects complicates the typical practical problem of sound production or reproduction.

Loudness Level

The simple, direct method for determining experimentally the loudness level of a sound is to match its observed loudness with that of a 1000-Hz sinewave reference tone of calibrated, variable sound-pressure level. (Usually this is a group judgment, or an average of individual judgments, in order to overcome individual observer differences.) When the two loudnesses are matched, the loudness level of the sound, expressed in phons, is defined as numerically equal to the sound-pressure level of the reference tone in decibels. For example, if a series of observers, each listening alternately to a machine noise and to a 1000-Hz reference tone, judge them (on the average) to be equally loud when the reference tone is adjusted to 86 dB at the observer location, the loudness level of the machine noise is 86 phons.

At 1000 Hz the decibel and phon levels are numerically identical, by definition. However, at other frequencies sinewave tones may have numerically quite different sound and loudness levels, as seen in Fig. 20.1.3. The dashed contour curves show the decibel level at each frequency corresponding to the loudness level identifying the curve at 1000 Hz. For example, a tone at 80 Hz and 70 dB lies on the contour marked 60 phons: its sound level must be 70 dB for it to be as loud as a 60-dB tone at 1000 Hz. Such differences at low frequencies, especially at low sound levels, are a characteristic of the sense of hearing. The fluctuations above 1000 Hz are caused by sound-wave diffraction around the head of the listener and resonances in the ear canal. This illustrates how human physiological and psychological characteristics complicate the application of purely physical concepts.

FIGURE 20.1.3 Equal-loudness-level contours.

Since loudness level is tied to 1000-Hz tones defined physically in magnitude, the loudness-level scale is not a truly psychological magnitude scale. Consequently, although one can say that 70 phons is louder than 60 phons, one cannot say how much louder.

Loudness

By using the phon scale to overcome the effects of frequency, psychophysicists have developed a true loudness scale based on numerous experimental procedures involving relative-loudness judgments. Loudness, measured in sones, has a direct relation to loudness level in phons, which is approximated in Fig. 20.1.4. (Below 30 phons the relation changes slope; since few practical problems require that range, it is omitted for simplicity.) A loudness of 1 sone has been defined as equivalent to a loudness level of 40 phons. It is evident in Fig. 20.1.4 that a 10-phon increase doubles the loudness in sones, i.e., makes the sound twice as loud. Thus a 20-phon increase in loudness level quadruples the loudness.
Another advantage of the sone scale is that the loudnesses of the components of a complex sound are additive, as long as the components are well separated on the frequency scale. For example (using Fig. 20.1.4), two tonal components at 100 and 4000 Hz having loudness levels of 70 and 60 phons, respectively, would have individual loudnesses of 8 and 4 sones, respectively, and a total loudness of 12 sones.
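A minimal sketch of this bookkeeping in Python, assuming the 2^((P − 40)/10) approximation to the sone-phon relation of Fig. 20.1.4 (valid above about 30 phons); the function names are illustrative:

def phons_to_sones(phons):
    # 1 sone = 40 phons; each 10-phon increase doubles the loudness
    return 2.0 ** ((phons - 40.0) / 10.0)

def total_loudness(phon_levels):
    # Loudnesses of widely separated components add on the sone scale
    return sum(phons_to_sones(p) for p in phon_levels)

# The example above: 70 phons (8 sones) + 60 phons (4 sones) = 12 sones
print(total_loudness([70.0, 60.0]))  # 12.0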


FIGURE 20.1.5 Typical line spectrum.1

FIGURE 20.1.4 Relation between loudness in sones and loudness level in phons.3

FIGURE 20.1.6 Continuous-level spectrum curve for a motor and blower.1


Detailed loudness computation procedures have been developed for highly complex sounds and noises, deriving the loudness in sones directly from a complete knowledge of the decibel levels for individual discrete components or noise bands. The procedures continue to be refined.

TYPICAL FORMATS FOR SOUND DATA

Sound and audio electronic data are frequently plotted as functions of frequency, time, direction, distance, or room volume. Frequency characteristics are the most common, in which the ordinate may be sound pressure, sound power, output-input ratio, percent distortion, or their logarithmic-scale (level) equivalents.

Sound Spectra

The frequency spectrum of a sound is a description of its resolution into components of different frequency and amplitude. Often the abscissa is a logarithmic frequency scale or a scale of octave (or fractional-octave) bands with each point plotted at the geometric mean of its band-limiting frequencies. Usually the ordinate scale is sound-pressure level. Phase differences are often ignored (except as they affect sound level) because they vary so greatly with measurement location, especially in reflective environments.

Line spectra are bar graphs for sounds dominated by discrete frequency components. Figure 20.1.5 is an example. Continuous spectra are curves showing the distribution of sound-pressure level within a frequency range densely packed with components. Figure 20.1.6 is an example. Unless stated otherwise, the ordinate of a continuous-spectrum curve, called spectrum level, is assumed to represent sound-pressure level for a band of 1-Hz width. Usually level measurements Lband are made in wider bands, then converted to spectrum level Lps by the bandwidth correction

Lps = Lband – 10 log (f2 – f1) dB    (2)

in which f1 and f2 are the lower- and upper-frequency limits of the band. When a continuous-spectrum curve is plotted automatically by a level recorder synchronized with a heterodyning filter or with a sequentially switched set of narrow-bandpass filters, any effect of changing bandwidth on curve slope must be considered.

Combination spectra are appropriate for many sounds in which strong line components are superimposed over more diffuse continuous spectral backgrounds. Bowed or blown musical tones and motor-driven fan noises are examples.

Octave spectra, in which the ordinate is the sound-pressure level for bands one octave wide, are very convenient for measurements and for specifications but lack fine spectrum detail. Third-octave spectra provide more detail and are widely used. One-third of an octave and one-tenth of a decade are so nearly identical that substituting the latter for the former is a practical convenience, providing a 10-band pattern that repeats every decade. Placing third-octave band zero at 1 Hz has conveniently made the band numbers equal 10 times the logarithm (base 10) of the band-center frequency; e.g., band 20 is at 100 Hz and band 30 at 1000 Hz.

Visual proportions of spectra (and other frequency characteristics) depend on the ratio of ordinate and abscissa scales. There is no universal or fully standard practice, but for ease of visual comparison of data and of specifications, it has become rather common practice in the United States for 30 dB of ordinate scale to equal (or slightly exceed) 1 decade of logarithmic frequency on the abscissa. Available audio and acoustical graph papers and automatic level-recorder charts have reinforced this practice. When the entire 120-dB range of auditory area is to be included in the graph, the ordinate is often compressed 2:1.
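The bandwidth correction of Eq. (2) and the third-octave band numbering described above are easily mechanized; the following Python sketch uses illustrative names and assumes the stated one-tenth-decade approximation:

import math

def spectrum_level(band_level_db, f1_hz, f2_hz):
    # Eq. (2): reduce a band level to the equivalent 1-Hz-band spectrum level
    return band_level_db - 10.0 * math.log10(f2_hz - f1_hz)

def third_octave_band_number(fc_hz):
    # Band number = 10 log10(band-center frequency): band 20 at 100 Hz, band 30 at 1000 Hz
    return 10.0 * math.log10(fc_hz)

print(spectrum_level(70.0, 707.0, 1414.0))  # 1-kHz octave band: ~41.5-dB spectrum level
print(third_octave_band_number(1000.0))     # 30.0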


Response and Distortion Characteristics

Output-input ratios versus frequency are the most common data format in audio-electronics engineering. The audio-frequency scale (20 Hz to 20 kHz) is usually logarithmic. The ordinate may be sound- or electric-output level in decibels as the frequency changes with a constant electric or sound input; or it may be the ratio of output to input (expressed in decibels), as long as they are linearly related within the range of measurement. When the response-frequency characteristic is measured with the input frequency filtered from the output, a distortion-frequency characteristic is the result. It can be further filtered to obtain curves for each harmonic if desired.

Directional Characteristics

Sound sources radiate almost equally in all directions when the wavelength is large compared with the source dimensions. At higher frequencies, where the wavelength is smaller than the source, the radiation becomes quite directional.

Time Characteristics

Any sound property can vary with time. It can build up, decay, or vary in magnitude periodically or randomly. A reverberant sound field decays approximately exponentially; consequently the sound level in decibels falls linearly when the time scale is linear. A decay rate of 33 dB/s, for example, corresponds to a reverberation time (the time for a 60-dB decay) of about 1.8 s.

REFERENCES

1. Harris, C. M. (ed.), “Handbook of Acoustical Measurements and Noise Control,” Chaps. 1 and 2, McGraw-Hill, 1991.
2. “Acoustical Terminology (Including Mechanical Shock and Vibration),” ANSI S1.1-1994, Acoustical Society of America, 1994.
3. ANSI Standard S3.4-1980 (R1986).


CHAPTER 20.2

SPEECH AND MUSICAL SOUNDS

Daniel W. Martin, Ronald M. Aarts

SPEECH SOUNDS

Speech Level and Spectrum

Both the sound-pressure level and the spectrum of speech sounds vary continuously and rapidly during connected discourse. Although speech may be arbitrarily segmented into elements called phonemes, each with a characteristic spectrum and level, actually one phoneme blends into another. Different talkers speak somewhat differently, and they sound different. Their speech characteristics vary from one time or mood to another. Yet in spite of all these differences and variations, statistical studies of speech have established a typical “idealized” speech spectrum. The spectrum level rises about 5 dB from 100 to 600 Hz, then falls about 6, 9, 12, and 15 dB in succeeding higher octaves.

Overall sound-pressure levels, averaged over time and measured at a distance of 1 m from a talker on or near the speech axis, lie in the range of 65 to 75 dB when the talkers are instructed to speak in a “normal” tone of voice. Along this axis the speech sound level follows the inverse-square law closely to within about 10 cm of the lips, where the level is about 90 dB. At the lips, where communication microphones are often used, the overall speech sound level typically averages over 100 dB.

The peak levels of speech sounds greatly exceed the long-time average level. Figure 20.2.1 shows the difference between short peak levels and average levels at different frequencies in the speech spectrum. The difference is greater at high frequencies, where the sibilant sounds of relatively short duration have spectrum peaks.
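The “idealized” spectrum described above can be tabulated as a rough sketch; the octave points and the 0-dB anchor at 600 Hz chosen here are an assumed reading of that description, not measured data:

# Octave-point levels in dB relative to the 600-Hz maximum (assumed anchor)
levels = {100: -5.0, 600: 0.0}
f, level = 600, 0.0
for drop in (6.0, 9.0, 12.0, 15.0):  # falls in succeeding higher octaves
    f *= 2
    level -= drop
    levels[f] = level
print(levels)  # {100: -5.0, 600: 0.0, 1200: -6.0, 2400: -15.0, 4800: -27.0, 9600: -42.0}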

Speech Directional Characteristics

Speech sounds are very directional at high frequencies. Figure 20.2.2 shows clearly why speech is poorly received behind a talker, especially in nonreflective environments. Above 4000 Hz the directional loss in level is 20 dB or more, which particularly affects the sibilant sound levels so important to speech intelligibility.

Vowel Spectra

Different vowel sounds are formed from approximately the same basic laryngeal tone spectrum by shaping the vocal tract (throat, back of mouth, mouth, and lips) to have different acoustical resonance-frequency combinations. Figure 20.2.3 illustrates the spectrum filtering process. The spectral peaks are called formants, and their frequencies are known as formant frequencies.


The shapes of the vocal tract, simplified models, and the acoustical results for three vowel sounds are shown in Fig. 20.2.4. A convenient graphical method for describing the combined formant patterns is shown in Fig. 20.2.5. Traveling around this vowel loop involves progressive motion of the jaws, tongue, and lips.

FIGURE 20.2.1 Difference in decibels between peak pressures of speech measured in short (1/8-s) intervals and rms pressure averaged over a long (75-s) interval.

Speech Intelligibility

More intelligibility is contained in the central part of the speech spectrum than near the ends. Figure 20.2.6 shows the effect on articulation (the percentage of syllables correctly heard) when low- and high-pass filters of various cutoff frequencies are used. From this information a special frequency scale has been developed in which each of 20 frequency bands contributes 5 percent to a total articulation index of 100 percent. This distorted frequency scale is used in Fig. 20.2.7. Also shown are the spectrum curves for speech peaks and for speech minima, lying approximately 12 and 18 dB, respectively, above and below the average-speech-spectrum curve. When all the shaded area (the 30-dB range between the maximum and minimum curves) lies above threshold and below overload, in the absence of noise, the articulation index is 100 percent.

If a noise-spectrum curve were added to Fig. 20.2.7, the figure would become an articulation-index computation chart for predicting communication capability. For example, if the ambient-noise spectrum coincided with the average-speech-spectrum curve, i.e., the signal-to-noise ratio were unity, only twelve-thirtieths of the shaded area would lie above the noise. The articulation index would be reduced accordingly to 40 percent.

Figure 20.2.8 relates monosyllabic word articulation and sentence intelligibility to articulation index. In the example above, for an articulation index of 0.40 approximately 70 percent of monosyllabic words and 96 percent of sentences would be correctly received. However, if the signal-to-noise ratio were kept at unity and the frequency range were reduced to 1000 to 3000 Hz, half the bands would be lost. The articulation index would drop to 0.20, word articulation to 30 percent, and sentence intelligibility to 70 percent. This shows the necessity for a wide frequency range in a communication system when the signal-to-noise ratio is marginal. Conversely, a good signal-to-noise ratio is required when the frequency range is limited. The articulation-index method is particularly valuable in complex intercommunication-system designs involving noise disturbance at both the transmitting and receiving stations. Simpler effective methods have also been developed, such as the rapid speech transmission index (RASTI).
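A minimal sketch of the articulation-index bookkeeping described above, under simplifying assumptions: 20 equally weighted bands, speech peaks 12 dB above and minima 18 dB below the band average, and per-band credit proportional to the part of the 30-dB speech range lying above the noise. It reproduces the 0.4 example but is not the full published computation procedure:

def articulation_index(band_snr_db):
    # band_snr_db: average speech level minus noise level in each of 20 bands
    assert len(band_snr_db) == 20
    total = 0.0
    for snr in band_snr_db:
        # Fraction of the 30-dB range (peaks at snr+12, minima at snr-18) above the noise
        credit = min(max(snr + 12.0, 0.0), 30.0) / 30.0
        total += 0.05 * credit  # each band contributes up to 5 percent
    return total

# Noise coinciding with the average speech spectrum (S/N unity) in every band:
print(articulation_index([0.0] * 20))  # 0.4, as in the example above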

FIGURE 20.2.2 The directional characteristics of the human voice in a horizontal plane passing through the mouth.


Speech Peak Clipping

Speech waves are often affected inadvertently by electronic-circuit performance deficiencies or limitations. Figure 20.2.9 illustrates two types of amplitude distortion, center clipping and peak clipping. Center clipping, often caused by improper balancing or biasing of a push-pull amplifier circuit, can greatly interfere with speech quality and intelligibility. In a normal speech spectrum the consonant sounds are higher in frequency and lower in level than the vowel sounds; center clipping tends to remove the important consonants.

By contrast, peak clipping has little effect on speech intelligibility as long as ambient noise at the talker and system electronic noise are relatively low in level compared with the speech. Peak clipping is frequently used intentionally in speech-communication systems to raise the average transmitted speech level above ambient noise at the listener or to increase the range of a radio transmitter of limited power. This can be done simply by overloading an amplifier stage. However, it is safer for the circuits, and it produces less intermodulation distortion, when back-to-back diodes are used for clipping ahead of the overload point in the amplifier or transmitter.

Figure 20.2.10 shows intelligibility improvement from speech peak clipping when the talker is in quiet and the listeners are in noise. Figure 20.2.11 shows that caution is necessary when the talker is in noise, unless the microphone is shielded or is a noise-canceling type. Tilting the speech spectrum by differentiation and flattening it by equalization are effective preemphasis treatments before peak clipping. Both methods put the consonant and vowel sounds into a more balanced relationship before the intermodulation effects of clipping affect voiced consonants.

Caution must be used in combining different forms of speech-wave distortion, which individually have innocuous effects on intelligibility but can be devastating when they are combined.

FIGURE 20.2.3 Effects on the spectrum of the laryngeal tone produced by the resonances of the vocal tract.5
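The two distortions of Fig. 20.2.9 are easy to mimic numerically. The center-clipper definition used here is one common choice, and the test waveform is merely illustrative (a strong low-frequency “vowel” plus a weak high-frequency “consonant”):

import numpy as np

def peak_clip(x, t):
    # Limit excursions beyond +/- t (peak clipping, Fig. 20.2.9)
    return np.clip(x, -t, t)

def center_clip(x, t):
    # Remove the low-level center of the wave (center clipping, Fig. 20.2.9):
    # zero inside +/- t, offset toward zero outside
    return np.where(np.abs(x) > t, x - t * np.sign(x), 0.0)

time = np.linspace(0.0, 0.01, 480, endpoint=False)
wave = np.sin(2 * np.pi * 200 * time) + 0.1 * np.sin(2 * np.pi * 3000 * time)
print(peak_clip(wave, 0.5).max())  # 0.5: the strong component is limited
quiet = np.abs(wave) < 0.5
print(np.all(center_clip(wave, 0.5)[quiet] == 0.0))  # True: low-level detail is removed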

FIGURE 20.2.4 Phonetic symbols, shapes of vocal tract, models, and acoustic spectra for three vowels.6


MUSICAL SOUNDS

Musical Frequencies

The accuracy of both absolute and relative frequencies is usually much more important for musical sounds than for speech sounds and noise. The international frequency standard for music is defined as 440.00 Hz for A4, the A above C4 (middle C) on the musical keyboard. In sound recording and reproduction, the disc-rotation and tape-transport speeds must be held correct within 0.2 or 0.3 percent error (including both recording and playback mechanisms) to be fully satisfactory to musicians.

The mathematical musical scale is based on an exact octave ratio of 2:1. The subjective octave slightly exceeds this, and piano tuning sounds better when the scale is stretched very slightly. The equally tempered scale of 12 equal ratios within each octave is an excellent compromise among the different historical scales based on harmonic ratios. It has become the standard of reference, even for individual musical performances, which may deviate from it for artistic or other reasons.

Different musical instruments play over different ranges of fundamental frequency, shown in Fig. 20.2.12. However, most musical sounds have many harmonics that are audibly significant to their tone spectra. Consequently high-fidelity recording and reproduction need a much wider frequency range.

FIGURE 20.2.5 The center frequencies of the first two formants for the sustained English vowels plotted to show the characteristic differences.7

Sound Levels of Musical Instruments

The sound level from a musical instrument varies with the type of instrument, the distance from it, which note in the scale is being played, the dynamic marking in the printed music, the player’s ability, and (on polyphonic instruments) the number of notes (and stops) played at the same time.
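Returning to the equally tempered scale described above, note frequencies follow directly from the exact 2:1 octave and the 440-Hz A4 standard; a brief Python check (the function name is illustrative):

def note_frequency(semitones_from_a4, a4_hz=440.0):
    # Equally tempered scale: 12 equal ratios per exact 2:1 octave, A4 = 440.00 Hz
    return a4_hz * 2.0 ** (semitones_from_a4 / 12.0)

print(note_frequency(0))   # 440.0 Hz (A4)
print(note_frequency(-9))  # ~261.63 Hz (C4, middle C)
print(note_frequency(12))  # 880.0 Hz (A5, one octave up)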

FIGURE 20.2.6 Syllable articulation score vs. low- or high-pass cutoff frequency.8


FIGURE 20.2.7 Speech area, bounded by speech peak and minimum spectrum-level curves, plotted on an articulation-index calculation chart.9


FIGURE 20.2.8 Sentence- and word-intelligibility prediction from calculated articulation index.10

Orchestral Instruments. The following sound levels are typical at a distance of 10 ft in a nonreverberant room. Soft (pianissimo) playing of a weaker orchestral instrument, e.g., violin, flute, bassoon, produces a typical sound level of 55 to 60 dB. Fortissimo playing on the same instrument raises the level to about 70 to 75 dB. Louder instruments, e.g., trumpet or tuba, range from 75 dB at pianissimo to about 90 dB at fortissimo. Certain instruments have exceptional differences in sound level of low and high notes. A flute may change from 42 dB on a soft low note to 77 dB on a loud high note, a range of 35 dB. The French horn ranges from 43 dB (soft and low) to 93 dB (loud and high). Sound levels are about 10 dB higher at 3 ft (inverse-square law) and 20 dB higher at 1 ft. The louder instruments, e.g., brass, at closer distances may overload some microphones and preamplifiers.
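The inverse-square spacing of these level figures can be checked directly; a small sketch with illustrative names, reproducing the roughly 10-dB change between 10 ft and 3 ft quoted above:

import math

def level_at(level_db, d_ref, d):
    # Inverse-square law: level changes by -20 log10(d/d_ref) dB
    return level_db - 20.0 * math.log10(d / d_ref)

# 75 dB measured at 10 ft (e.g., a loud passage on a trumpet):
print(level_at(75.0, 10.0, 3.0))  # ~85.5 dB ("about 10 dB higher at 3 ft")
print(level_at(75.0, 10.0, 1.0))  # 95 dB ("20 dB higher at 1 ft")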

FIGURE 20.2.9 Two types of amplitude distortion of speech waveform.5

FIGURE 20.2.10 Advantages of peak clipping of noise-free speech waves, heard by listeners in ambient aircraft noise.10


Percussive Instruments. The sound levels of shock-excited tones are more difficult to specify because they vary so much during decay and can be excited over a very wide range. A bass drum may average over 100 dB during a loud passage with peaks (at 10 ft) approaching 120 dB. By contrast a triangle will average only 70 dB with 80-dB peaks. A single tone of a grand piano played forte will initially exceed 90 dB near the piano rim, 80 dB at the pianist, and 70 dB at the conductor 10 to 15 ft away. Large chords and rapid arpeggios will raise the level about 10 dB.

FIGURE 20.2.11 Effects of speech clipping with both the talker and the listener in simulated aircraft noise. Note that excessive peak clipping is detrimental.10

Instrumental Groups. Orchestras, bands, and polyphonic instruments produce higher sound levels since many notes and instruments (or stops) are played together. Their sound levels are specified at larger distances than 10 ft because the sound sources occupy a large area; 20 ft from the front of a 75-piece orchestra the sound level will average about 85 to 90 dB with peaks of 105 to 110 dB. A full concert band will go higher. At a similar distance from the sound sources of an organ (pipe or electronic), the full-organ (or crescendo-pedal) condition will produce a level of 95 to 100 dB. By contrast the softest stop with expression shutters closed may be 45 dB or less.

Growth and Decay of Musical Sounds

These characteristics are quite different for different instruments. Piano or guitar tones quickly rise to an initial maximum, then gradually diminish until the strings are damped mechanically. Piano tones have a more rapid decay initially than later in the sustained tone. Orchestral instruments can start suddenly or smoothly, depending on the musician’s technique, and they damp rather quickly when playing ceases. Room reverberation affects both growth and decay rates when the time constants of the room are greater than those of the instrument vibrators. This is an important factor in organ music, which is typically played in a reverberant environment.

Many types of musical tone have characteristic transients which influence timbre greatly. In the “chiff” of organ tone the transients are of different fundamental frequency; they appear and decay before steady state is reached. In percussive tones the initial transient is the cause of the tone (often a percussive noise), and the final transient is the result. These transient effects should be considered in the design of audio electronics such as “squelch,” automatic gain control, compressor, and background-noise-reduction circuits.

Spectra of Musical Instrument Tones

Figure 20.2.13 displays time-averaged spectra for a 75-piece orchestra, a theater pipe organ, a piano, and a variety of orchestral instruments, including members of the brass, woodwind, and percussion families. These vary from one note to another in the scale, from one instant to another within a single tone or chord, and from one instrument or performer to another. For example, a concert organ voiced in a baroque style would have lower spectrum levels at low frequencies, and higher at high frequencies, than the theater organ shown. The organ and bass drum have the most prominent low-frequency output. The cymbal and snare drum are strongest at very high frequencies.

The orchestra and most of the instruments have spectra which diminish gradually with increasing frequency, especially above 1000 Hz. This is what has made it practical to preemphasize the high-frequency components, relative to those at low frequencies, in both disc and tape recording. However, instruments that differ from this spectral tendency, e.g., coloratura sopranos, piccolos, and cymbals, create problems of intermodulation distortion and overload.

Spectral peaks occurring only occasionally, for example, 1 percent of the time, are often more important to sound recording and reproduction than the peaks in the average spectra of Fig. 20.2.13.


FIGURE 20.2.12 Range of the fundamental frequencies of voices and various musical instruments. (Ref. 8).

The frequency ranges shown in Table 20.2.1 have been found to have relatively large instantaneous peaks for the instruments listed.

Directional Characteristics of Musical Instruments

Most musical instruments are somewhat directional. Some are highly so, with well-defined symmetry, e.g., around the axis of a horn bell. Other instruments are less directional because the sound source is smaller than the wavelength, e.g., the clarinet and flute. The mechanical vibrating system of bowed string instruments is complex, operating differently in different frequency ranges and resulting in extremely variable directivity. This is significant for orchestral seating arrangements both in concert halls and in recording studios.


FIGURE 20.2.13 Time-averaged spectra of musical instruments.


TABLE 20.2.1 Frequency Bands Containing Instantaneous Spectral Peaks

Band limits, Hz     Instruments
20–60               Theater organ
60–125              Bass drum, bass viol
125–250             Small bass drum
250–500             Snare drum, tuba, bass saxophone, French horn, clarinet, piano
500–1,000           Trumpet, flute
2,000–3,000         Trombone, piccolo
5,000–8,000         Triangle
8,000–12,000        Cymbal

Audible Distortions of Musical Sounds

The quality of musical sounds is more sensitive to distortion than the intelligibility of speech. A chief cause is that typical music contains several simultaneous tones of different fundamental frequency, in contrast to the typical speech sound of one voice at a time. Musical chords subjected to nonlinear amplification or transduction generate intermodulation components that appear elsewhere in the frequency spectrum. Difference tones are more easily heard than summation tones, because the summation tones are often hidden by harmonics that were already present in the undistorted spectrum, and because auditory masking of a high-frequency pure tone by a lower-frequency pure tone is much greater than vice versa. When a critical listener controls the sounds heard (e.g., an organist playing an electronic organ on a high-quality amplification system) and has unlimited opportunity and time to listen, even lower distortion (0.2 percent, for example) can be perceived.
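The frequencies at which such intermodulation components land follow from the tone frequencies alone; a small sketch, assuming a simple power-law nonlinearity so that products appear at |m·f1 ± n·f2| (amplitudes are not modeled):

def intermodulation_products(f1, f2, order=3):
    # Difference and summation tones at |m*f1 - n*f2| and m*f1 + n*f2
    products = set()
    for m in range(1, order):
        for n in range(1, order):
            if m + n <= order:
                products.add(abs(m * f1 - n * f2))
                products.add(m * f1 + n * f2)
    return sorted(products)

# A musical third, 400 and 500 Hz, through a cubic nonlinearity:
print(intermodulation_products(400.0, 500.0))
# [100.0, 300.0, 600.0, 900.0, 1300.0, 1400.0] - the 100-Hz difference tone is easily heard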

REFERENCES

1. Miller, G. A., “Language and Communication,” McGraw-Hill, 1951.
2. Dunn, H. K., J. Acoust. Soc. Am., 1950, Vol. 22, p. 740.
3. Potter, R. K., and G. E. Peterson, J. Acoust. Soc. Am., 1948, Vol. 20, p. 528.
4. French, N. R., and J. C. Steinberg, J. Acoust. Soc. Am., 1947, Vol. 19, p. 90.
5. Beranek, L. L., “Acoustics,” Acoustical Society of America, 1986.
6. Hawley, M. E., and K. D. Kryter, “Effects of Noise on Speech,” Chap. 9 in C. M. Harris (ed.), “Handbook of Noise Control,” McGraw-Hill, 1957.
7. Olson, H. F., “Musical Engineering,” McGraw-Hill, 1952.
8. Olson, H. F., “Elements of Acoustical Engineering,” Van Nostrand, 1947.
9. Sivian, L. J., H. K. Dunn, and S. D. White, IRE Trans. Audio, 1959, Vol. AU-7, p. 47; revision of paper in J. Acoust. Soc. Am., 1931, Vol. 2, p. 33.
10. Hawley, M. E., and K. D. Kryter, “Effects of Noise on Speech,” in C. M. Harris (ed.), “Handbook of Noise Control,” McGraw-Hill, 1957.


CHAPTER 20.3

MICROPHONES, LOUDSPEAKERS, AND EARPHONES

Daniel W. Martin

MICROPHONES

Sound-Responsive Elements

The sound-responsive element in a microphone may have many forms (Fig. 20.3.1). It may be a stretched membrane (a), a clamped diaphragm (b), or a magnetic diaphragm held in place by magnetic attraction (c). In these the moving element is either an electric or magnetic conductor, and the motion of the element creates the electric or magnetic equivalent of the sound directly. Other sound-responsive elements are straight (d) or curved (e) conical diaphragms with various shapes of annular compliance rings, as shown. The motion of these diaphragms is transmitted by a drive rod from the conical tip to a mechanical transducer below. Other widely used elements are a circular piston (f) bearing a circular voice coil of smaller diameter and a corrugated-ribbon conductor (g) of extremely low mass and stiffness suspended in a magnetic field.

Transduction Methods

Microphones use a great variety of transduction methods, shown in Fig. 20.3.2. The loose-contact transducer (Fig. 20.3.2a) was the first, achieved by Bell in magnetic form and later made practical by Edison’s use of carbonized hard-coal particles. It is widely used in telephones. Its chief advantage is its self-amplifying function, in which diaphragm amplitude variations directly produce electric resistance and current variations. Disadvantages include noise, distortion, and instability.

Moving-iron transducers have great variety, ranging from the historic pivoted armature (Fig. 20.3.2b) to the modern ring armature driven by a nonmagnetic diaphragm (Fig. 20.3.2h). In all these types a coil surrounds some portion of the magnetic circuit. The reluctance of the magnetic circuit is varied by motion of the sound-responsive element, which either is moving iron itself (Fig. 20.3.2c and d) or is coupled mechanically to the moving iron (Fig. 20.3.2e–h). In some of the magnetic circuits that portion of the armature surrounded by the coil carries very little steady flux, operating on differential magnetic flux only. Output voltage is proportional to moving-iron velocity.

Electrostatic transducers (Fig. 20.3.2i) use a polarizing potential and depend on capacitance variation between the moving diaphragm and a fixed electrode for generation of a corresponding potential difference. The electret microphone is a special type of electrostatic microphone that holds polarization indefinitely without continued application of a polarizing potential, an important practical advantage for many applications.


FIGURE 20.3.1 Sound-responsive elements in microphones.1

Piezoelectric transducers (Fig. 20.3.2j) create an alternating potential through the flexing of crystalline elements which, when deformed, generate a charge difference proportional to the deformation on opposite surfaces. Because of climatic effects and high electric impedance, the Rochelle salt commonly used for many years has been superseded by polycrystalline ceramic elements and by piezoelectric polymers.

Moving-coil transducers (Fig. 20.3.2k) generate potential by oscillation of the coil within a uniform magnetic field. The output potential is proportional to coil velocity.

FIGURE 20.3.2 Microphone transduction methods.1


FIGURE 20.3.3 Equivalent basic elements in electrical, acoustical, and mechanical systems.2

FIGURE 20.3.4 Helmholtz resonator in (a) perspective and (b) in section and (c) equivalent electric circuit.2

Equivalent Circuits

Electronics engineers understand electroacoustic and electromechanical design better with the help of equivalent, or analogous, electric circuits. Microphone design provides an ideal base for the introduction of equivalent circuits because microphone dimensions are small compared with acoustical wavelengths over most of the audio-frequency range. This allows the assumption of lumped circuit constants.

Figure 20.3.3 shows equivalent symbols for the three basic elements of electrical, acoustical, and mechanical systems. In acoustical circuits the resistance is air friction or viscosity, which occurs in porous materials or narrow slots. Radiation resistance is another form of acoustical damping. Mechanical resistance is friction. Mass in the mechanical system is analogous to electric inductance. The acoustical equivalent is the mass of air in an opening or constriction divided by the square of its cross-sectional area. The acoustical analog of electric capacitance and mechanical-spring compliance is acoustical capacitance. It is the inverse of the stiffness of an enclosed volume of air under pistonlike action. Acoustical capacitance is proportional to the volume enclosed.

Figure 20.3.4 is an equivalent electric circuit for a Helmholtz resonator. Sound pressure and air-volume current are analogous to electric potential and current, respectively. Other analog systems have been proposed; one frequently used has advantages for mechanical systems.
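For the resonator of Fig. 20.3.4, the equivalent L-C circuit (neck inertance against cavity capacitance) predicts a resonance at f0 = (c/2π)·sqrt(A/(V·l)); a quick numerical check, ignoring the end correction that effectively lengthens the neck (the example dimensions are assumptions):

import math

def helmholtz_f0(volume_m3, neck_area_m2, neck_length_m, c=343.0):
    # Resonance of the acoustical L-C circuit of Fig. 20.3.4
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area_m2 / (volume_m3 * neck_length_m))

# Assumed example: a 1-liter cavity with a 1-cm2, 2-cm-long neck
print(helmholtz_f0(1e-3, 1e-4, 2e-2))  # ~122 Hz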

Microphone Types and Equivalent Circuits

Different types of microphone respond to different properties of the acoustical input wave. Moreover, the electric output can be proportional to different internal mechanical variables.

Pressure Type, Displacement Response. Figure 20.3.5 shows a microphone responsive to the sound-pressure wave acting through a resonant acoustical circuit upon a resonant diaphragm coupled to a piezoelectric element responsive to displacement. (The absence of sound ports in the case or in the diaphragm keeps the microphone pressure responsive.) In the equivalent circuit the sound pressure is the generator.

FIGURE 20.3.5 Pressure microphone, displacement response.1


FIGURE 20.3.6 Pressure microphone, velocity response.1

La and Ra represent the radiation impedance; Ls and Rs are the inertance and acoustical resistance of the holes; Cs is the capacitance of the volume in front of the diaphragm; Lm, Cm, and Rm are the mass, compliance, and resistance of the piezoelectric element and diaphragm lumped together; and Cb is the capacitance of the entrapped back volume of air. The electric output is the potential difference across the piezoelectric element. It is shown across the capacitance in the equivalent circuit because microphones of this type are designed to be stiffness-controlled throughout most of their operating range.

Pressure Type, Velocity Response. Figure 20.3.6 shows a moving-coil pressure microphone, which is a velocity-responsive transducer. In this microphone three acoustical circuits lie behind the diaphragm. One is behind the dome and another behind the annular rings. The third acoustical circuit lies beyond the acoustical resistance at the back of the voice-coil gap and includes a leak from the back chamber to the outside. This microphone is resistance-controlled throughout most of the range, but at low frequencies its response is extended by the resonance of the third acoustical circuit. Output potential is proportional to the velocity of voice-coil motion.

Pressure-Gradient Type, Velocity Response. When both sides of the sound-responsive element are open to the sound wave, the response is proportional to the gradient of the pressure wave. Figure 20.3.7 shows a ribbon conductor in a magnetic field with both sides of the ribbon open to the air. In the equivalent circuit there are two generators, one for the sound pressure on each side. Radiation resistance and reactance are in series with each generator and the circuit constants of the ribbon. Usually the ribbon resonates at a very low frequency, making its mechanical response mass-controlled throughout the audio-frequency range. The electric output is proportional to the conductor velocity in the magnetic field. Gradient microphones respond differently to distant and close sound sources.

FIGURE 20.3.7 Gradient microphone, velocity response.1

Directional Patterns and Combination Microphones

Because of diffraction, a pressure microphone is equally responsive to sound from all directions as long as the wavelength is larger than the microphone dimensions (see Fig. 20.3.8a). (At high frequencies it is somewhat directional along the forward axis of diaphragm or ribbon motion.) By contrast, a pressure-gradient microphone has a figure-eight directional pattern (Fig. 20.3.8b), which rotates about the axis of ribbon or diaphragm motion. A sound wave approaching a gradient microphone at 90° from the axis produces balanced pressure on the two sides of the ribbon and consequently no response.


FIGURE 20.3.8 Directional patterns of microphones: (a) nondirectional; (b) bidirectional; (c) unidirectional.2

This defines the null plane of a gradient microphone. Outside this plane the microphone response follows a cosine law. If pressure and gradient microphones are combined in close proximity (see Fig. 20.3.9) and are connected electrically to add in equal (half-and-half) proportions, a heart-shaped cardioid pattern (Fig. 20.3.8c) is obtained. (The back of the ribbon in the pressure microphone is loaded by an acoustical resistance line.) By combining the two outputs in other proportions, other limaçon directional patterns can be obtained.
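The whole limaçon family described above can be written as r(θ) = A + B·cos θ with A + B = 1; a short Python sketch (illustrative names):

import math

def limacon(theta_deg, pressure_fraction):
    # Relative response of a pressure + gradient combination:
    # A = 1 is omnidirectional, A = 0 the figure eight,
    # A = 0.5 the cardioid of Fig. 20.3.8c
    a = pressure_fraction
    return a + (1.0 - a) * math.cos(math.radians(theta_deg))

for theta in (0, 90, 180):
    print(theta, limacon(theta, 0.5))  # 1.0, 0.5, 0.0 for the cardioid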

Phase-Shift Directional Microphones

Directional characteristics similar to those of the combination microphones can also be obtained with a single moving element by means of acoustical phase-shift networks, designed with the aid of equivalent-circuit analysis. Figure 20.3.10 shows a moving-coil phase-shift microphone and its simplified equivalent circuit. The phase-shift network is composed of the rear-port resistance R2 and inertance L2, the capacitance of the volume under the diaphragm and within the magnet, and the impedance of the interconnecting screen. The microphone has a cardioid directional pattern.

Special-Purpose Microphones

Special-purpose microphones include two types that are superdirectional, two that overcome noise, and one without cables.

Line microphones use an approximate line of equally spaced pickup points connected through acoustically damped tubes to a common microphone diaphragm. The phase relationships at these points for an incident plane wave combine to give a sharply directional pattern along the axis if the line segment is at least one wavelength.

Parabolic microphones face a pressure microphone unit toward a parabolic reflector at its focal point, where sounds from distant sources along the axis of the parabola converge. They are effective for all wavelengths smaller than the diameter of the reflector.

Noise-canceling microphones are gradient microphones in which the mechanical system is designed to be stiffness-controlled rather than mass-controlled. For distant sound sources the resulting response is greatly attenuated at low frequencies. However, for a very close sound source, the response-frequency characteristic is uniform because the gradient of the pressure wave near a point source decreases with increasing frequency. Such a microphone provides considerable advantage for nearby speech over distant noise on the axis of the microphone.

FIGURE 20.3.9 Combination unidirectional microphone.1

FIGURE 20.3.10 Phase-shift unidirectional microphone.1


Contact microphones are used on string and percussion musical instruments, on seismic-vibration detectors, and for pickup of body vibrations, including speech. The throat microphone was noted for its convenience and its rejection of airborne noise. Most types of throat microphone are inertia-operated, the case receiving vibration from the throat walls actuated by speech sound pressure in the throat. The disadvantage is a deficiency of the speech sibilant sounds, received back in the throat from the mouth.

Wireless microphones have obvious operational advantages over those with microphone cords. A wireless microphone contains a small, low-power radio transmitter with a nearby receiver connected to an audio communication system. Any of the microphone types can be so equipped. The potential disadvantage is in rf interference and field effects.

Microphone Use in Recordings

The choice of microphone type and placement greatly affects the sound of a recording. For speech and dialogue recordings pressure microphones are usually placed near the speakers in order to minimize ambient-noise pickup and room reverberation. Remote pressure microphones are also used when a maximum room effect is desired. In the playback of monophonic recordings room effects are more noticeable than they would have been to a listener standing at the recording microphone position, because single-microphone pickup is similar to single-ear (monaural) listening, in which the directional clues of localization are lost. Therefore microphones generally need to be closer in a monophonic recording than in a stereophonic recording. In television pickup of speech, where a boom microphone should be outside the camera angle, unidirectional microphones are often used because of their greater ratio of direct to generally reflected sound response.

Both velocity (gradient) microphones and unidirectional microphones can be used to advantage in broadcasting and recording. Figure 20.3.11a shows how instruments may be placed around a figure-eight directivity pattern to balance weaker instruments 2 and 5 against stronger instruments 1 and 3, with a potential noise source at point 4. In Fig. 20.3.11b source 2 is favored, with sources 1 and 3 somewhat reduced and source 4 highly discriminated against by the cardioid directional pattern. In Fig. 20.3.11c an elevated unidirectional microphone aimed downward responds uniformly to sources on a circle around the axis while discriminating against mechanical noises at ceiling level. Figure 20.3.11d places the camera noise in the null plane of a figure-eight pattern, and Fig. 20.3.11e shows a similar use for the unidirectional microphone. Camera position is less critical for the cardioid microphone than for the gradient microphone.

FIGURE 20.3.11 Use of directional microphones.2

Early classical stereo recordings used variations of two basic microphone arrangements. In one scheme two unidirectional microphones were mounted close together with their axes angled toward opposite ends of the sound field to be recorded. This retained approximately the same arrival time and phase at both microphones, depending chiefly on the directivity patterns to create the sound difference in the two channels. In the second scheme the two microphones (not necessarily directional) were separated by distances of 5 to 25 ft, depending on the size of the sound field to be recorded. Microphone axes (if directional) were again directed toward the ends of the sound field or group of sound sources. In this arrangement the time-of-arrival and phase differences were more important, and the effect of directivity was lessened. Each approach had its advantages and disadvantages.


With the arrival of tape recorders having many channels, a trend has developed toward the use of more microphones and closer microphone placement. This offers much greater flexibility in mixing and rerecording, and it largely removes the effect of room reverberation from the recording. This may be either an advantage or a disadvantage, depending on the viewpoint. Reverberation can be added later.

In sound-reinforcement systems for dramatic productions and orchestras the use of many microphones again offers operating flexibility. However, it also increases the probability of operating error, increased system noise, and acoustical feedback, making expert monitoring and mixing of the microphone outputs necessary. An attractive alternative for multimicrophone audio systems is the use of independent voice-operated electronic control switches in each microphone channel amplifier, in combination with an automatic temporary reduction of overall system gain as more channels switch on, in order to prevent acoustical feedback. Automatic mixers have been devised to minimize speech signal dropouts and to prevent the inadvertent operation of channel control switches by background noises.

Microphone Mounting

On podiums and lecterns microphones are typically mounted on fixed stands with adjustable arms. On stages they are mounted on adjustable floor stands. In mobile communication and in other situations where microphone use is occasional, handheld microphones are used during communication and are stowed on hangers at other times. For television and film recording, where the microphone must be out of camera sight, the microphones are usually mounted on booms overhead and are moved about during the action to obtain the best speech-to-noise ratio possible at the time. In two-way communication situations which require the talker to move about or to turn his head frequently, the microphone can be mounted on a boom fastened to his headset. This provides a fixed close-talking microphone position relative to the mouth, a considerable advantage at high ambient-noise levels.

Microphone Accessories

Noise shields are needed for microphones in ambient noise levels exceeding 110 dB. Noise shields are quite effective at high frequencies, where the random-noise discrimination of noise-canceling microphones diminishes. Noise shields and noise-canceling microphones complement each other.

Windscreens are available for microphone use in airstreams or turbulence. Without them, aerodynamically induced noise is produced by turbulence at the microphone grille or openings. Large windscreens are more effective than small ones because they move the turbulence region farther from the microphone.

Special sponge-rubber mountings for the microphone and cable, which reduce extraneous vibration of the microphone, are often used. Many microphone stands and booms have optional suspension mounting accessories to reduce shock and vibration transmitted through the stand or boom to the microphone.

Special Properties of Microphones

The source impedance of a microphone is important not only to the associated preamplifier but also to the allowable length of microphone cable and the type and amount of noise picked up by the cable. High-impedance microphones (10 kΩ or more) cannot be used more than a few feet from the preamplifier without pickup from stray fields. Microphones having an impedance of a few ohms or less are usually equipped with step-up transformers to provide a line impedance in the range of 30 to 600 Ω, which extensive investigation has established as the most noise-free line-impedance range.

The microphone unit itself can be responsive to hum fields at power-line frequencies unless special design precautions are taken. Most microphones have a hum-level rating based on measurement in a standard alternating magnetic field. For minimum electrical noise, balanced and shielded microphone lines are used, with the shield grounded only at the amplifier end of the line.


Microphone linearity should be considered when the sound level exceeds 100 dB, a frequent occurrence for loud musical instruments and even for close speech. Close-talking microphones, especially of the gradient type, are particularly susceptible to noise from breath and plosive consonants.

Specifications

Microphone specifications typically include many of the following items: type or mode of operation, directivity pattern, frequency range, uniformity of response within the range, output level at one or more impedances for a standard sound-pressure input (for example, 1 Pa or 10 dyn/cm2), recommended load impedance, hum output level for a standard magnetic field (for example, 10^–3 G), dimensions, weight, finish, mounting, power supply (if necessary), and accessories.
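The “output level for a standard 1-Pa input” item is commonly expressed in dB re 1 V/Pa; a small conversion sketch (the example sensitivities are assumed, typical-order-of-magnitude values, not taken from this handbook):

import math

def sensitivity_dbv(mv_per_pa):
    # Output level re 1 V for the standard 1-Pa (~94-dB SPL) input
    return 20.0 * math.log10(mv_per_pa * 1e-3)

print(sensitivity_dbv(2.0))   # ~-54 dB re 1 V/Pa
print(sensitivity_dbv(16.0))  # ~-36 dB re 1 V/Pa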

LOUDSPEAKERS

Introduction

A loudspeaker is an electroacoustic transducer intended to radiate acoustic power into the air, with the acoustic waveform equivalent to the electrical input waveform. An earphone is an electroacoustic transducer intended to be closely coupled acoustically to the ear. Both the loudspeaker and the earphone are receivers of audio-electronic signals. The principal distinction between them is the acoustical loading: an earphone delivers sound to the air in the ear, whereas a loudspeaker delivers sound indirectly to the ear through the air.

The transduction methods of loudspeakers and earphones are historically similar and are treated together. An overview of loudspeaker developments in the closing 50 years of the last millennium is given by Gander.3 However, since loudspeakers operate primarily into radiation resistance and earphones into acoustical capacitance, the design, measurement, and use of the two types of electroacoustic transducers will be discussed separately.

Transduction Methods

Early transducers for sound reproduction were of the mechanoacoustic type. Vibrations received by a stylus in the undulating groove of a record were transmitted to a diaphragm placed at the throat of a horn for better acoustical impedance matching to the air, all without the aid of electronics. Electroacoustics and electronics introduced many advantages and a variety of transduction methods, including moving-coil, moving-iron, electrostatic, magnetostrictive, and piezoelectric (Fig. 20.3.12). Most loudspeakers are of the moving-coil type today, although moving-iron transducers were once widely used. Electrostatic loudspeakers are used chiefly in the upper range of audio frequencies, where amplitudes are small. Magnetostrictive and piezoelectric loudspeakers are used for underwater sound. All the transducer types are used in earphones except magnetostrictive.

FIGURE 20.3.12 Loudspeaker (and earphone) transduction methods: (a) moving-coil; (b) moving-iron; (c) electrostatic; (d) magnetostrictive; (e) piezoelectric.4

Moving-Coil. The mechanical force on the moving coil of Fig. 20.3.12a is developed by the interaction of the current in the coil and the transverse magnetic field disposed radially across the gap between the magnet cap and the iron housing, which completes the magnetic circuit. The output force along the axis of the circular coil is applied to a sound radiator.

Moving-iron transducers reverse the mechanical roles of the coil and the iron. The iron armature surrounded by the stationary coil is moved by mechanical forces developed within the magnetic circuit.


Moving-iron magnetic circuits have many forms. As an example, in the balanced-armature system (Fig. 20.3.12b) the direct magnetic flux passes only transversely through the ends of the armature centered within the two magnetic gaps. Coil current polarizes the armature ends oppositely, creating a force moment about the pivot point. The output force is applied from the tip of the armature to an attached sound radiator. In a balanced-diaphragm loudspeaker the armature is the radiator.

Electrostatic. In the electrostatic transducer (Fig. 20.3.12c) there is a dc potential difference between the conductive diaphragm and the stationary perforated plate nearby. Audio signals applied through a blocking capacitor superimpose an alternating potential, resulting in a force upon the diaphragm, which radiates sound directly.

Magnetostrictive transducers (Fig. 20.3.12d) depend on length fluctuations of a nickel rod caused by variations in the magnetic field. The output motion may be radiated directly from the end of the rod or transmitted into the attached mechanical structure.

Piezoelectric transducers are of many forms using crystals or polycrystalline ceramic materials. In simple form (Fig. 20.3.12e) an expansion-contraction force develops along the axis joining the electrodes through alternation of the potential difference between them.

Sound Radiators

The purpose of a sound radiator is to create small, audible air-pressure variations. Whether they are produced within a closed space by an earphone or in open air by a loudspeaker, the pressure variations require air motion or current.

Pistons, Cones, Ports. Expansion and contraction of a sphere is the classical configuration, but most practical examples involve rectilinear motion of a piston, cone, or diaphragm. In addition to the primary direct radiation from moving surfaces, there is also indirect or secondary radiation from enclosure ports or horns to which the direct radiators are acoustically coupled. Attempts have been made to develop other forms of sound radiation, such as oscillating airstreams and other aerodynamic configurations with incidental use, if any, of moving mechanical members.

Directivity. Figure 20.3.13 shows the directional characteristics of a rigid circular piston for different ratios of piston diameter and wavelength of sound. (In three dimensions these curves are symmetrical around the axis of piston motion.) For a diameter of one-quarter wavelength the amplitude decreases 10 percent (approximately 1 dB in sound level) at 90° off axis. For a four-wavelength diameter the same drop occurs in only 5°. (The beam of an actual loudspeaker cone is less sharp than this at high frequencies, where the cone is not rigid.) Note that all the polar curves are smooth when the single-source piston vibrates as a whole.

FIGURE 20.3.13 Directional characteristics of rigid circular pistons of different diameters or at different sound wavelengths.2


FIGURE 20.3.14 Directional characteristics of two equal small in-phase sound sources separated by different distances or different sound wavelengths.2

For a four-wavelength diameter the same drop occurs in only 5°. (The beam of an actual loudspeaker cone is less sharp than this at high frequencies, where the cone is not rigid.) Note that all the polar curves are smooth when the single-source piston vibrates as a whole.

Radiator Arrays. When two separate, identical small sound sources vibrate in phase, the directional pattern becomes narrower than for one source. Figure 20.3.14 shows that for a separation of one-quarter wavelength the two-source beam is only one-half as wide as for a single piston. At high frequencies the directional pattern becomes very complex. (In three dimensions these curves become surfaces of revolution about the axis joining the two sources.) Arrays of larger numbers of sound radiators in close proximity are increasingly directional. Circular-area arrays have narrow beams which are symmetrical about an axis through the center of the circle. Line arrays, e.g., column loudspeakers, are narrowly directional in planes containing the line and broadly directional in planes perpendicular to the line.

Direct-Radiator Loudspeakers

Most direct-radiator loudspeakers are of the moving-coil type because of simplicity, compactness, and inherently uniform response-frequency trend. The uniformity results from the combination of two simple physical principles: (1) the radiation resistance increases with the square of the frequency, and hence the radiated sound power increases similarly for constant velocity amplitude of the piston or cone; (2) for a constant applied force (voice-coil current) the mass-controlled (above resonance) piston has a velocity whose square decreases with the square of the frequency. Consequently a loudspeaker designed to resonate at a low frequency combines decreasing velocity with increasing radiation resistance to yield a uniform response within the frequency range where the assumptions hold.
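The single-piston curves of Fig. 20.3.13 follow from the classical rigid-piston-in-an-infinite-baffle model, whose far-field pattern is 2J1(ka sin θ)/(ka sin θ). The sketch below (a transcription of that textbook model, requiring SciPy; it is an illustration, not the handbook's own computation) reproduces the off-axis drops quoted above:

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def piston_level_db(theta_deg, d_over_lambda):
    """Far-field level (dB re on-axis) of a rigid circular piston of
    diameter d in an infinite baffle, at wavelength lambda."""
    x = np.pi * d_over_lambda * np.sin(np.radians(theta_deg))  # k*a*sin(theta)
    return 0.0 if x == 0 else 20 * np.log10(abs(2 * j1(x) / x))

# Quarter-wavelength diameter: only ~1 dB down even at 90 degrees off axis
print(f"{piston_level_db(90, 0.25):.2f} dB")
# Four-wavelength diameter: a comparable drop occurs within about 5 degrees
print(f"{piston_level_db(5, 4.0):.2f} dB")
```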

FIGURE 20.3.15 (a) Structure, (b) electric circuit, and (c) equivalent mechanical circuit for a direct-radiator moving-coil loudspeaker in a baffle.4

Equivalent Electric Circuits. Figure 20.3.15 shows a cross-sectional view of a direct-radiator loudspeaker mounted in a baffle, the electric voice-coil circuit, and the equivalent electric circuit of the mechanoacoustic system. In the voice-coil circuit e is the emf and REG the resistance of the generator (e.g., the power-amplifier output); L and REC are the inductance and resistance of the voice coil. ZEM is the motional electric impedance arising from the mechanoacoustic system.


FM is the driving force resulting from interaction of the voice-coil current field with the gap magnetic field. MC is the combined mass of the cone and voice coil. CMS is the compliance of the cone-suspension system. RMS is the mechanical resistance. The mass MA and radiation resistance RMA of the air load complete the circuit. Figure 20.3.16 summarizes these mechanical impedance factors for a 4-in direct-radiator loudspeaker of conventional design. Above resonance (where the reactance of the suspension system equals the reactance of the cone-coil combination) the impedance-frequency characteristic is dominated by MC. From the resonance frequency of about 150 Hz to about 1500 Hz the conditions for uniform response hold.

FIGURE 20.3.16 Components of the mechanical impedance of a typical 4-in loudspeaker.4

Efficiency. Since RMA is small compared to the magnitudes of the reactive components, the efficiency of the loudspeaker in this frequency range can be expressed as

Efficiency = 100 (Bl)^2 RMA / [REC (XMA + XMC)^2]  percent      (1)

where B = gap flux density (G)
      l = voice-coil conductor length (cm)
      REC = voice-coil electric resistance (abohms)
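Equation (1) transcribes directly into code. The sketch below evaluates it for an assumed set of midband values (hypothetical, chosen only to land in the few-percent range typical of direct radiators), keeping the CGS units of the text:

```python
def efficiency_percent(B, l, R_EC, R_MA, X_MA, X_MC):
    """Eq. (1): midband efficiency of a direct-radiator moving-coil
    loudspeaker. B in gauss, l in cm, R_EC in abohms
    (1 ohm = 1e9 abohms); R_MA, X_MA, X_MC in mechanical ohms."""
    return 100 * (B * l) ** 2 * R_MA / (R_EC * (X_MA + X_MC) ** 2)

# Assumed midband values for a small cone driver:
eff = efficiency_percent(B=10_000, l=700, R_EC=7e9,   # ~7-ohm voice coil
                         R_MA=300, X_MA=500, X_MC=6_000)
print(f"Efficiency: {eff:.1f} percent")   # about 5 percent with these values
```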

Since RMA is proportional to the square of the frequency and both XMA and XMC increase in proportion to frequency, the efficiency is theoretically uniform. All this has assumed that the cone moves as a whole. Actually wave motion occurs in the cone. Consequently at high frequencies the mass reactance is somewhat reduced (as shown in the dashed curve of Fig. 20.3.16), tending to improve efficiency beyond the frequency where radiation resistance becomes uniform.

Magnetic Circuit. Most magnets now are of a high-flux, high-coercivity permanent type, either an alloy of aluminum, cobalt, nickel, and iron, or a ferrite of iron, cobalt, barium, and nickel. The magnet may be located in the core of the structure or in the ring, or both. However, magnetization is difficult when magnets are oppositely polarized in the core and ring. Air-gap flux density varies widely in commercial designs, from approximately 3000 to 20,000 G. Since most of the reluctance in the magnetic circuit resides in the air gap, the minimum practical voice-coil clearance in the gap compromises the maximum flux density. Pole pieces of heat-treated soft nickel-iron alloys, dimensionally tapered near the gap, are used for maximum flux density.

Voice Coils. The voice coil is a cylindrical multilayer coil of aluminum or copper wire or ribbon. Aluminum is used in high-frequency loudspeakers for minimum mass and maximum efficiency. Voice-coil impedance varies from 1 to 100 Ω, with 4, 8, and 16 Ω standard. For maximum efficiency the voice-coil and cone masses are equal. However, in large loudspeakers the cone mass usually exceeds the voice-coil mass. Typically the voice-coil mass ranges from tenths of a gram to 5 g or more.

Cones. Cone diameters range from 1 to 18 in. Cone mass varies from tenths of a gram to 100 g or more. Cones are made of a variety of materials. The most common is paper deposited from pulp on a wire-screen form in a felting process. For high-humidity environments cones are molded from plastic materials, sometimes with a cloth or fiber-glass base. Some low-frequency loudspeaker cones are molded from low-density plastic foam to achieve greater rigidity with low density.

So far piston action has been assumed, in which the cone moves as a whole. Actually at high frequencies the cone no longer vibrates as a single unit. Typically there is a major dip in response resulting from quarter-wave reflection from the circular rim of the cone back to the voice coil.


FIGURE 20.3.17 Typical cone and coil design values.5

For loudspeaker cones in the range of 8 to 15 in. diameter this dip usually occurs between 1 and 2 kHz.

Typical Commercial Design Values. Figure 20.3.17 shows typical values for several cone and voice-coil design parameters for a range of loudspeaker diameters. These do not apply to extreme cases, such as high-compliance loudspeakers or high-efficiency horn drivers. The effective piston diameter (Fig. 20.3.17a) is less than the loudspeaker cone diameter because the amplitude falls off toward the edges. A range of resonance frequencies is available for any cone diameter, but Fig. 20.3.17b shows typical values. In Fig. 20.3.17c typical cone mass is M including the voice coil and M′ excluding the voice coil. Figure 20.3.17d shows typical cone-suspension compliance.

Impedance. A major peak results from motional impedance at the primary mechanical resonance. Impedance is usually uniform above this peak until voice-coil inductance becomes dominant over resistance.

Power Ratings. Different types of power rating are needed to express the performance capabilities of loudspeakers. The large range of typical loudspeaker efficiency makes the acoustical power-delivering capacity quite important. The electrical power-receiving capacity (without overload or damage) determines the choice of power amplifier. Loudspeaker efficiencies are seldom measured but are often compared by measuring the sound-pressure level at 4 ft on the loudspeaker axis for 1-W audio input. High-efficiency direct radiators provide 95 to 100 dB. Horn loudspeakers are typically higher by 10 dB or more, being both more efficient and more directional.

Loudspeakers are also rated by the maximum rms power output of amplifiers which will not damage the loudspeaker or drive it into serious distortion on peaks. Such ratings usually assume that the amplifier will seldom be driven to full power. For example, a 30-W amplifier will seldom be required to deliver more than 10 W rms of music program material; otherwise music peaks would be clipped and sound distorted. However, in speech systems for high ambient-noise levels the speech peaks may be clipped intentionally, causing the loudspeaker to receive the full 30 W much of the transmission time. Then the loudspeaker must handle large excursions without mechanical damage to the cone suspension and without destroying the cemented coil or charring the form.

Distortion. Nonlinear distortion in a loudspeaker is inherently low in the mass-controlled range of frequencies. However, distortion is produced by nonlinear cone suspension at low frequencies, voice-coil motion beyond the limits of uniform air-gap flux, Doppler-shift modulation of high-frequency sound by large cone velocity at low frequencies, and nonlinear distortion of the air near the cone at high powers (particularly in horn drivers). Methods for controlling these distortions follow.


1. When a back enclosure is added to a loudspeaker, the acoustical capacitance of the enclosed volume is represented by an additional series capacitor in the mechanical circuit of Fig. 20.3.15. Insufficient volume stiffens the cone acoustically, raising the resonance frequency and limiting the low-frequency range of the loudspeaker. It is convenient to reduce nonlinear distortion at low frequencies by increasing the cone-suspension compliance and depending on the back enclosure to provide the system stiffness. Since an enclosed volume is more linear than most mechanical springs, this lowers low-frequency distortion.

2. Distortion from inhomogeneity of the air-gap flux can be reduced by making the voice-coil length either considerably smaller or larger than the gap width. This stabilizes the total number of lines passing through the coil, but it also reduces loudspeaker efficiency.

3. Doppler distortion can be eliminated only by separating the high and low frequencies in a multiple-loudspeaker system.

4. Air-overload distortion can be avoided by increasing the radiating area.

Loudspeaker Mountings and Enclosures

Figure 20.3.18 shows a variety of mountings and enclosures. An unbaffled loudspeaker is an acoustic doublet for wavelengths greater than the rim diameter. In this frequency range the acoustical power output for constant cone velocity is proportional to the fourth power of the frequency.

Baffles. In order to improve efficiency at low frequencies it is necessary to separate the front and back waves. Figure 20.3.18a is the simplest form of baffle. The effect of different baffle sizes is given in Fig. 20.3.19. Response dips occurring when the acoustic path from front to back is a wavelength are eliminated by irregular baffle shape or off-center mounting (the sketch below estimates these dip frequencies).

Enclosures. The widely used open-back cabinet (Fig. 20.3.18b) is noted for a large response peak produced by open-pipe acoustical resonance. A closed cabinet (Fig. 20.3.18c) adds acoustical stiffness at low frequencies, where the wavelength is larger than the enclosure. At higher frequencies the internal acoustical resonances create response irregularities requiring internal acoustical absorption.
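Since a dip falls where the front-to-back acoustic path equals one wavelength, its frequency is roughly f = c/path. A small sketch for the square-baffle sizes of Fig. 20.3.19, taking the path as approximately the baffle side length (an assumption made here for a center-mounted driver, not a value from the text):

```python
C_AIR = 344.0   # speed of sound in air, m/s

for side_ft in (2, 3, 4, 6):                 # square baffles of Fig. 20.3.19
    path_m = side_ft * 0.3048                # assumed front-to-back path length
    f_dip = C_AIR / path_m                   # dip where path = one wavelength
    print(f"{side_ft}-ft baffle: first dip near {f_dip:.0f} Hz")
```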

FIGURE 20.3.18 Mountings and enclosures for direct-radiator loudspeaker: (a) flat baffle; (b) open-back cabinet; (c) closed cabinet; (d) ported closed cabinet; (e) labyrinth; (f) folded horn.4


FIGURE 20.3.19 Response frequency for loudspeaker in 2-, 3-, 4-, and 6-ft square baffles.6


FIGURE 20.3.20 Response frequency for loudspeaker in closed (A) and ported (B) cabinets.6

Ported Enclosures (Fig. 20.3.18d). Enclosure volume can be minimized without sacrificing low-frequency range by providing an appropriate port in the enclosure wall. The acoustical inertance of the port should resonate with the enclosure capacitance at a frequency about an octave below the cone-resonance frequency. Curve B of Fig. 20.3.20 shows that this extends the low-frequency range. This is most effective when the port area equals the cone-piston area. Port inertance can be increased by using a duct, as in the tuning sketch below. An extreme example of ducting is the acoustical labyrinth (Fig. 20.3.18e). When the ductwork is shaped to increase in cross section gradually, the labyrinth becomes a low-frequency horn (Fig. 20.3.18f).

Direct-radiator loudspeaker efficiency is typically 1 to 5 percent. Small, highly damped types with miniature enclosures may be only 0.1 percent. Transistor amplifiers easily provide the audio power for domestic loudspeakers. However, in auditorium, outdoor, industrial, and military applications much higher efficiency is required.
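The port and enclosure form a Helmholtz resonator, so the tuning follows f = (c/2π)·sqrt(S/(V·L_eff)). The sketch below uses assumed dimensions, and a conventional end correction of about 1.7 port radii added to the physical duct length; none of these numbers come from the text:

```python
import math

C_AIR = 344.0  # speed of sound, m/s

def port_tuning_hz(box_volume_m3, port_area_m2, duct_length_m):
    """Helmholtz resonance of a ported enclosure; the effective duct
    length includes a simple end correction of 1.7 * port radius."""
    radius = math.sqrt(port_area_m2 / math.pi)
    l_eff = duct_length_m + 1.7 * radius
    return (C_AIR / (2 * math.pi)) * math.sqrt(port_area_m2 / (box_volume_m3 * l_eff))

# Assumed example: 50-liter box, 10-cm-diameter port, 12-cm duct
f_b = port_tuning_hz(0.050, math.pi * 0.05 ** 2, 0.12)
print(f"Port tuning: {f_b:.0f} Hz")  # aim roughly an octave below cone resonance
```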

Horn Loudspeakers

Higher efficiency is obtained with an acoustic horn, which is a tube of varying cross section having different terminal areas to provide a change of acoustic impedance. Horns match the high impedance of the dense diaphragm material to the low impedance of the air. Horn shape, or taper, affects the acoustical-transformer response. Conical, exponential, and hyperbolic tapers have been widely used. The potential low-frequency cutoff of a horn depends on its taper rate (see the sketch below). Impedance-transforming action is controlled by the ratio of mouth to throat diameter.

Horn Drivers. Figure 20.3.21 shows horn-driving mechanisms and straight and folded horns of large- and small-throat types. A large-throat driver (Fig. 20.3.21a) resembles a direct-radiator loudspeaker with a voice-coil diameter of 2 to 3 in. and a flux density around 15,000 G. A small-throat driver (Fig. 20.3.21b) resembles a moving-coil microphone structure. Radiation is taken from the back of the diaphragm into the horn throat through passages which deliver in-phase sound from all diaphragm areas. Diaphragm diameters are 1 to 4 in. with throat diameters of 1/4 to 1 in. Flux density is approximately 20,000 G.

Large-Throat Horns. These are used for low-frequency loudspeaker systems. A folded horn (Fig. 20.3.21c) is preferred over a straight horn (Fig. 20.3.21d) for compactness.

Small-Throat Horns. A folded horn (Fig. 20.3.21e) with sufficient length and gradual taper can operate efficiently over a wide frequency range. This horn is useful for outdoor music reproduction in a range of 100 to 5000 Hz. Response smoothness is often compromised by segment resonances. Extended high-frequency range requires a straight-axis horn (Fig. 20.3.21f).
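For the widely used exponential taper S(x) = S_t·e^(mx), standard horn theory puts the cutoff at f_c = mc/(4π). A sketch relating assumed throat and mouth sizes to that cutoff (illustrative dimensions, not those of any driver above):

```python
import math

C_AIR = 344.0  # speed of sound, m/s

def exponential_horn_cutoff_hz(throat_area_m2, mouth_area_m2, length_m):
    """Cutoff of an exponential horn S(x) = S_t * exp(m x): f_c = m*c/(4*pi)."""
    m = math.log(mouth_area_m2 / throat_area_m2) / length_m  # flare constant, 1/m
    return m * C_AIR / (4 * math.pi)

# Assumed example: 2.5-cm throat and 60-cm mouth diameters, 1.5-m axial length
s_throat = math.pi * 0.0125 ** 2
s_mouth = math.pi * 0.30 ** 2
print(f"Cutoff: {exponential_horn_cutoff_hz(s_throat, s_mouth, 1.5):.0f} Hz")
```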


FIGURE 20.3.21 Horns and horn drivers: (a) large-throat driver; (b) small-throat driver; (c) folded large-throat horn; (d) straight large-throat horn; (e) folded small-throat horn; (f) straight small-throat horn.4

Horn Directivity. Large-mouth horns of simple exponential design produce high-directivity radiation that tends to narrow with increasing frequency (as in Fig. 20.3.13). In applications requiring controlled directivity over a broad angle and a wide frequency range, a horn array (shown in Fig. 20.3.22a) can be used, with numerous small horn mouths spread over a spherical surface and throats converging together. Figure 20.3.22b shows the directional characteristics. Single sectoral horns with radial symmetry can provide cylindrical wavefronts with smoother directional characteristics which are controlled in one plane. Recent rectangular or square-mouth “quadric” horns, designed by computer to have different conical expansion rates in horizontal and vertical planes, provide controlled directivity in both planes over a wide frequency range.

Special Loudspeakers

Special types of loudspeakers for limited applications include the following. Electrostatic high-frequency units have an effective spacing of about 0.001 in. between a thin metalized coating on plastic and a perforated metal backplate. This spacing is necessary for sensitivity comparable to moving-coil loudspeakers, but it limits the amplitude and the frequency range. Extension of useful response to the lower frequencies can be obtained with larger spacing, for example, 1/16 in., with a polarizing potential of several thousand volts. This type of unit employs push-pull operation.


FIGURE 20.3.22 Horn array (cellular) and directional characteristics: (a) array; (b) horizontal directional curves.6

Modulated-airflow loudspeakers have an electromechanical mechanism for modulating the airstream from a high-pressure pneumatic source into a horn. Low audio power controls large acoustical power in this system. A compressor is also needed. Nonlinear distortion in the air and reduced speech intelligibility have been limitations of this high-power system.

Loudspeaker Specifications and Measurements

Typical loudspeaker specifications are shown in Table 20.3.1 for a variety of loudspeaker types. Loudspeaker impedance is proportional to the voltage across the voice coil when driven by a high-impedance constant-current source. Continuous power ratings are obtained from sustained life tests with typical audio-program material restricted to the frequency range appropriate for the loudspeaker type.

TABLE 20.3.1 Characteristics of a Variety of Loudspeaker Types

Company                            Altec             Altec                      Bozak                   RCA
Model no.                          775C              1505B horn, 290D driver    CM-109-23               LC1B
Type                               Direct radiator   Cellular horn (3 × 5)      Three-way column        Duo-cone
Sensitivity (at 4 ft for 1 W), dB  95                110                        106                     95
Frequency range, Hz                40–15,000         300–8,000                  65–13,000               25–16,000 (±4 dB)
Impedance, Ω                       8                 4                          8                       15
Power rating, W                    15                100                        200                     20
Distribution angle, deg            90                105 horiz., 60 vert.       90 horiz., 30 vert.     120
Voice-coil diameter, in.           2                 2.8                        (3 sizes)               (2 cones)
Cone resonance, Hz                 52                …                          (3 sizes)               22
Crossover frequency, Hz            …                 500                        800, 2,500              1,600
Diameter, in.                      8-3/8             18-1/2 high, 30-1/2 wide   57 high, 22-3/4 wide    17
Depth, in.                         2-1/4             30                         15-3/4                  7-1/2
Weight, lb                         3-3/4             43                         250                     21


Sensitivity, response-frequency characteristics, frequency range, and directivity are most effectively measured under anechoic conditions using calibrated laboratory microphones and high-speed level recorders. However, data so measured should not be expected to be exactly reproducible under room-listening conditions.

Distortion measurements in audio-electronic systems are generally of the three types shown in Fig. 20.3.23. For harmonic distortion a single sinusoidal signal A is supplied to the loudspeaker, and wave analysis at the harmonic frequencies determines the percent distortion. Both intermodulation methods supply two sinusoidal signals of different frequency to the loudspeaker. In the older Society of Motion Picture and Television Engineers (SMPTE) method the frequencies are widely separated, and the distortion is expressed in terms of sum and difference frequencies around the higher test frequency (sketched below). This method is meaningful for wide-range loudspeaker systems. The CCIF (International Telephone Consultative Committee) method is more applicable to narrow-range systems and loudspeakers receiving input at high frequencies. It supplies two high frequencies to the loudspeaker and checks the low difference frequency. Transient intermodulation distortion, resulting from nonlinear response to steep wavefronts, is measured by adding square-wave (3.18-kHz) and sine-wave (15-kHz) inputs, with a 4:1 amplitude ratio, and observing the multiple sum- and difference-frequency components added to the output spectrum.

FIGURE 20.3.23 Methods of measuring nonlinear distortion: (a) harmonic; (b) intermodulation method of SMPTE; (c) intermodulation method of CCIF.7
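The SMPTE-style measurement can be sketched numerically: a low and a high tone at a 4:1 amplitude ratio (60 Hz and 7 kHz here, a conventional choice assumed for illustration, since the text does not fix the pair) pass through a mild square-law stand-in for the loudspeaker, and the sideband levels around the high tone give the intermodulation figure:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                                    # 1-s record -> 1-Hz bins
x = 4 * np.sin(2*np.pi*60*t) + np.sin(2*np.pi*7_000*t)    # two-tone test, 4:1

y = x + 0.005 * x ** 2                                    # mild square-law device

spec = np.abs(np.fft.rfft(y)) * 2 / len(y)                # amplitude spectrum
amp = lambda f_hz: spec[int(round(f_hz))]

sidebands = [amp(7_000 + k * 60) for k in (-2, -1, 1, 2)] # sum/difference terms
im = np.sqrt(np.sum(np.square(sidebands))) / amp(7_000)
print(f"SMPTE IM distortion: {100 * im:.2f} percent")
```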

EARPHONES

The transduction methods are the same as for loudspeakers. Telephone and hearing-aid receivers are usually moving-iron. Most military headsets are now moving-coil. Piezoelectric, moving-coil, and electrostatic types are used for listening to recorded music.

Equivalent Electric Circuits

Figure 20.3.24 shows a cross section of a moving-coil earphone and the equivalent electric circuit. The voice-coil force drives the voice coil and diaphragm. (Mechanical resonance of earphone diaphragms occurs at a high audio frequency, in contrast to loudspeakers.) Diaphragm motion creates sound pressure in several spaces behind the diaphragm and the voice coil and between the diaphragm and the earcap. Inertance and resistance of the connecting holes and clearances combine with the capacitance of the spaces to add acoustical resonances. Z is the acoustical impedance of the ear.

Idealized Ear Loading

The ear is approximately an acoustical capacitance. However, acoustical leakage adds a parallel resistance-inertance path affecting low-frequency response. At high frequencies the ear-canal length resonance is a factor. Since the ear is a capacitance, the goal of earphone design is a constant diaphragm amplitude throughout the frequency range. This requires a stiffness-controlled system or a high resonance frequency. The potential across the ear is analogous to sound pressure within the ear cavity. This sound pressure is proportional to diaphragm area and inversely proportional to enclosed volume.


FIGURE 20.3.24 Moving-coil earphone cross section and equivalent electric circuit.8

Earphone loading conditions are extremely varied for different types of earphone mountings.

Earphone Mountings

The most widely used earphone is the single receiver unit on a telephone handset. It is intended to be held against the ear but is often tilted away, leaving considerable leakage. Headsets provide better communication than handsets because they supply sound to both ears and shield them. A remote earphone can drive the ear canal through a small acoustic tube. The length may be an inch or two for hearing aids and several feet for music listening on aircraft.

Efficiency, Impedance, and Driving Circuits

Moving-iron earphones and microphones can be made efficient enough to operate as sound-powered (batteryless) telephones. Efficient magnet structures, minimum mechanical and acoustical damping, and minimum volume of acoustical coupling are required for this purpose. In some earphone applications overall efficiency is less critical, and wearer comfort is important. Insert earphones need less efficiency than external earphones because the enclosed volume is much smaller; however, they require moderate efficiency to save the amplifier batteries. Circumaural earphones are frequently driven by amplifiers otherwise used for loudspeakers. Here efficiency is less important than power-delivering capacity. Typically 1 mW of audio power to an earphone will produce 100 to 110 dB in a standard 6-cm³ coupler (see the sketch below). The same earphone will produce less sound level in an earmuff than in an ear cushion and more when coupled to an ear insert. The shape of the enclosed volume also affects response. The farther the driver is from the eardrum, the lower the frequency of standing-wave resonance. Small tube diameters produce high-frequency attenuation.9

The response-frequency characteristic of moving-iron or piezoelectric earphones is quite dependent on source impedance. A moving-iron earphone having uniform response when driven at constant power will have a rising response (with increasing frequency) at constant current and a falling response at constant voltage (Fig. 20.3.25).

FIGURE 20.3.25 Effect of source impedance upon earphone response curve: (a) constant current; (b) constant voltage; (c) constant power.9
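The proportionality stated above follows from adiabatic compression of the trapped air, p = ρc²·(A·x)/V. A sketch with assumed diaphragm values (hypothetical area and displacement) working into the standard 6-cm³ coupler:

```python
import math

RHO, C_AIR = 1.18, 344.0        # air density (kg/m^3) and sound speed (m/s)
P_REF = 20e-6                   # SPL reference pressure, Pa

def coupler_spl_db(diaphragm_area_m2, peak_displacement_m, volume_m3):
    """Pressure in a small closed volume: p = rho*c^2 * (A*x) / V,
    reported as the SPL (rms) of a sinusoidal displacement."""
    p_peak = RHO * C_AIR**2 * diaphragm_area_m2 * peak_displacement_m / volume_m3
    return 20 * math.log10((p_peak / math.sqrt(2)) / P_REF)

# Assumed: 1.5-cm^2 diaphragm moving 1 um peak into a 6-cm^3 coupler
print(f"{coupler_spl_db(1.5e-4, 1e-6, 6e-6):.0f} dB SPL")  # lands near 100 dB
```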


Real-Ear Response

The variety of earphone-coupling methods and the variability of outer-ear geometry (among different listeners) make response data from artificial ears only indicative, not definitive. Out of necessity a real-ear response-measuring technique was developed. A listener adjusts headset input to match headset loudness to an external calibrated sound wave in an anechoic chamber. From matching data at numerous frequencies an equivalent free-field sound-pressure level can be plotted for constant input to the earphone. This curve usually differs from a sound-level curve taken on a simple earphone coupler. The reason is that probe measurements of sound at the eardrum and outside the ear in a free field differ because of ear amplification and diffraction about the head (Fig. 20.3.26).

FIGURE 20.3.26 Relative level of sound pressures at the listener’s eardrum and in the free sound field.10

Acoustic attenuation by earphones can be measured either by threshold shift or by matching the loudness of tones heard from an external loudspeaker, with and without the headset on. The sound-level difference is plotted as attenuation in decibels.

Monaural, Diotic, and Binaural Listening

A handset earphone provides monaural listening. Diotic listening, with the same audio signal in both earphones, localizes sound within the head. This is not unpleasant and may actually be an aid to concentration. In natural binaural listening the ears receive sound differently from the same source unless it is directly on the listening axis. Usually there are differences in phase, arrival time, and spectrum (because of diffraction about the head). Recordings provide true binaural effects only if the two recording microphones are on an artificial head. Stereophonic microphones are usually separated much farther, so that headset listening gives an exaggerated effect. For some listeners this is an enhancement.

REFERENCES

1. Bauer, B. B., Proc. IRE, Vol. 50, 50th Anniversary Issue, 1962, p. 719.
2. Olson, H. F., “Musical Engineering,” McGraw-Hill, 1952.
3. Gander, M. R., “Fifty Years of Loudspeaker Developments as Viewed Through the Perspective of the Audio Engineering Society,” J. Audio Eng. Soc., Vol. 46, No. 1/2, 1998, pp. 43–58.
4. Olson, H. F., Proc. IRE, Vol. 50, 50th Anniversary Issue, 1962, p. 730.
5. Beranek, L. L., “Acoustics,” Acoustical Society of America, 1986.
6. Olson, H. F., “Elements of Acoustical Engineering,” Van Nostrand, 1947.
7. Beranek, L. L., Proc. IRE, Vol. 50, 1962, p. 767.
8. Anderson, L. J., J. Soc. Motion Pict. Eng., Vol. 37, 1941, p. 319.
9. Martin, D. W., and L. J. Anderson, J. Acoust. Soc. Am., Vol. 19, 1947, p. 63.
10. Wiener, F. M., and D. A. Ross, J. Acoust. Soc. Am., Vol. 18, 1946, p. 401.


CHAPTER 20.4

DIGITAL AUDIO RECORDING AND REPRODUCTION

Daniel W. Martin, Ronald M. Aarts

INTRODUCTION

A digital revolution has occurred in audio recording and reproduction that has made some previous techniques only of historical interest. Although analog recording and reproduction systems have been greatly improved (Fig. 20.4.1), their capabilities still fall short of ideal. For example, they could not provide the dynamic range of orchestral instrument sounds (e.g., from 42 dB on a soft low flute note to 120 dB for a bass-drum peak) plus a reasonable ratio of weakest signal to background noise. Mechanical analog records are still limited by inherent nonlinear distortions as well as surface noise, and magnetic analog recording is limited by inherent modulation noise. Digital audio signal transmission, recording, and playback have numerous potential advantages which, with appropriate specifications and quality control, can now be realized, as will be shown.

DIGITAL ENCODING AND DECODING

There is much more to digital audio than encoding the analog signal and decoding the digital signal, but this is basic. The rest would be largely irrelevant if it were not both advantageous and practically feasible to convert analog audio signals to digital form for transmission, storage, and eventual retrieval. A digital audio signal is a discrete-time, discrete-amplitude representation of the original analog audio signal. Figure 20.4.2 is a simple encoding example using only 4 bits. The amplitude of the continuous analog audio signal wavetrain A is sampled at each narrow pulse in the clock-driven pulse train B, yielding for each discrete abscissa (time) value a discrete ordinate (voltage) value represented by a dot on or near the analog curve. The vertical scale is subdivided (in this example) into 16 possible voltage values, each represented by a binary number or “word.” The first eight words can be read out either in parallel, 1000, 1010, 1011, 1011, 1010, 1000, 0110, 0101, …, on four channels, or in sequence, 10001010101110111010100001100101 …, on a single channel for transmission, optional recording and playback, and decoding into an approximation of the original wavetrain. Unless intervening noise approaches the amplitude of the digit 1, the transmitted or played-back digital information matches the original digital information.

The degree to which digitization approximates the analog curve is determined by the number of digits chosen and the number of samplings per second. Both numbers are a matter of choice, but the present specifications for digital audio systems generally use 16 bits for uniform quantization (65,536 identifiable values), corresponding to a theoretical dynamic range of 16 × 6 dB = 96 dB.


FIGURE 20.4.1 Dynamic range of analog tape cartridges and cassettes. (After Ref. 1)

The sampling frequency, according to the Nyquist criterion, must be at least twice the highest audio frequency to be transmitted or recorded. Three different sampling frequencies are in use: 48 kHz “for origination, processing, and interchange of program material”; 44.1 kHz “for certain consumer applications”; and 32 kHz “for transmission-related applications.” Figure 20.4.3 shows the main electronic blocks of a 5-bit digital system for encoding and decoding audio signals for various transmitting and receiving purposes. The digital audio signal may be transmitted and received conductively or electromagnetically.

FIGURE 20.4.2 Digital encoding of an analog waveform: (a) continuous analog signal wavetrain; (b) clock-driven pulse train. At equal time intervals, sample values are encoded into nearest digital word.
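A minimal sketch of the Fig. 20.4.2 process: sample a waveform at clock instants, round each sample to the nearest of the 16 levels, and read the 4-bit words out either in parallel or serially. The waveform below is an arbitrary stand-in for curve A, and each added bit would add roughly 6 dB to the theoretical dynamic range:

```python
import math

BITS = 4
LEVELS = 2 ** BITS                       # 16 possible voltage values

def encode(samples):
    """Quantize samples in [-1, 1] to 4-bit binary words."""
    words = []
    for s in samples:
        code = min(LEVELS - 1, int(round((s + 1) / 2 * (LEVELS - 1))))
        words.append(format(code, f"0{BITS}b"))
    return words

# Stand-in analog wavetrain sampled at eight clock pulses
samples = [math.sin(2 * math.pi * k / 8) for k in range(8)]
words = encode(samples)
print(words)              # parallel readout: one word per sampling instant
print("".join(words))     # serial readout on a single channel
```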


FIGURE 20.4.3 The basic electronic system components for encoding and decoding digital audio signals for (a) transmitting (or recording) and (b) receiving (or playback). (Ref. 2)


Alternatively, it may be stored and later retrieved by recording and playback. Or transmission may simply be into and out of a digital signal-processing system, which purposely alters or enhances the signal in a manner not easily accomplished by analog means. In any case the frequency range of the analog input signal must be limited, by the first low-pass filter of Fig. 20.4.3, to less than one-half the sampling frequency. For 16-bit accuracy in digitization this filter probably needs a stop-band attenuation greater than 60 dB, a signal-to-noise ratio of 100 dB, bandpass ripple less than 0.2 dB, and differential nonlinearity less than 0.0075 percent.

The next block is a sample-and-hold analog circuit which tracks the input voltage and samples it during a very short portion of the sampling period; it then holds that value during the remainder of the sampling period until the next sampling begins. Possible faults in sampling include timing “jitter,” which adds modulation noise, and “droop” in the held voltages during digitization. The analog-to-digital converter quantizes each of the succession of held voltages shown and turns them into a sequence of binary numbers, the first of which (10000, corresponding to 16) is shown at the outputs of the converter. For practical reasons the parallel output information from the converter is put into sequential form in the transmitting system by multiplexing, for example, before transmission or recording occurs. Demultiplexing in the receiving system puts the data back into parallel form for digital-to-analog conversion. Possible faults in the conversion include gain errors, which increase quantizing error, and nonlinearity or relative nonuniformity, which cause distortion. The second low-pass filter removes the sampling frequency and its harmonics, which, although inaudible themselves, can create audible distortion in the analog output system.

TRANSMISSION AND RECEPTION OF THE DIGITAL AUDIO SIGNAL

As previously stated, other digital functions and controls are required to assist in the encoding and decoding. Figure 20.4.4 complements Fig. 20.4.3 by showing in a block diagram that analog-to-digital (A/D) conversion and transmission (or storage) have intervening digital processing and that all three are synchronized under digital clock control. In reception (or playback), equivalent digital control is required for reception, digital reprocessing, and digital-to-analog (D/A) conversion. Examples of these functions and controls are multiplexing of the A/D output, digital processing to introduce redundancy for subsequent error detection and correction, servo-system control when mechanical components are involved, and digital processing to overcome inherent transmission-line or recording-media characteristics. The digitization itself may be performed in any of a number of ways: straightforward, uniform by successive approximations, companding, or differential methods such as delta modulation. Detailed design of much of this circuitry is in the domain of digital-circuit and integrated-circuit engineering, beyond the scope of this chapter.

FIGURE 20.4.4 Block diagram of the basic functions in a digital audio system.


FIGURE 20.4.5 Subframe format recommended for serial transmission of linearly represented digital audio data.

However, the audio system engineer is responsible for the selection, specification, and (ultimately) standardization of sampling rate, filter cutoff frequency and rate, number of digits, digitization method, and code error-correction method, in consultation with the broadcast, video, and transmission engineers with whose systems compatibility is necessary. Compatibility should be facilitated by following the “Serial Transmission Format for Linearly Represented Digital Audio Data” recommended by the Audio Engineering Society, in which digital audio sample data within a subframe (Fig. 20.4.5) are accompanied by other data bits containing auxiliary information needed for functions and controls such as those listed above. Two 32-bit subframes in sequence, one for each channel (of stereo, for example), comprise a frame transmitted in any one period of the sampling frequency.

A channel (or modulation) code of the biphase-mark, self-clocking type is applied to the data prior to transmission, in order to embed a data-rate clock signal which enables correct operation of the receiver. In this code all information is contained in the transitions, which simplifies clock extraction and channel-decoder synchronization. The audio signal data may occupy either 20 or 24 bits of the subframe, preceded by 4 bits of synchronizing and identifying preamble for designating the start of a frame and block, or the start of the first or the second subframe. If the full 24 bits are not needed for the audio sample, the first four can be auxiliary audio data. Following the audio data are four single bits that indicate (V) whether the previous audio sample data bits are valid; (U) any information added for assisting the user of the data; (C) information about system parameters; and (P) parity for detection of transmission errors for monitoring channel reliability.

Within the audio field it is the Audio Engineering Society (AES) that has determined many standards, among them several digital interconnect standards [http://www.aes.org/standards/]. The AES3-1985 document has formed the basis for the international standards documents concerning a two-channel digital audio interface. The society has been instrumental in coordinating professional equipment manufacturers’ views on interface standards, although it has tended to ignore consumer applications to some extent, and this is perhaps one of the principal roots of confusion in the field.3 The consumer interface was initially developed in 1984 by Sony and Philips for the CD system and is usually called the Sony-Philips digital interface (SPDIF). The interface is serial and self-clocking. The two audio channels are carried in multiplexed fashion over the same channel, and the data are combined with a clock signal in such a way that the clock may be extracted at the receiver side. A further standard was devised, originally called the multichannel audio digital interface (MADI), which is based on the AES3 data format and has been standardized as AES10-1991. It is a professional interface that can accommodate up to 56 audio channels.

Bluetooth

Bluetooth is a low-cost, low-power, short-range radio technology, originally developed as a cable replacement to connect devices.4 An application of Bluetooth is as a carrier of audio information. This functionality allows devices such as wireless headsets, microphones, headphones, and cellular phones to be built.
The audio quality provided by Bluetooth is the same as one would expect from a cellular telephone.
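The self-clocking biphase-mark code used by the AES3/SPDIF interfaces described above is easily sketched: every bit cell opens with a transition (guaranteeing clock content), and a logic 1 adds a second transition mid-cell, so decoding needs only transition positions, never absolute polarity. A minimal model:

```python
def biphase_mark_encode(bits, level=0):
    """Two half-cells per bit: toggle at every cell boundary,
    and toggle again mid-cell to mark a 1."""
    out = []
    for b in bits:
        level ^= 1          # transition at the start of every bit cell
        out.append(level)
        if b:
            level ^= 1      # extra mid-cell transition encodes a 1
        out.append(level)
    return out

def biphase_mark_decode(halves):
    """A 1 was sent wherever the two half-cells of a bit differ."""
    return [int(halves[i] != halves[i + 1]) for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
line = biphase_mark_encode(data)
assert biphase_mark_decode(line) == data   # polarity-insensitive recovery
print(line)
```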


IEEE1394

IEEE1394 is a standard defining a high-speed serial bus, also named FireWire or i.Link. It is a serial bus similar in principle to USB, but it runs at speeds of up to 400 Mbit/s and is not centered around a PC (i.e., there may be no PC, or multiple PCs, on the same bus). It has a transmission mode with guaranteed bandwidth, which makes it ideal for audio transmission, digital video cameras, and similar devices.

DIGITAL AUDIO TAPE RECORDING AND PLAYBACK

The availability of adaptable types of magnetic-tape video recorders accelerated digital-audio-recording development in the tape medium. Nippon Columbia had developed a video-type recorder into a PCM tape recorder for eight channels of audio information, with each channel sampled at 47.25 kHz. Now numerous manufacturers produce digital audio tape recorders for professional recording studios, and some large recording companies have developed digital master tape recording systems, including digital editors and mixers.

An inherent disadvantage of digital recording and playback, especially in the tape medium, has been dropout caused by voids or scratches in the tape. Some dropouts are inevitable, so protective or corrective means are used, such as interlacing the encoded signal with redundancy, or reserving and using bits for error-detection schemes, e.g., recording sums of words for comparison with sums simultaneously calculated from playback of the words. Such error detection can trigger the substitution of adjacent data, for example, into the dropout gap.

Digital audio tape recorders are of two different types, helical-scan and multitrack, using rotary and stationary heads, respectively. Helical-scan systems already had the needed bandwidth, but improved recording densities and multitrack head stacks allowed multitrack systems to become competitive. A variety of tape formats has been developed. Table 20.4.1 shows part of the specifications for a multitrack professional digital recorder, the Sony PCM-3324.

Two new modes of recording on magnetic tape have permitted large increases in the lineal density of recording and in signal-to-noise ratio, both great advantages for digital magnetic recording. Perpendicular (or vertical) recording (see Fig. 20.4.6a) uses a magnetic film (e.g., CoCr crystallites) which has a preferred anisotropy normal to the surface. In contrast to conventional longitudinal magnetic recording, demagnetization is weak at short wavelengths, increasing the signal amplitude at high frequencies. Another advantage is that sharp transitions between binary states are possible. Vector field recording (Fig. 20.4.6b) with isotropic particles and microgap heads has also led to higher bit densities.

TABLE 20.4.1 Specifications for the PCM-3324

Number of channels (one track per channel): digital audio 24, analog audio 2, time code 1, control 1; total 28
Tape speed, sampling rate: 70.01 cm/s at 44.1 kHz or 76.20 cm/s at 48.0 kHz, with ±12.5% vernier (selectable at recording, automatic switching in playback)
Tape: 0.5-in. (12.7-mm) digital audio tape
Quantization: 16-bit linear per channel
Dynamic range: more than 90 dB
Frequency response: 20 Hz to 20 kHz, +0.5, –1.0 dB
Total harmonic distortion: less than 0.05%
Wow and flutter: undetectable
Emphasis: 50 μs/15 μs (EIAJ format and compact disc compatible)
Format: DASH-F (fast)
Channel coding: HDM-1
Error control: cross-interleave code


FIGURE 20.4.6 New recording modes: (a) perpendicular recording (CoCr); adjacent dipole fields aid; (b) vector field recording (isotropic medium): longitudinal and perpendicular fields aid at short wavelength.

Digital Compact Cassette (DCC)

After intensive research on digital audio tape (DAT), Philips built on this research to develop the digital compact cassette (DCC) recorder (Fig. 20.4.7). To make DCC mechanically (and dimensionally) compatible with analog cassettes and their tape mechanisms, the same tape speed of 4.76 cm/s (1-7/8 in./s) was adopted.

FIGURE 20.4.7 DCC recorder block diagram (Ref. 5).


At that tape speed, matching the quality of CD reproduction required compression of the digital audio data stream (1.4 million bits/s) by at least 4 to 1. DCC's coding system, called precision adaptive sub-band coding (PASC), achieves the compression after digitally filtering the audio frequency range into 32 sub-bands of equal bandwidth. The PASC signal processor adapts its action dynamically in each sub-band (a) by omitting data for sounds lying below the hearing threshold at that frequency and time, and (b) by disregarding data for weak sounds in any sub-band that would be masked (rendered inaudible) at that time by the presence of stronger sounds in adjacent or nearby sub-bands. Bits are reallocated to sub-bands when needed for accuracy, from sub-bands where they are not needed at the time, to optimize coding accuracy overall.

The PASC sequential data, together with error-correction codes and other system information, are multiplexed into eight channels for recording on eight 185-μm-wide tracks. The 3.78-mm-wide tape accommodates two sets of eight tracks, one forward and one reverse, alongside auxiliary code data on a separate track providing track and index numbers, time codes, and so forth. In playback the tape output signals are amplified, equalized, and demodulated, with error detection and correction. The PASC processor reconstructs the input data that are fed to the digital-to-analog converter. PASC is compatible with all three existing sampling frequencies, 32, 44.1, and 48 kHz, with sub-band widths of 500, 690, and 750 Hz, respectively, and corresponding frame periods of 12, 8.7, and 8 ms.

The 18-channel thin-film playback head has magnetoresistive sensors whose resistance varies with the angle between the sensor's electrical bias-current vector and the magnetization vector. Although the recorded digital track is 185 μm wide, the playback sensor is only 70 μm wide, allowing considerable azimuth tolerance. When playing analog audio cassette tapes, each audio track uses more than one sensor, as shown, to improve the S/N ratio.
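The two masking decisions just described can be caricatured in a few lines. The allocator below is a deliberately simplified toy, not the PASC algorithm: a sub-band is coded only if its level exceeds both an absolute hearing threshold and a masking level spread from stronger neighboring bands (all thresholds and levels here are assumed numbers):

```python
def allocate(band_levels_db, hearing_threshold_db=20.0, spread_db_per_band=12.0):
    """Toy sub-band allocator: keep a band only if it is audible, i.e.,
    above the absolute threshold and above the masking contributed by
    stronger neighbors (masking assumed to fall off with band distance)."""
    keep = []
    for i, level in enumerate(band_levels_db):
        mask = max((other - spread_db_per_band * abs(i - j)
                    for j, other in enumerate(band_levels_db) if j != i),
                   default=float("-inf"))
        keep.append(level > hearing_threshold_db and level > mask)
    return keep

levels = [60, 25, 22, 70, 30, 10, 40, 15]   # assumed sub-band levels, dB
print(allocate(levels))   # weak bands beside strong ones are dropped
```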

DIGITAL AUDIO DISC RECORDING AND PLAYBACK

As in video discs, two general types of digital audio discs were developed. One type recorded binary information mechanically or electrically along a spiral groove that provides guidance during playback for a lightly contacting pickup. The second type used optical laser recording of the digital information in a spiral pattern and optical playback means which track the pattern without contacting the disc directly. The optical type now appears to be dominant.

Optical Digital Discs

The compact disc optical digital storage and reproduction system, a milestone in consumer electronics, was made possible by the confluence of significant progress in a number of different related areas of technology. Optical media capable of high storage density had long been available at high cost, but more durable optical surfaces of lower cost, integrated solid-state lasers, and mass-producible optical light pens were all required to permit economical optical recording and playback. Mechanical drive systems of higher accuracy were needed, under servocontrol by digital signals. Advanced digital signal-processing algorithms, complex electronic circuitry, and very large-scale integration (VLSI) implementation were part of the overall system development. Many research organizations contributed to the state of the art, and in 1980 two of the leaders, Philips and Sony, agreed on standardization of their compact disc optical systems, which had been developing along similar but independent paths.

On the reflective surface of the compact optical disc is a spiral track of successive shallow depressions, or pits. The encoded digital information is stored in the lengths of the pits and of the gaps between them, with the transitions from pit to gap (or vice versa) playing a key role. The disc angular rotation is controlled for a constant linear velocity of track readout on the order of 1.3 m/s. A beam from a solid-state laser, focused on the disc, is reflected, after modulation by the disc track information, to a photodiode that supplies input to the digital processing circuitry. Focusing of the laser spot on the spiral track is servocontrolled.

In the compact disc system, as in most storage or transmission of digital data, the A/D conversion data are transformed to cope with the characteristics of the storage medium. Such transformation, called modulation, involves (1) the addition of redundant information to the data, and (2) modulation of the combined data to compensate for medium characteristics (e.g., high-frequency losses). The modulation method for the compact disc system, called eight-to-fourteen modulation (EFM), is an 8-data-bit to 14-channel-bit conversion block code with a space of 3 channel bits (called merging bits) for every converted 14 channel bits for connecting the blocks.


FIGURE 20.4.8 Formats in the compact disc encoding system.
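The constraint behind EFM and its merging bits, namely that adjacent 1s in the channel stream must be separated by at least 2 and at most 10 zeros so that pit and land lengths stay readable, can be checked mechanically. The sketch below joins two codewords with legal merging bits; the 14-bit words are made-up examples, not entries from the real EFM table, and the real system also chooses merging bits for dc control, which this toy ignores:

```python
def runs_ok(bits):
    """EFM channel constraint: every pair of adjacent 1s must be
    separated by at least 2 and at most 10 zeros."""
    gaps, gap = [], None
    for b in bits:
        if b == 1:
            if gap is not None:
                gaps.append(gap)
            gap = 0
        elif gap is not None:
            gap += 1
    return all(2 <= g <= 10 for g in gaps)

def choose_merging_bits(prev14, next14):
    """Return 3 merging bits that keep the joined stream legal."""
    for merge in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]):
        if runs_ok(prev14 + merge + next14):
            return merge
    raise ValueError("no legal merging bits")

a = [0,1,0,0,1,0,0,0,0,1,0,0,1,0]   # made-up 14-bit codewords
b = [1,0,0,0,1,0,0,0,0,0,1,0,0,1]
print(choose_merging_bits(a, b))
```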

Figure 20.4.8 shows the format in the compact disc encoding, and Table 20.4.2 the disc specification. The purpose of the redundant information is to be able to detect and correct errors that occur because of storage-medium imperfections. It is important to minimize the probability of occurrence of such imperfections. The use of optical noncontacting readout from a signal surface protected by a plastic layer allows most of the signal errors at the surface to be reduced to random errors of several bits or larger burst errors. The error-correcting code adopted in the standardization, the cross-interleave Reed-Solomon code (CIRC), provides highly efficient detection and correction for errors of these types. It happens that the EFM modulation method and the CIRC error-correction method used in the compact disc system are well matched. This combination is credited with much of the system's success.

Between tape mastering and replication lies a complex and sophisticated disc-mastering process which gets the information into the CD standard format and onto the surface of the CD disc master. Optical disc preparation, recording, development, electroplating, stamping, molding, and protective film coating are the major steps in the highly technological production process.

TABLE 20.4.2 Specifications for a Compact Disc

Playing time: 75 min
Rotating speed: 1.2–1.4 m/s (constant linear velocity)
Track pitch: 1.6 μm
Disc diameter: 120 mm
Disc thickness: 1.2 mm
Center hole: 15 mm
Signal surface: 50–116 mm diameter (signal starts from inside)
Channel number: 2
Quantization: 16-bit linear per channel
Sampling rate: 44.1 kHz
Data rate: 2.0338 Mb/s
Channel bit rate: 4.3218 Mb/s
Error protection: CIRC (cross-interleave Reed-Solomon code), redundancy 25% (4/3)
Modulation: EFM (eight-to-fourteen modulation)

MiniDisc (MD) System

For optical digital discs to compete more favorably with digital audio tape, a recordable, erasable medium was needed. Magneto-optical discs combined the erasability of magnetic storage with the large capacity and long life of optical storage. An optical disc, with its digital data stream recordable in a tight spiral pattern, provides rapid track access for selective playback or re-recording.


FIGURE 20.4.9 DVD specifications for DVD-ROM, DVD-video, and DVD-audio read only disks, parts 1 to 4.

On the blank disc is a thin film of magneto-optic material embedded within a protective layer, with all of the magnetic domains pointing north pole down (a digital zero). The magnetic field needed for reversal of polarity (to convert a zero to a one) is very temperature dependent. At room temperature reversal requires a very strong magnetic field. However, at about 150°C only a small coercive force, provided by a dc magnetic bias field, is needed. During recording, a high-power laser beam, modulated by the digital data stream, heats microscopic spots on the rotating magneto-optic surface (within nanoseconds) to temperatures that allow the dc magnetic bias field to convert zeros to ones. When the laser beam is off, the spots on the medium cool very rapidly, leaving the desired pattern of magnetic polarity. Erasure can be effected by repeating the procedure with the dc bias reversed. Playback uses a low-power laser beam which, because of the Kerr magneto-optic effect, has its plane of polarization rotated one way or the other depending on the magnetic polarity of the recorded bit. An optoelectronic playback head senses the polarization and delivers the digital playback signal.

The Sony magneto-optic MiniDisc is 6.4 cm (2-1/2 in.) in diameter, half that of a CD. To compensate for the reduced recording area, a digital audio compression technique called adaptive transform acoustic coding (ATRAC) is used. The analog signal is digitized at a 44.1-kHz sampling frequency with 16-bit quantization. Waveform segments of about 20 ms and 1000 samples are converted to frequency components that are analyzed for magnitude by the encoder and compressed. Threshold and masking effects are used as criteria for disregarding enough data for an overall reduction of about 5 to 1. During playback, the ATRAC decoder regenerates an analog signal by combining the frequency components recorded on the magneto-optic disc. An added feature of the compression circuit is storage of 3 s of playback time when potential interruptions could occur owing to system shock or vibration.
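The figures quoted above fix the sizes involved. A back-of-envelope sketch, assuming exactly 5:1 reduction of 16-bit/44.1-kHz stereo (the text gives the ratio only approximately):

```python
fs, bits, channels = 44_100, 16, 2
pcm_rate = fs * bits * channels            # CD-rate PCM: 1,411,200 bit/s

atrac_rate = pcm_rate / 5                  # assumed exact 5:1 ATRAC reduction
buffer_bits = atrac_rate * 3               # 3 s of compressed audio held in memory

print(f"PCM rate:     {pcm_rate / 1e6:.4f} Mbit/s")
print(f"ATRAC rate:   {atrac_rate / 1e6:.4f} Mbit/s")
print(f"Shock buffer: {buffer_bits / 8 / 1024:.0f} KiB for 3 s of playback")
```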

Digital Versatile Disc-Audio (DVD-A)

DVD-Audio is a hi-fi music format based on the same DVD technology as DVD-Video discs and DVD-ROM computer discs; see Fig. 20.4.9. The disc structure is basically the same. Recorded with current CD recording methods (PCM), DVD-Audio supports a sampling rate of up to 192 kHz with 24-bit processing. Like Super Audio CD (SACD) and the normal DVD-Video and DVD-Data formats, DVD-Audio discs can store 4.7 GB, with a choice of 2-channel and 6-channel audio tracks or a mix of both (see Table 20.4.3, and the rate check sketched after it). Like SACD, information such as track names, artists' biographies, and still images can be stored. The format is supported by DVD-Video players made after about November 2000. Manufacturers are making audio machines compatible with playing both types of disc. Titles are available both in a Dolby Digital mix (so they are compatible with all DVD-Video players) and in specific DVD-Audio (requiring the separate player).


TABLE 20.4.3 Specification of DVD-A

                                                                            Playing time, min      Playing time, min
                                                                            (single layer)         (dual layer)
Audio combination        Configuration                                      PCM        MLP         PCM        MLP
2 channels               48 kHz, 24 bits, 2 ch                              258        409         469        740
2 channels               192 kHz, 24 bits, 2 ch                             64         119         117        215
6 channels               96 kHz, 16 bits, 6 ch                              64         201         117        364
5 channels               96 kHz, 20 bits, 5 ch                              61         137         112        248
2 channels & 5 channels  96 kHz, 24 bits, 2 ch + 96 kHz, 24 bits, 3 ch
                         & 48 kHz, 24 bits, 2 ch                            43 each    79 each     78 each    144 each

Note: MLP is an acronym for Meridian Lossless Packing, a lossless coding scheme (see Lossless Coding section).
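The PCM playing times in Table 20.4.3 follow, to within a few percent, from dividing the layer capacity by the PCM bit rate; the published figures are somewhat shorter because of formatting and stream overhead, and the MLP times depend on signal-dependent lossless compression ratios. A rough check, assuming a 4.7 × 10⁹-byte single layer:

```python
# Rough check of the single-layer PCM playing times in Table 20.4.3.
CAPACITY_BITS = 4.7e9 * 8          # 4.7 GB layer, in bits

def minutes(fs_hz, bits, channels):
    rate = fs_hz * bits * channels  # PCM bit rate, b/s
    return CAPACITY_BITS / rate / 60

print(f"{minutes(48_000, 24, 2):5.0f} min")   # ~272 (table: 258)
print(f"{minutes(192_000, 24, 2):5.0f} min")  # ~68  (table: 64)
print(f"{minutes(96_000, 16, 6):5.0f} min")   # ~68  (table: 64)
```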

Super Audio CD

Super Audio CD is a new format. It uses direct stream digital (DSD) on a 4.7-GB disc with a 2.8224 MHz sampling frequency (i.e., 64 times the 44.1 kHz used in CD), enabling a very high quality audio format. A technical comparison between the conventional CD and SACD is given in Table 20.4.4.

The main idea of the hybrid disc format (see Fig. 20.4.10) is to combine both well-known technologies, CD and DVD: to keep compatibility with the CD players in the market, and to use the existing DVD-Video process tools to make a two-layer disc, i.e., to add a high-density layer to a CD reflective layer. As shown in Table 20.4.4, the storage capacity of the high-density layer is 6.9 times the storage capacity of a conventional CD.

Direct Stream Digital

The solution came in the form of the DSD signal-processing technique. Originally developed for the digital archiving of priceless analog master tapes, DSD is based on 1-bit sigma-delta modulation together with a fifth-order noise-shaping filter and operates with a sampling frequency of 2.8224 MHz (i.e., 64 times the 44.1 kHz used in CD), resulting in an ultrahigh signal-to-noise ratio in the audio band.

TABLE 20.4.4 Comparison Between Conventional CD and SACD

                                   Conventional compact disc           Super Audio CD
Diameter                           120 mm (4-3/4 in.)                  120 mm (4-3/4 in.)
Thickness                          1.2 mm (1/20 in.)                   2 × 0.6 mm = 1.2 mm (1/20 in.)
Max. substrate thickness error     ±100 μm                             ±30 μm
Signal sides                       1                                   1
Signal layers                      1                                   2: CD-density reflective layer +
                                                                       high-density semitransmissive layer
Data capacity
  Reflective layer                 680 MB                              680 MB
  Semitransmissive layer           ––                                  4.7 GB
Audio coding
  Standard CD audio                16-bit/44.1 kHz                     16-bit/44.1 kHz
  Super Audio                      ––                                  1-bit DSD/2.8224 MHz
  Multichannel                     ––                                  6 channels of DSD
Frequency response                 5–20,000 Hz                         DC(0)–100,000 Hz (DSD)
Dynamic range                      96 dB across the audio bandwidth    120 dB across the audio bandwidth (DSD)
Playback time                      74 min                              74 min
Enhanced capability                CD text                             Text, graphics, and video
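A minimal first-order sketch of the 1-bit sigma-delta principle behind DSD. The actual DSD encoder uses a fifth-order noise-shaping loop; a single integrator is enough to show how a 1-bit stream at 64 × 44.1 kHz can track an audio-band signal, with the quantization error pushed toward high frequencies:

```python
import numpy as np

# First-order 1-bit sigma-delta modulator (illustrative; DSD uses 5th order).
fs = 64 * 44_100                            # 2.8224 MHz DSD rate
t = np.arange(20_000) / fs
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)     # 1 kHz test tone, half scale

integ = 0.0
bits = np.empty_like(x)
for i, s in enumerate(x):
    out = 1.0 if integ >= 0 else -1.0       # 1-bit quantizer
    bits[i] = out
    integ += s - out                        # integrate the quantization error

# The local average of the bitstream tracks the input; the residual error is
# shaped toward high frequencies and would be removed by a low-pass filter.
smooth = np.convolve(bits, np.ones(64) / 64, "same")
print(np.abs(smooth[3000:-3000] - x[3000:-3000]).max())
```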


FIGURE 20.4.10 Hybrid disc content of the Super Audio CD (a), hybrid disc construction (b), and hybrid disc signal reading (c).

The Three Types of Super Audio CD

The SACD standard, published by Philips and Sony in March 1999, defines three possible disc types (see Fig. 20.4.10). The first two types are discs containing only DSD data; the single-layer disc can contain 4.7 GB of data, while the dual-layer disc contains slightly less than 9 GB. The third version—the SACD Hybrid—combines


a single 4.7-GB layer with a conventional CD that can be played back on standard CD players. (For more information see http://www.sacd.philips.com/)

RDAT

Rotary head digital audio tape (RDAT) is a semiprofessional recording format, an instrumentation recorder, and a computer data recorder.6 Mandatory specifications are:

• 2 channels (optionally more)
• 48 or 44.1 kHz sampling rate
• 16-bit quantization
• 8.15 mm/s tape speed
• 2 h playing time (13 μm tape)
• The cassette has a standardized format of 73 × 54 × 10.5 mm, which is rather smaller than the compact cassette.

OTHER APPLICATIONS OF DIGITAL SIGNAL PROCESSING

The main applications of audio DSP are high-quality audio coding and the digital generation and manipulation of music signals. They share common research topics, including perceptual measurement techniques and knowledge, and various analysis and synthesis methods.7 Chen8 gives a review of the history of research in audio and electroacoustics, including electroacoustic devices, noise control, echo cancellation, and psychoacoustics.

Reverberation. For some years digital processing of audio signals has been used for special purposes (e.g., echo and reverberation effects) in systems that were otherwise analog in nature. The possibility was suggested by computer-generated "colorless" artificial reverberation experiments. When high-quality A/D and D/A conversion became economical, digital time-delay and reverberation units followed. Figure 20.4.11a is a block diagram of a digital audio reverberation system in which the complete musical impulse sound reaching a listener (Fig. 20.4.11b) consists of a slightly delayed direct signal, followed by a group of simulated early reflections from a tapped digital delay line, with a "reverberant tail" added to its envelope by a reverberation processor using multiple recursive structures to produce a high time density of simulated reflections.

Dither. Dither is used to prevent perceptually annoying errors introduced by quantizers. It is a random "noise" process added to a signal prior to its (re)quantization in order to control the statistical properties of the quantization error.9,10 A common stage at which to perform dithering is after the various digital signal processing stages, just ahead of the quantization, before storing the signal or sending it to a digital-to-analog converter (DAC).

A special topic in signal processing for sound reproduction is overcoming the limitations of the reproduction setup, e.g., reproduction of bass frequencies through small loudspeakers.11 Another limitation is the distance between the two loudspeakers of a stereophonic setup. If one wants to increase the apparent distance, frequency-dependent crosstalk between the channels can be applied.12

Lossless Coding. Lossless compression is a technique to recode digital data in such a way that the data occupy fewer bits than before. In the PC world such programs are widely used and known under various names, such as PkZip. For digital audio these programs are not very well suited, since they are optimized for text data and programs. Figure 20.4.12 shows a block diagram representing the basic operations in most lossless compression algorithms involved in compressing a single audio channel.13 All of the techniques studied are based on the principle of first removing redundancy from the signal and then coding the resulting signal with an efficient coding scheme. First the data are divided into independent frames of equal time duration in the range of 13 to 26 ms, which results in frames of 576 to 1152 samples if a sampling rate of 44.1 kHz is used. Then the samples in each frame are decorrelated by some prediction algorithm, as shown in Fig. 20.4.13 (a minimal numeric sketch follows below). The value of a sample x[n] is predicted from the preceding samples x[n − 1], x[n − 2], …, using the filters A, B and quantizer Q. The error signal e[n] that remains after prediction is in general smaller than x[n] and therefore requires fewer bits for its exact digital representation. The coefficients of the filters A and B are transmitted as well, which makes an exact reconstruction of x[n] possible.
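The prediction-and-reconstruction idea can be reduced to a few lines. The sketch below replaces the adaptive filters A and B of Fig. 20.4.13 with a fixed first-order integer predictor (each sample predicted by its predecessor); because the residual is formed and inverted in exact integer arithmetic, reconstruction is bit-exact, which is the defining property of the lossless scheme described above:

```python
import numpy as np

# Fixed first-order prediction as a stand-in for the adaptive stage of
# Fig. 20.4.13; real coders also transmit the adapted filter coefficients.
x = np.array([0, 3, 7, 12, 14, 15, 13, 10], dtype=np.int32)  # PCM samples

e = np.diff(x, prepend=x[:1])     # residual e[n] = x[n] - x[n-1]
x_rec = np.cumsum(e)              # decoder: exact integer inverse

assert np.array_equal(x, x_rec)   # lossless: reconstruction is bit-exact
print(e)                          # residuals are small -> fewer bits to code
```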

FIGURE 20.4.11 Digital audio reverberation system: (a) block diagram; (b) the sound reaching a listener (direct signal, early reflections, and reverberant tail).

The third stage is an entropy coder, which removes further redundancy from the residual signal e[n], and again in this process no information is lost. Most coding schemes use one of these three algorithms:

• Huffman, run-length, and Rice coding (see Ref. 13 for more details)
• Meridian Lossless Packing (MLP) for DVD-A
• Direct Stream Transfer for SACD
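Of the entropy coders listed, Rice coding is simple enough to sketch. A residual is zigzag-mapped to a nonnegative integer and split by 2^k into a unary-coded quotient and k plain remainder bits; the fixed k below is an illustrative choice, whereas practical coders adapt k per frame:

```python
# Minimal Rice coder for prediction residuals (illustrative sketch).
def rice_encode(values, k):
    out = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1        # zigzag map to nonnegative
        q, r = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(r, f"0{k}b"))  # unary q, then k bits
    return "".join(out)

residuals = [0, 3, 4, 5, 2, 1, -2, -3]
code = rice_encode(residuals, k=2)
print(code, len(code), "bits vs", 16 * len(residuals), "for raw 16-bit PCM")
```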


FIGURE 20.4.12 The basic operations in most lossless compression algorithms.

Watermarking. The advantages of digital processing and distribution of multimedia, such as noise-free transmission and the possibility of digital signal processing on these media, are obvious. The disadvantage, from the viewpoint of media producers and content providers, can be the possibility of unlimited copying of digital data without loss of quality. Digital copy protection is one way to overcome these problems. Another method is the embedding of digital watermarks into the multimedia.14 The watermark is an unremovable digital code, robustly and imperceptibly embedded in the host data, that typically contains information about the origin, status, and/or destination of the data. While copyright protection is the most prominent application of watermarking techniques, others exist, including data authentication by means of fragile watermarks that are impaired or destroyed by manipulations, embedded transmission of value-added services, and embedded data labeling for purposes other than copyright protection, such as monitoring and tracking.
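As a deliberately naive illustration of embedding an imperceptible code in host data, the sketch below hides a payload in the least significant bits of PCM samples. This is only the embedding idea: an LSB mark would not survive coding or processing, and the robust schemes surveyed in Ref. 14 use, for example, spread-spectrum or quantization-index methods instead.

```python
import numpy as np

# Fragile LSB embedding -- a teaching example, not a robust watermark.
def embed(samples, payload_bits):
    out = samples.copy()
    out[: len(payload_bits)] &= ~1                                  # clear LSBs
    out[: len(payload_bits)] |= np.asarray(payload_bits, np.int16)  # write payload
    return out

def extract(samples, n):
    return samples[:n] & 1

audio = (np.random.default_rng(0)
         .integers(-2**15, 2**15, 1_000).astype(np.int16))
mark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(audio, mark)
assert list(extract(marked, len(mark))) == mark   # payload recovered
print(np.max(np.abs(marked - audio)))             # distortion <= 1 LSB
```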

Multimedia Content Analysis

Multimedia content analysis refers to the computerized understanding of the semantic meaning of multimedia documents, such as a video sequence with an accompanying audio track. There are many features that can be used to characterize audio signals. Usually audio features are extracted at two levels: the short-term frame level and the long-term clip level, where a frame is about 10 to 40 ms. To reveal the semantic meaning of an audio signal, analysis over a much longer period is necessary, usually from 1 to 10 s.15

Special Effects. If a single variably delayed echo signal (τ > 40 ms) is added to the direct signal at a low sweep rate (< 1 Hz), a sweeping comb-filter sound effect called flanging is produced (a minimal delay-line sketch follows below). When multiple channels of lesser delay (e.g., 10 to 25 ms) are used, a "chorus" effect is obtained from a single input voice or tone.

Time-Scale Modification. Minor adjustment of the duration of prerecorded programs to fit available program time can be accomplished digitally by loading a random-access memory with a sampled digital input signal and then outputting the signal with waveform sections of the memory repeated or skipped as needed under computer control, in order to approximate a continuous output signal of different duration. A related need, to change the fundamental (pitch) frequency of recorded speech or music without changing duration, involves the use of different input and output clock frequencies, along with repeating or skipping waveform segments as needed to retain constant duration.
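A minimal delay-line sketch of the flanging effect described under Special Effects above: the input is summed with a copy of itself whose delay is slowly swept, producing a sweeping comb filter. The sweep depth, rate, and mix below are illustrative choices, not values from the text:

```python
import numpy as np

# Flanger: direct signal plus a slowly swept delayed copy of itself.
fs = 44_100
t = np.arange(2 * fs) / fs                               # 2 s of audio
x = np.random.default_rng(1).standard_normal(t.size)     # noise shows the comb well

max_delay = int(0.005 * fs)                              # sweep up to 5 ms of delay
lfo = 0.3                                                # sweep frequency, Hz (< 1 Hz)
delay = (0.5 - 0.5 * np.cos(2 * np.pi * lfo * t)) * max_delay

y = np.copy(x)
for n in range(max_delay, t.size):
    d = int(delay[n])
    y[n] = x[n] + 0.7 * x[n - d]                         # direct + delayed signal
```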

FIGURE 20.4.13 General structure for prediction.


Other digital audio components that offer advantages, or indeed are essential, once the system goes digital, include filters, equalizers, level controllers, background-noise reducers, mixers, and editors. The compact disc, developed especially for audio uses, provides a multimegabyte storage technique, which is very attractive in many other applications for read-only memories. Conversely, in the evolution of telecommunication networks, new techniques for signal decomposition and reconstruction, and for echo cancellation, suggest further audio improvements in conference pickup and transmission, for example. The interchange between digital audio and other branches of digital communication continues.

MPEG Audio Coding

General. The Moving Picture Experts Group (MPEG) is well known for its development of a series of standards for the coding of audiovisual content [http://www.cselt.it/mpeg/]. Initially targeted at the storage of audiovisual content on compact disc media, the MPEG-1 standard was finalized in 1992 and included, in its audio part, the first generic standard for low-bit-rate audio. The MPEG-2 standard was completed next and extended MPEG-1 technology toward the needs of digital video broadcast. On the audio side, these extensions enabled coder operation at lower sampling rates (for multimedia applications) and coding of multichannel audio. In 1997 the standard for an enhanced multichannel coding system (MPEG-2 Advanced Audio Coding, AAC) was defined. The so-called MP3 is the popular name for MPEG-1 Layer III. The MPEG-4 standard was then developed, with new functionalities such as object-based representation, content-based interactivity, and scalability; it was developed in several steps (called versions), adding extensions to the basic audio technology. Reference 16 describes in some detail the key technologies and main features of MPEG-1 and MPEG-2 audio coders.

In 1996 the effort behind MPEG-7 was started. MPEG-7 defines a universal standardized mechanism for exchanging descriptive data that are able to characterize many aspects of multimedia content with worldwide interoperability,17 or, as the official name says, a "multimedia content description interface." Work on the new standard MPEG-21, "Multimedia Framework," was started in June 2000. The vision for MPEG-21 is to define a multimedia framework to enable transparent and augmented use of multimedia resources across a wide range of networks and devices used by different communities.

REFERENCES

1. Gravereaux, D. W., A. J. Gust, and B. B. Bauer, J. Audio Eng. Soc., 1970, Vol. 18, p. 530.
2. Bernhard, R., IEEE Spectrum, December 1979, p. 28.
3. Rumsey, F., and J. Watkinson, "The Digital Interface Handbook," Focal Press, 1995.
4. Bray, J., and C. F. Sturman, "Bluetooth: Connect Without Cables," Prentice Hall PTR, 2001.
5. Lokhoff, G. C. P., "dcc—Digital Compact Cassette," IEEE Trans. Consumer Electron., 1991, Vol. 37, p. 702.
6. Watkinson, J., "RDAT," Focal Press, 1991.
7. Kahrs, M., and K. Brandenburg, eds., "Applications of Digital Signal Processing to Audio and Acoustics," Kluwer Academic Publishers, 1998.
8. Chen, T., ed., "The Past, Present, and Future of Audio Signal Processing," IEEE Signal Process. Magazine, September 1997, Vol. 14, No. 5, pp. 30–57.
9. Wannamaker, R., S. Lipshitz, J. Vanderkooy, and J. N. Wright, "A Theory of Nonsubtractive Dither," IEEE Trans. Signal Process., February 2000, Vol. 48, No. 2, pp. 499–516.
10. Norsworthy, S. R., R. Schreier, and G. C. Temes, eds., "Delta-Sigma Data Converters: Theory, Design, and Simulation," IEEE Press, 1996.
11. Larsen, E., and R. M. Aarts, "Reproducing Low Pitched Signals Through Small Loudspeakers," J. Audio Eng. Soc., March 2002, Vol. 50, No. 3, pp. 147–164.


12. Aarts, R. M., "Phantom Sources Applied to Stereo-Base Widening," J. Audio Eng. Soc., March 2000, Vol. 48, No. 3, pp. 181–189.
13. Hans, M., and R. W. Schafer, "Lossless Compression of Digital Audio," IEEE Signal Process. Magazine, July 2001, Vol. 18, No. 4, pp. 21–32.
14. Special Issue on Identification and Protection of Multimedia Information, Proc. IEEE, 1999, Vol. 87, No. 7, pp. 1059–1276.
15. Wannamaker, R., S. Lipshitz, J. Vanderkooy, and J. N. Wright, "A Theory of Nonsubtractive Dither," IEEE Trans. Signal Process., February 2000, Vol. 48, No. 2, pp. 499–516.
16. Noll, P., "MPEG Digital Audio Coding," IEEE Signal Process. Magazine, September 1997, Vol. 14, No. 5, pp. 59–81.
17. Lindsay, A. T., and J. Herre, "MPEG-7 and MPEG-7 Audio—An Overview," J. Audio Eng. Soc., July/August 2001, Vol. 49, Nos. 7/8, pp. 589–594.


SECTION 21

VIDEO AND FACSIMILE SYSTEMS

Although much of this section describes basic video and facsimile technologies that have not changed over the years, newer material is also included. For example, international agreement was reached recently on the use of 1920 × 1080 as a common image format for high-definition (HD) production and program exchange. The 1920 × 1080 format has its roots in the CCIR (International Radio Consultative Committee) sampling standard and brings international compatibility to a new level.

Set-top boxes and high-definition or digital-ready TV sets will be the mechanism that brings digital technology to the consumer for the next several years as the transition from analog to digital takes place. In the United States, three modulation techniques have become "standards," each in a particular application: vestigial sideband (VSB) for terrestrial, quadrature amplitude modulation (QAM) for cable, and quaternary phase-shift keying (QPSK) for direct-to-home satellite.

With Internet facsimile, store-and-forward facsimile occurs when the sending and receiving terminals are not in direct communication with one another. The transmission and reception take place via the store-and-forward mode on the Internet using Internet e-mail. In this mode, the facsimile protocol "stops" at the gateway to the Internet. It is reestablished at the gateway leaving the Internet. Real-time facsimile is covered by Recommendation T.38, approved by the International Telecommunication Union, Telecommunication Standardization Sector (ITU-T), in 2002.

R.J.

In This Section:

CHAPTER 21.1 TELEVISION FUNDAMENTALS AND STANDARDS   21.3
    INTRODUCTION   21.3
    PHOTOMETRY, COLORIMETRY, AND THE HUMAN VISUAL SYSTEM   21.6
    PICTURE SIZE, SCANNING PROCESS, AND RESOLUTION LIMITS   21.16
    ANALOG VIDEO SIGNALS   21.23
    THE AUDIO SIGNALS   21.34
    R.F. TRANSMISSION SIGNALS   21.34
    DIGITAL VIDEO SIGNALS   21.37
    COMPONENT DIGITAL VIDEO SIGNALS   21.39
    HIGH-DEFINITION TELEVISION (HDTV)   21.39
    BIBLIOGRAPHY   21.52

CHAPTER 21.2 VIDEO SIGNAL SYNCHRONIZING SYSTEMS   21.55
    TYPES OF SYNCHRONIZATION DEVICES   21.55
    MULTI-STANDARD SLAVE SYNCHRONIZING GENERATOR   21.56
    MULTI-STANDARD MASTER SYNCHRONIZING GENERATOR   21.57
    VIDEO FRAME SYNCHRONIZER   21.58
    MEMORY   21.59
    FRAME SYNCHRONIZERS WITH DIGITAL I/O   21.59
    BIBLIOGRAPHY   21.60


CHAPTER 21.3 DIGITAL VIDEO RECORDING SYSTEMS   21.61
    INTRODUCTION TO MAGNETIC RECORDING   21.61
    PROFESSIONAL DIGITAL VIDEO RECORDING   21.67
    D-VHS   21.68
    DV RECORDING SYSTEM   21.70
    CD-I AND VIDEO CD   21.85
    BRIEF DESCRIPTION OF MPEG VIDEO CODING STANDARD   21.92
    THE DVD SYSTEM   21.94
    NETWORKED AND COMPUTER-BASED RECORDING SYSTEMS   21.101
    REFERENCES   21.106
    FURTHER READING   21.107

CHAPTER 21.4 TELEVISION BROADCAST RECEIVERS   21.108
    GENERAL CONSIDERATIONS   21.108
    RECEIVERS FOR DIGITAL TELEVISION TRANSMISSIONS   21.110
    QAM DIGITAL MODULATION   21.113
    QPSK QUADRATURE-PHASE-SHIFT KEYING   21.117
    ORTHOGONAL FREQUENCY DIVISION MULTIPLEX   21.117
    VESTIGIAL SIDE-BAND MODULATION   21.118
    SOURCE DECODING   21.124
    DISPLAYS   21.133
    LARGE-SCREEN PROJECTION SYSTEMS   21.134
    REFERENCES   21.137

CHAPTER 21.5 FACSIMILE SYSTEMS   21.139
    INTRODUCTION   21.139
    GROUP 3 FACSIMILE STANDARDS   21.139
    RESOLUTION AND PICTURE ELEMENT (PEL) DENSITY   21.140
    PROTOCOL   21.141
    DIGITAL IMAGE COMPRESSION   21.142
    MODULATION AND DEMODULATION METHODS (MODEMS)   21.146
    INTERNET FACSIMILE   21.147
    COLOR FACSIMILE   21.149
    SECURE FACSIMILE   21.151
    REFERENCES   21.152

On the CD-ROM: The following is reproduced from the 4th edition of this handbook: “Television Cameras,” by Laurence J. Thorpe.



CHAPTER 21.1

TELEVISION FUNDAMENTALS AND STANDARDS

James J. Gibson, Glenn Reitmeier

INTRODUCTION

This chapter summarizes analog and digital television signal standards and the principles on which they are based. The technical standards for color television developed in 1953 for the United States by the National Television System Committee (NTSC) are described on a few pages in the rules of the Federal Communications Commission (FCC Rule Part 73). The rules specify only the radiated signal, in sufficient detail for a receiver manufacturer to produce receivers that convert this signal into a television picture with sound. This traditional approach to formulating standards leaves implementation to competitive forces. Since 1953 many international standards and recommended practices have evolved. A similar philosophy was used in the FCC's adoption of the Advanced Television System Committee (ATSC) digital television standards in 1996.

All color television standards are based on the same principles:

• The psychophysics of the human visual system (HVS).
• Picture-signal conversion by sampling/display, at field rate, of three primary colors in a flat rectangular dynamic picture on a raster of horizontal scan lines, scanned from left to right and top to bottom.
• The signals are conveyed as three components: one luminance signal, which essentially provides brightness information, and two chrominance signals, which essentially provide hue and color saturation information.
• For radio frequency transmission these three signals and audio signals are multiplexed to form a single r.f. signal, which occupies a channel in the frequency spectrum.

Some of these principles are illustrated in Fig. 21.1.1, which shows a block diagram of a standard analog television system for terrestrial broadcasting. The figure shows that the video and audio signals are multiplexed separately to form composite video and audio signals, which are subsequently delivered to separate picture and sound transmitters generating signals that are diplexed to form a radiated signal occupying a 6-, 7-, or 8-MHz band in the radio frequency spectrum from 40 to 900 MHz. This is the usual practice in broadcasting of analog television signals in the NTSC, PAL (Phase Alternation Line), and SECAM (Séquentiel couleur à mémoire) systems. These systems, which are compatible with black and white reception, use frequency division multiplex: the chrominance signals are bandlimited and modulate one or two subcarriers that are "inconspicuously" added to (multiplexed with) the luminance signal.

Besides analog television terrestrial broadcast standards, there are many standards for analog television production, storage, and distribution (terrestrial, satellite, and cable) developed by several organizations. In analog television there are two picture scanning standards, specified as N/Fv, where N = total number of scanning lines and Fv = number of picture fields per second. These standards are 625/50 and 525/60 (including


FIGURE 21.1.1 Functional block diagram of a standard analog television broadcast system. Some systems may multiplex sound and picture before delivery to a transmitter.

525/59.94). There are three basic composite color video signal standards that carry all the color information in one signal: NTSC, PAL, and SECAM. In addition there are nine standards (with variations) describing the radiated signal carrying the composite color video signals, with various types of r.f. modulation and bandwidths and with various standards for audio signals and audio signal modulation. These nine standards are referred to as B, G, H, I, K, K1, L, M, and N. Only system M is of type 525/60.

In digital television (DTV), the three component signals and audio are sampled, digitized, and data compressed to eliminate redundant and psychophysically irrelevant data. Digital signals use the available spectrum more effectively than analog signals. Digital television standards have been developed with consideration for flexibility, extensibility (for future applications), and interoperability with other systems for information production and distribution (e.g., computers). In 1982 the Radio Consultative Committee of the International Telecommunications Union (ITU-R), formerly called the International Radio Consultative Committee (CCIR), adopted an international digital television component standard, ITU-R Recommendation 601. In this chapter this important standard is referred to by its popular old designation: CCIR601. This standard was primarily intended for production and for tape recording (SMPTE format D-1), but is now used in many applications, including DTV transmission.

In digital television (DTV), including high-definition television (HDTV), digital techniques are used for video compression, data transport, multiplexing, and r.f. transmission. DTV promises to be more than television, in the sense that it delivers to homes a digital channel with a high data rate, which may carry more than high-quality pictures and sound. Of particular importance are the standards developed by the Moving Picture Experts


FIGURE 21.1.2 Functional block diagram of a standard television broadcast system. Different modulation techniques and signal bandwidths are used for different transmission media.

Group (MPEG) of ISO/IEC. MPEG standards are quite flexible, but include specific approaches to television data compression and packetization. MPEG standards have been adopted by the International Standards Organization (ISO) and the International Electrotechnical Commission (IEC). Based on the MPEG-2 format, the FCC Advisory Committee on Advanced Television Service (ACATS) provided oversight for the development


of a "Grand Alliance," a system for high-definition digital television (HDTV) for North America. The Grand Alliance itself comprised AT&T, General Instrument, Massachusetts Institute of Technology, Philips, Sarnoff, Thomson, and Zenith, which joined forces to forge a single best-of-the-best system from four competing system proposals. The Grand Alliance system is the basis of the ATSC digital television standard and the FCC digital broadcast standards for the United States adopted in 1996.

Figure 21.1.2 shows the basic functional blocks of the "Grand Alliance" HDTV system. The basic principles listed above for analog systems still apply to this digital system. The video and audio components are basically the same as in the analog system shown in Fig. 21.1.1, but source coding (removal of irrelevant and redundant data), ancillary signals, multiplexing, transport, packetization, channel coding (for error management), and modems are entirely different.

In both analog and digital systems, the relation between the picture, as observed by a viewer, and the signals is not a simple one. Signal quality measures are not easily related to the subjective quality of pictures. In fact, there are very few objective measures of picture quality, while there are many objective measures of signal quality and signal tolerances. The selection of television standards, including recent HDTV standards, is based on subjective picture quality evaluations. Relations between signal quality and picture quality in NTSC, PAL, and SECAM are fairly well known. Numerous measurement and monitoring techniques using test signals have been developed for these signals. Relations between objective measures of signal quality and picture and sound quality are not as well correlated in digital television and HDTV.

Ongoing work on digital TV standardization is carried out by numerous committees. ATSC continues to develop its terrestrial transmission standard. The Society of Cable Telecommunications Engineers (SCTE) develops standards for cable transmission. The Digital Video Broadcast (DVB) group also continues to develop its standards for terrestrial (DVB-T), cable (DVB-C), and satellite (DVB-S) transmission. The Society of Motion Picture and Television Engineers (SMPTE) is involved in standards for related professional production equipment. The Consumer Electronics Association (CEA) establishes industry standards for consumer equipment.

PHOTOMETRY, COLORIMETRY, AND THE HUMAN VISUAL SYSTEM

Radiance and Luminance

The HVS is sensitive to radiation over a 2:1 bandwidth of wavelengths extending from 380 to 760 nm, i.e., from extreme blue to extreme red. When adapted to daylight it is most sensitive to green light at 555 nm. As the wavelength of monochromatic light departs from 555 nm, the radiance (radiation from a surface element in a given direction, defined by the angle Θ from the normal, and measured in watts/steradian per unit projected area = the actual surface of the radiating element times cos Θ) must be increased for constant perception of brightness. The International Commission on Illumination (CIE) has standardized the response versus wavelength of the HVS, ȳ(λ), for a standard observer adapted to daylight vision (photopic vision). Figure 21.1.3 shows ȳ(λ) versus λ. Luminance, sometimes referred to as brightness and measured in cd/m² (candelas per projected area in m²), is defined as

$$Y = 680 \int_0^\infty E(\lambda)\,\bar{y}(\lambda)\,d\lambda \qquad \text{cd/m}^2 \tag{1}$$

where λ is the wavelength in nanometers and E(λ) is the spectral density of radiance in (W/nm)/m². Older units for luminance are the footlambert (ft·L) = 3.42626 cd/m² and the millilambert (mL) = 3.18310 cd/m².

A surface is diffuse at a wavelength λ if E(λ) is independent of the direction Θ. The face plate of a TV tube is essentially diffuse for all wavelengths of visible light, but this may not be the case for a projection screen. Thus the luminance of the faceplate of a picture tube is roughly independent of the direction from which it is seen. Typically the peak luminance level of picture tubes is about 120 cd/m², but bright displays may have luminance levels exceeding 250 cd/m². The luminance of bright outdoor scenes may well exceed 10,000 cd/m²,


FIGURE 21.1.3 Tristimulus values of the CIE nonphysical primaries, x̄(λ), ȳ(λ), and z̄(λ). Note that ȳ(λ) is the CIE standard luminosity function.

while a motion picture screen may be 30 cd/m². Equal energy radiation, E(λ) = constant, appears to have the color "white." Corresponding radiometric and photometric units are:

Radiant flux         Watt                     Luminous flux        Lumens
Irradiance           Watt/m²                  Illuminance          Lux = lumens/m²
Radiant intensity    Watt/steradian           Luminous intensity   Candela = lumens/steradian
Radiance             (Watt/steradian)/m²      Luminance            Candela/m²

At λ = 555 nm there are 680 lm/W. Without a color filter, the illuminance I of a target in a camera or of the retina of an eye is proportional to the luminance Y of the scene (theoretically I ≈ πY/[4(f-number of the lens)²]).

Contrast and Resolution

The HVS is sensitive to relative changes in luminance levels. The just noticeable difference (JND) between the luminance Y + ΔY of a small area and the luminance Y of a surrounding large area can be expressed by the ratio

F = ΔY/Y    (2)

where F is the Fechner ratio. The Fechner ratio is remarkably constant over the large range of the high luminance levels used in TV displays. It ranges from 1 percent for Y > 10 cd/m² up to 2 percent at Y = 1 cd/m². Assuming a constant Fechner ratio, the number of distinguishable small-area gray levels between a "black" level Y_b and a highlight "white" level Y_w is

$$n \approx \frac{\ln(Y_w/Y_b)}{F} \qquad \text{or} \qquad Y_w/Y_b \approx e^{nF} \approx (1 + F)^n = \text{contrast} \tag{3}$$

For example, for n = 255 levels (8-bit quantization of log luminance) and F = 1.5 percent, the contrast is 45. With 9 bits (511 levels @ 1 percent) the contrast is 165. Contrast rarely exceeds 50:1 in TV displays owing to ambient light and light scattered within the picture tube. Thus, due to the contrast available in consumer displays, DTV systems use 8-bit luminance (and chrominance) signals.
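The two worked figures can be checked directly from Eq. (3); both forms of the expression agree to within a few percent:

```python
from math import exp

# Checking the worked examples after Eq. (3).
for n, F in ((255, 0.015), (511, 0.01)):
    print(n, F, round(exp(n * F)), round((1 + F) ** n))
# -> 255 0.015 46 45   (text: contrast ~45)
# -> 511 0.01  166 162 (text: contrast ~165)
```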


More relevant test patterns for measuring the contrast sensitivity of the HVS are luminance sinewave gratings added to a uniform background luminance level. The contrast ratio of a sinewave grating, also referred to as modulation, is defined as

$$m = \frac{Y_{max} - Y_{min}}{Y_{max} + Y_{min}} = \frac{\text{luminance amplitude}}{\text{average luminance}} \tag{4}$$

The just perceptible modulation (JPM) of a stationary sinewave grating depends sensitively on a large number of factors: the spatial frequency ν of the grating expressed in cycles per degree (cpd), the average luminance level, the orientation of the wavefront, the adaptation of the pupil of the eye, and ambient factors. If the line from the eye to the observed point is perpendicular to the picture, ν cycles per degree can be converted into k cycles per picture height (cph) by multiplication with 57.3/D, where D is the viewing distance in picture heights. The most sensitive m is 0.3 percent at about 3.5 cpd = 200/D cph and increases to 100 percent at about 35 cpd = 2000/D cph (500 cph @ D = 4). This upper limit can be taken as an estimate of the resolution limit of the HVS. TV systems have been designed with HVS characteristics in mind. Analog and standard definition (SD) digital systems assume D = 6; therefore, the HVS resolution limit is approximately 333 cph. HDTV systems assume a larger display or closer viewing, D = 2; therefore, the HVS resolution limit is approximately 1000 cph.

The ratio of the perceived to the displayed modulation is the modulation transfer function (MTF) of the HVS. It rolls off with spatial frequency depending on a number of parameters, including the adaptation of the pupil of the eye. It is down about 6 dB at 750/D cph for a luminance of 140 cd/m². One performance measure of a TV system is its MTF = ratio of displayed m(k) to input m(k) versus spatial frequency for various orientations of the wavefront of the grating. The eye has high resolution only within a subtended angle of about 2°. For D = 6 this is 2.5 percent of a TV display area. The remaining area is viewed with low-resolution rod vision, which is sensitive to flicker.

Gamma "Pre-Correction"

Since the HVS is sensitive to contrast, it can tolerate more noise and larger luminance errors in bright areas than in dark areas. Since noise and errors are added to the luminance signal during transmission, the visibility of the added noise is reduced if the luminance signal is compressed at the source and expanded in the receiver. A constant Fechner ratio suggests logarithmic compression and exponential expansion of the luminance, a proposal that was recommended by the NTSC in 1941 and is still allowed in the FCC rules for black and white TV. In the early days of black and white TV it was found, however, that the picture tube itself acted as a good and inexpensive expander. The displayed luminance Y on the face plate of a picture tube in response to an applied signal voltage V is approximately equal to

Y = const. × V^γ    (5)

where γ (gamma) ranges from 2 to 3. Assuming that the output signal from a TV camera is proportional to the illuminance of the target (a fair assumption for modern cameras), the luminance signal Y delivered by such a linear camera is compressed right at the output of the camera to yield a gamma-"corrected" luminance signal, which, in black and white TV, is approximately proportional to Y^(1/γ). In DTV systems the same practice continues to be employed, both for reducing the visibility of compression-related artifacts and for maintaining compatibility with legacy analog systems, which is an economic consideration in dual digital/analog receivers. Table 21.1.1 shows standards for gamma and modifications of Eq. (5) in recent standards (a numeric sketch of the two-segment characteristic follows the table).

Flicker

The human visual system is quite sensitive to large-area luminance fluctuations at frequencies below 100 Hz (flicker). The critical flicker frequency ff Hz (threshold of perception) is determined by on-off modulation of the luminance of a large area. The critical frequency increases, according to the "Ferry-Porter law," in proportion to the logarithm of the highlight luminance Y in cd/m²:

ff = 30 + 12.5 log10(Y)    Hz    (6)


TABLE 21.1.1 Standards for Electro-Optical Transfer Characteristic: V = F(L), L = F⁻¹(V)
Example: R′ = F(R) in "gamma corrector," R = F⁻¹(R′) in display

SYSTEM                               γ (gamma)   V₀ (bias)   k (slope)   L*       V*
NTSC                                 2.2         0           0           0        0
PAL/SECAM                            2.8         0           0           0        0
ITU-R 709, SMPTE170M, SMPTE274M      1/0.45      0.099       4.5         0.018    0.081
SMPTE 240M                           1/0.45      0.1115      4           0.0228   0.0912

V = normalized electric signal level and L = normalized optical stimulus level (at maximum level V = L = 1 by definition):

V = F(L):    V = kL for 0 ≤ L < L*;    V = (1 + V₀)L^(1/γ) − V₀ for L* ≤ L ≤ 1
L = F⁻¹(V):  L = V/k for 0 ≤ V < V* = kL*;    L = [(V + V₀)/(1 + V₀)]^γ for V* ≤ V ≤ 1
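The two-segment characteristic in the note above is easy to exercise numerically. The sketch below uses the ITU-R 709 constants from Table 21.1.1 (γ = 1/0.45, V₀ = 0.099, k = 4.5, L* = 0.018) and confirms that F and F⁻¹ round-trip and join continuously at L*:

```python
# Two-segment transfer characteristic of Table 21.1.1, ITU-R 709 constants.
GAMMA, V0, K, L_STAR = 1 / 0.45, 0.099, 4.5, 0.018

def F(L):                        # encoder ("gamma corrector"): V = F(L)
    return K * L if L < L_STAR else (1 + V0) * L ** (1 / GAMMA) - V0

def F_inv(V):                    # display: L = F_inv(V)
    V_STAR = K * L_STAR          # = 0.081
    return V / K if V < V_STAR else ((V + V0) / (1 + V0)) ** GAMMA

for L in (0.0, 0.01, 0.018, 0.5, 1.0):
    V = F(L)
    print(f"L={L:5.3f}  V={V:5.3f}  round-trip L={F_inv(V):5.3f}")
```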

The result varies with individuals and with the adaptation of the eye. It shows that ff increases by about 12.5 Hz when Y increases by a factor of 10. For example, ff is 48, 50, and 60 Hz for a peak luminance Y = 25.4, 36.6, and 227.6 cd/m2. The HVS is more sensitive to flicker for light entering the eye at an angle (rod vision). In motion pictures the picture frame rate is 24 pictures per second. To avoid flicker at acceptable luminance levels, every picture is shown at least twice, when the film is projected (this is referred to as double-shuttering). In television the frame rate is 25 Hz in most of the world and 30 Hz in the United States, Japan, and some other countries. In all analog TV systems, large area flicker rates are doubled by using two fields with interlaced scan lines to complete a frame. DTV systems decouple the display frame rate and the transmitted frame rate. DTV systems generally support the legacy analog frame rates and interlaced scanning formats, and additionally provide new capabilities for progressive scan formats and film-related frame rates.

Color Space, Color Primaries, and CIE Chromaticities

It is a fact that a wide range of colors can be reproduced with only three sources of light, e.g., red, green, and blue. The basic laws of colorimetry, the science devoted to color vision, are:

• The HVS can only perceive three attributes of color: brightness, hue, and saturation, more precisely defined as luminance, dominant wavelength, and purity.

• Colors can be represented as vectors in a three-dimensional linear vector space referred to as color space. Colors add and subtract as vectors, but a distance in color space is not a measure of perceived difference in colors.

In color space the luminance component Y ≥ 0 traditionally points up from a horizontal chrominance plane Y = 0. Figure 21.1.4 shows a color vector A = YW + C as the sum of a white vector YW, with the same luminance (brightness) Y as A, and a chrominance vector C in a constant luminance plane, with an orientation related to hue (tint) and a magnitude related to saturation (color).

Based on experiments with hundreds of observers matching colors composed of monochromatic light of different wavelengths, the CIE standardized X- and Z-axes at right angles in the chrominance plane, such that all color vectors have nonnegative components X, Y, Z, and that X = Y = Z for equal energy light. Thus, basis vectors X, Y, Z with unit length along the XYZ-axes are artificial or nonphysical primary vectors. The X, Y, Z components of monochromatic colors all having the same radiance are x̄(λ), ȳ(λ), and z̄(λ) (Fig. 21.1.3). Given a color with a radiance having a spectral power density E(λ) (radiance per nm), the relative X, Y, Z components are

$$X = \int_0^\infty E(\lambda)\,\bar{x}(\lambda)\,d\lambda \qquad Y = \int_0^\infty E(\lambda)\,\bar{y}(\lambda)\,d\lambda \qquad Z = \int_0^\infty E(\lambda)\,\bar{z}(\lambda)\,d\lambda \tag{7}$$


FIGURE 21.1.4 Representation of a color as a vector A with three attributes luminance (brightness), hue (tint), and saturation. A has a vertical luminance component Y. A is the sum of a white vector YW, which has the same luminance as A, and a chrominance vector C, which is in the constant luminance plane.

Since X = Y = Z for equal energy white (E = const.), the areas under the curves x̄(λ), ȳ(λ), and z̄(λ) are the same. Clearly lights with different spectral content E(λ) can have the same color. The direction of a color vector is specified by the point [x, y, z] where the vector or its extension penetrates the unit plane X + Y + Z = 1:

[x, y, z] = [X, Y, Z]/(X + Y + Z)    (8)

thus x + y + z = 1 and Y/y = X/x = Z/z = X + Y + Z = "gain." The coordinates x, y, z are referred to as the CIE chromaticities of the color. Clearly z = 1 − x − y is redundant. It is common and practical to specify a color vector A by chromaticities x and y and luminance Y as

A = [X, Y, Z] = (Y/y)[x, y, 1 − x − y]    (9)

Figure 21.1.5 shows the CIE color space and the CIE chromaticity plane penetrated by color vectors or their extensions. Figure 21.1.6 shows the horseshoe-shaped locus of the xy-chromaticities of monochromatic light, [x(λ), y(λ)] = [x̄(λ), ȳ(λ)]/[x̄(λ) + ȳ(λ) + z̄(λ)]. All realizable chromaticities are within the horseshoe. The magenta colors along the straight line from extreme blue to extreme red are not monochromatic, but are obtained as red-blue mixtures. Areas in the CIE chromaticity chart of just noticeable differences in chromaticities (JNDs) are shaped like ellipses, which are larger in the green area than in the red area, which in turn are larger than in the blue area. The number of chromaticities (color vector directions) required for perfect color reproduction has been estimated to range from 8000 (13 bits) to 130,000 (17 bits). The number of chromaticity JNDs within the color gamuts of current display devices has been estimated to be about 5000.

In television standards all color vectors are defined in CIE color space. The CIE components X, Y, Z are not practical. Television standards specify realizable red, green, and blue primary colors R, G, B, which are not in the same plane. In terms of these primaries a color is represented as a vector

A = [X, Y, Z] = RR + GG + BB = YW + C    (10)


FIGURE 21.1.5 CIE color space with CIE chromaticity plane X + Y + Z = 1. The CIE chromaticities of a color vector are the coordinates x, y, and z = 1 − x − y, where the vector or its extension penetrates the CIE chromaticity plane. These points are shown for a set of R, G, B basis vectors and the associated white vector W, which by definition has a luminance of unity. X, Y, Z are unit vectors along the X, Y, Z axes.

where R, G, B are tristimulus values = quantity of each primary color in the mixture. The luminance Y of A multiplies a "reference white vector" W = R + G + B, which, by definition, has a luminance component normalized to unity (Yw = 1). Thus the chrominance vector C is in a constant luminance plane (Yc = 0). The luminance components of R, G, and B are Yr, Yg, and Yb, respectively. Consequently:

Y = YrR + YgG + YbB = the luminance component of A    (11a)

1 = Yr + Yg + Yb = normalized luminance component of the white vector W    (11b)

W = R + G + B = white vector with luminance component Yw = 1    (11c)

C = M(R − Y) + N(B − Y) = chrominance vector with luminance component Yc = 0    (12a)

M = R − (Yr/Yg)G = the R − Y basis vector in the chrominance plane    (12b)

N = B − (Yb/Yg)G = the B − Y basis vector in the chrominance plane    (12c)


FIGURE 21.1.6 The xy-chromaticity diagram of the CIE system. Also shown are television standard rgb chromaticities and white illuminants. *The EBU x-coordinate for green is 0.29.

A white vector implies C = 0 and R = G = B = Y. Equation (12) is derived from Eqs. (10) and (11). Figure 21.1.5 shows vectors R, G, B, and W. Figure 21.1.7 shows the vectors M and N. Given the X, Y, Z components of the primaries R, G, B, the R, G, B tristimuli can be related to the CIE tristimuli X, Y, Z by a matrix P as

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = P \begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} X_r & X_g & X_b \\ Y_r & Y_g & Y_b \\ Z_r & Z_g & Z_b \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{13}$$

RGB components of two different basis systems (e.g., NTSC and PAL), specified by matrices P1 and P2, are related through the CIE tristimuli: the components in system 2 are obtained from those in system 1 by the matrix P2⁻¹P1, where P⁻¹ is the inverse of P. In television standards the R, G, B primaries


FIGURE 21.1.7 The rgb chromaticity plane goes through the tips of the basis vectors R, G, and B. It intersects the chrominance plane along the alychne. Also shown are the CIE basis vectors X, Y, and Z and the basis vectors used in television Y, M, and N. The chrominance values defined by the chrominance basis vectors M and N are the “color difference” values M = R − Y and N = B − Y.

are specified by their CIE chromaticities and by the chromaticity of W (illuminant). The luminance values of the primaries Yr, Yg, Yb can be derived from these eight standardized chromaticities by inverting the matrix P for the white vector R = G = B = Yw = 1 and noting that X/Y = x/y and Z/Y = z/y and that Yr + Yg + Yb = 1:

$$\begin{bmatrix} X_w \\ Y_w = 1 \\ Z_w \end{bmatrix} = \begin{bmatrix} x_w/y_w \\ 1 \\ z_w/y_w \end{bmatrix} = \begin{bmatrix} x_r/y_r & x_g/y_g & x_b/y_b \\ 1 & 1 & 1 \\ z_r/y_r & z_g/y_g & z_b/y_b \end{bmatrix} \begin{bmatrix} Y_r \\ Y_g \\ Y_b \end{bmatrix} \tag{14}$$

Given the Y values of the primaries, the X and Z values can be determined by Eq. (9). Table 21.1.2 lists the xy-chromaticities and the luminance values of the primaries in standard TV systems. Figure 21.1.6 shows the NTSC, ITU-R709 (SMPTE274M), EBU, and the SMPTE170M (SMPTE240M) chromaticities in the CIE diagram. Television programs are produced with the assumption that display devices have (phosphors with) these chromaticities. The chromaticities of the primaries of a display device can be marked as corners of a triangle in Fig. 21.1.6. Since the tristimuli R, G, B are nonnegative (see Figs. 21.1.5 and 21.1.7), only colors with chromaticities within the triangle can be displayed.
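Equation (14) is a 3 × 3 linear system, so the luminance weights follow in a few lines. The sketch below uses the NTSC chromaticities of Table 21.1.2 (illuminant C) and recovers the familiar 0.299/0.587/0.114 weights:

```python
import numpy as np

# Solving Eq. (14) for the luminance weights of the NTSC primaries.
xy = {"r": (0.67, 0.33), "g": (0.21, 0.71), "b": (0.14, 0.08)}
xw, yw = 0.3101, 0.3162           # illuminant C white point
zw = 1 - xw - yw

A = np.array([[x / y for x, y in xy.values()],
              [1.0, 1.0, 1.0],
              [(1 - x - y) / y for x, y in xy.values()]])
w = np.array([xw / yw, 1.0, zw / yw])

Yr, Yg, Yb = np.linalg.solve(A, w)
print(Yr, Yg, Yb)                 # ~0.299, 0.587, 0.114
```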


TABLE 21.1.2 Chromaticities and Luminance Components in Standard Television Systems. White Illuminant (last row) is C in NTSC, Otherwise D65. Note that ITU-R709 is Almost the Same as EBU, and That NTSC Accommodates More Green Colors

           NTSC (FCC) System M         ITU-R709 & SMPTE274 / EBU* (PAL & SECAM)    SMPTE 170M & 240M
           x        y        Y         x              y        Y                   x        y        Y
Red        0.67     0.33     0.299     0.64           0.33     0.2125 (0.222*)     0.630    0.340    0.212
Green      0.21     0.71     0.587     0.30 (0.29*)   0.60     0.7154 (0.707*)     0.310    0.595    0.701
Blue       0.14     0.08     0.114     0.15           0.06     0.0721 (0.071*)     0.155    0.070    0.087
White      0.3101   0.3162   1         0.3127         0.3291   1                   0.3127   0.3291   1

*EBU (PAL & SECAM) value where it differs from ITU-R709.

In television the key components are R − Y, Y, and B − Y of the M, W, N primaries. They are related to the tristimuli R, G, B by the matrix H:

$$\begin{bmatrix} R-Y \\ Y \\ B-Y \end{bmatrix} = H \begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1-Y_r & -Y_g & -Y_b \\ Y_r & Y_g & Y_b \\ -Y_r & -Y_g & 1-Y_b \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}, \qquad \begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ -Y_r/Y_g & 1 & -Y_b/Y_g \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} R-Y \\ Y \\ B-Y \end{bmatrix} \tag{15}$$

Two colors are said to be complementary if some weighted mixture of them results in a white color W. Saturated complementary colors to the primaries R, G, B are cyan CY = G + B, magenta MA = B + R, and yellow YE = R + G. If a primary has components M, Y, N, the saturated complementary color has components −M, 1 − Y, −N. The RGB and MYN values of these basic colors in NTSC, ITU-R709 & SMPTE274M, and SMPTE240M are listed in Table 21.1.3. The EBU colors in PAL & SECAM differ only slightly (third decimal) from ITU-R709. These saturated colors can be seen on TV as color bars.

The rgb-Chromaticities and the Color Triangle

Figure 21.1.7 shows a plane connecting the tips of the primary vectors R, G, B. This plane is referred to as the rgb-chromaticity plane, not to be confused with the CIE xyz-chromaticity plane. A color vector A = RR + GG + BB = (R + G + B)(rR + gG + bB) has rgb-chromaticities [r, g, b] = [R, G, B]/(R + G + B). Clearly g = 1 − r − b is redundant. An rgb-chromaticity vector is conveniently represented as G + r(R − G) + b(B − G).

TABLE 21.1.3 Values of Y, M = R − Y, and N = B − Y for Saturated Colors

                     NTSC                        ITU-R709 & SMPTE 274M        SMPTE 240M & 170M
COLOR     RGB        Y       R−Y      B−Y        Y       R−Y      B−Y         Y       R−Y      B−Y
White     111        1        0        0         1        0        0          1        0        0
Yellow    110        0.886   +0.114   −0.886     0.928   +0.072   −0.928      0.913   +0.087   −0.913
Cyan      011        0.701   −0.701   +0.299     0.787   −0.787   +0.213      0.788   −0.788   +0.212
Green     010        0.587   −0.587   −0.587     0.715   −0.715   −0.715      0.701   −0.701   −0.701
Magenta   101        0.413   +0.587   +0.587     0.285   +0.715   +0.715      0.299   +0.701   +0.701
Red       100        0.299   +0.701   −0.299     0.213   +0.787   −0.213      0.212   +0.788   −0.212
Blue      001        0.114   −0.114   +0.886     0.072   −0.072   +0.928      0.087   −0.087   +0.913
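Table 21.1.3 itself is generated by Eq. (11a) applied to the eight saturated RGB triples. A sketch for the NTSC column (note the magenta value B − Y = +0.587):

```python
# Regenerating the NTSC column of Table 21.1.3 from the saturated-color
# RGB triples and the luminance weights Yr, Yg, Yb = 0.299, 0.587, 0.114.
YR, YG, YB = 0.299, 0.587, 0.114
bars = {"White": (1, 1, 1), "Yellow": (1, 1, 0), "Cyan": (0, 1, 1),
        "Green": (0, 1, 0), "Magenta": (1, 0, 1), "Red": (1, 0, 0),
        "Blue": (0, 0, 1)}

for name, (r, g, b) in bars.items():
    y = YR * r + YG * g + YB * b           # Eq. (11a)
    print(f"{name:8s} Y={y:6.3f}  R-Y={r - y:+6.3f}  B-Y={b - y:+6.3f}")
```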


The rgb-chromaticity of the white vector W is r = g = b = 1/3. Figure 21.1.7 also shows that the GR and GB planes intersect the chrominance plane along the M = R − Y and N = B − Y axes. The xyz chromaticities are nonlinearly related to the rgb chromaticities. The relations are easily determined with Eq. (13), given [R, G, B] = [r, g, b] or [X, Y, Z] = [x, y, z]. The figure also shows a line, the alychne (no light), which is the intersection of the rgb-chromaticity plane and the chrominance plane Y = 0. The tips of the R, G, and B primary vectors are the corners of the color triangle. The rgb-chromaticities of the saturated complementary colors, cyan, magenta, and yellow, are midpoints on the sides of the triangle. The color triangle can be mapped to take more convenient shapes, for example, into a right-angle coordinate system. Figure 21.1.6 shows color triangles for some television standards, first polar-projected onto the CIE chromaticity plane and subsequently parallel-projected onto the CIE xy-chromaticity diagram. Display devices using the standard chromaticities can only display colors with chromaticities inside the color triangle (nonnegative R, G, B tristimuli).

Relation Between Chromaticities and Chrominance-to-Luminance Ratios

Since R − Y, Y, B − Y are related to signals in TV transmission, it is of interest to relate the ratios R/Y and B/Y to the xyz (CIE) chromaticities. In the CIE chromaticity diagram (Fig. 21.1.6) all lines R/Y = constant go through the intersection of the projection of the green-blue side (R/Y = 0) of the color triangle with the x-axis, and all B/Y = constant lines go through the intersection of the projection of the green-red side (B/Y = 0) of the color triangle with the x-axis. Linear scales of R/Y and B/Y can be made on any line parallel with the x-axis. The scales are easily calibrated, since the lines R/Y = 1 and B/Y = 1 pass through the illuminant white point. The intersection between the pivoting R/Y and B/Y lines determines the chromaticity. Errors in the (R − Y)/Y and (B − Y)/Y ratios cause errors in the slope of these pivoting lines and may require the display device to produce negative light if they intersect outside the color triangle. Chroma noise and incorrect chroma gain ("color" on TV sets) may cause such errors. Errors in the chrominance cause large chromaticity errors when Y is small, as it is in the blue-magenta-red areas, especially in dark areas.

Implications of Chromaticity Standards

Television standards specify the chromaticities that are expected to be used in the display devices at the receivers. Receiver manufacturers do not have to meet these standards, and in fact, in the United States, more efficient phosphors are used which do not have the standard FCC chromaticities. The SMPTE has recommended other chromaticities to be used at the source which better match the more efficient phosphors now used in most receivers. Various chromaticity standards are shown in Table 21.1.2. SMPTE 170M is proposed as the NTSC broadcast studio standard, and ITU-R709 (equal to SMPTE 274M) and SMPTE 240M as DTV standards. The white illuminant D65 is becoming universal. Figure 21.1.6 shows that the "old" NTSC gamut of colors (triangle) is substantially larger in the green-cyan area than the gamuts of the newer chromaticities. However, the gamuts of the new phosphors cover almost the entire gamut of paints, pigments, and dyes, and there are few complaints about the green color. Future displays may provide larger gamuts and force a change in de facto standards. The implication of chromaticity standards, or of de facto standards, is that in the production of television programs it is expected that most receiver displays have phosphors and a reference white illuminant with chromaticities close to the standards. Cameras usually have sensors with nonstandard chromaticities. The output signals from a camera can be matrixed to conform with a large gamut of chromaticity standards. A cameraman can adjust this matrix and process the output signals (linearly as well as nonlinearly) to produce an artistically desirable picture on the faceplate of displays with standard or de facto standard primaries. The chromaticities listed in television standards are not enforceable, but are taken as guidelines for designing receivers and displays.

Progress in Psychophysics of the Human Visual and Aural Systems

Current television standards are based on an understanding of psychophysics which dates back a number of decades. Although current standard analog television systems are clever, economical, and robust, it is clear that


they make inefficient use of channel capacity. In a 6-MHz transmission channel, over 90 percent of the energy of the visual information occupies a bandwidth of only a few hundred kHz. With the advent of digital television, methods for effectively reducing irrelevancy and redundancy in the communication of moving pictures and audio benefit from a new technological basis: digital compression.

PICTURE SIZE, SCANNING PROCESS, AND RESOLUTION LIMITS

Picture Size and Scanning Raster

In analog TV systems, the transmitted signal was designed to be directly related to the image representation of the camera and the display. The white rectangular area in Fig. 21.1.8 shows the active area of the picture, i.e., the area containing pictorial information. The active area is assumed to be flat. The height of the active area is the length unit ph (picture height) and the width A ph of the active area is given by the aspect ratio A. In current standard television systems the aspect ratio is 4/3 and in HDTV it is 16/9. The viewing distance D is commonly assumed to be 6 to 8 ph in standard TV and 3 to 4 ph in HDTV.

Also shown in Fig. 21.1.8 is an imaginary total picture area that includes inactive areas representing durations used in the scanning process for synchronization and retrace blanking. These durations are referred to as the vertical blanking interval VBI and the horizontal blanking interval HBI. The total vertical height including the VBI is 1/ηV and the total horizontal width including the HBI is A/ηH, where ηV and ηH are the vertical and horizontal scanning efficiencies. They are shown in Table 21.1.4 for analog TV systems and their standard-definition DTV equivalents.

In all television systems the total picture is scanned along horizontal lines from left to right while the scan lines progress downward from the top of the picture. The duration of a horizontal scan line including the duration of the HBI is H = 1/FH, where FH = line rate in Hz. The duration of a field including the VBI is V = 1/FV, where FV = field rate in Hz. In television standards a scanning system is specified as N/FV, where ηVN = No = number of nonoverlapping lines displayed in the visible picture area (active lines). In progressive scan ("proscan" or "1:1 scan") displayed lines in successive fields are co-located (overlap) and consequently the total number of scan lines is N = FH/FV = V/H. In 2:1 interlaced scan, displayed lines in even-numbered fields are interlaced with the lines in odd-numbered fields and consequently N = 2FH/FV = 2V/H. When referring to "interlaced scan" it is generally assumed to be 2:1 interlaced. Multiple interlace without storage causes visible

FIGURE 21.1.8 Active picture area and the total area that includes the horizontal and vertical blanking intervals. ηV and ηH are scanning efficiencies.


TABLE 21.1.4 Scanning Parameters and Scanning Efficiencies in Interlaced Analog TV Systems and Their Digital Equivalents

                              Analog                          Digital CCIR601
                      525/60          625/50          525/60          625/50
A                     4/3             4/3             4/3             4/3
N                     525             625             525             625
FV (Hz)               60/1.001        50              60/1.001        50
FH (Hz)               15,750/1.001    15,625          15,750/1.001    15,625
ηV                    0.92            0.92            480/525         576/625
ηH                    0.8285          0.8125          720/858         720/864
No = NηV              483             575             480             576
vV = FV/ηV (ph/s)     65.152          54.348          65.559          54.253
vH = FHA/ηH (ph/s)    25,321          25,641          25,000          25,000
vT = FV/No (ph/s)     0.1241          0.0870          0.1249          0.0868

line-crawl. In interlaced scan one frame comprises an odd-numbered field followed by an even-numbered field. The frame rate is FV/2 Hz. In proscan a frame and a field are the same thing. Automatic interlace can be obtained if N = number of lines per frame is odd, as shown in Fig. 21.1.9. Interlaced scan can also be achieved with an even number of lines per frame, and proscan can be achieved with an odd number of half lines per field. The PAL/SECAM systems are 625/50(2:1) systems and the NTSC system is a 525/59.94(2:1) system, or simply a 525/60(2:1) system (although the field rate is exactly 60/1.001 Hz). Figure 21.1.9 shows the scanning raster for the interlaced 525/60 system. The scanning raster shown in Fig. 21.1.9 covers the total area including the blanking intervals. Interlaced scan has been used in all analog television systems to achieve good vertical resolution and little large-area flicker, given the available transmission bandwidth.

FIGURE 21.1.9 Interlaced scanning raster covering the total picture (including blanking intervals) for a 525/60 system (NTSC). Retrace during horizontal blanking is not shown.
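The timing relations above can be illustrated numerically. The following Python sketch (an illustration only, with parameters taken from Table 21.1.4) computes the basic scanning quantities for the 525/60 and 625/50 interlaced systems:

# Scanning timing for 2:1 interlaced systems (values from Table 21.1.4).
def scan_params(n_lines, field_rate, eta_v, eta_h, aspect=4/3):
    f_h = n_lines * field_rate / 2          # line rate: N = 2*FH/FV for 2:1 interlace
    h = 1.0 / f_h                           # line duration (s), including HBI
    v = 1.0 / field_rate                    # field duration (s), including VBI
    n_active = eta_v * n_lines              # visible (active) lines No
    v_v = field_rate / eta_v                # vertical scanning velocity (ph/s)
    v_h = f_h * aspect / eta_h              # horizontal scanning velocity (ph/s)
    return f_h, h, v, n_active, v_v, v_h

# 525/60 (NTSC): field rate exactly 60/1.001 Hz
f_h, h, v, n0, v_v, v_h = scan_params(525, 60/1.001, 0.92, 0.8285)
print(f"525/60: FH = {f_h:.2f} Hz, H = {h*1e6:.2f} us, No = {n0:.0f}, "
      f"vV = {v_v:.3f} ph/s, vH = {v_h:.0f} ph/s")

# 625/50 (PAL/SECAM)
f_h, h, v, n0, v_v, v_h = scan_params(625, 50.0, 0.92, 0.8125)
print(f"625/50: FH = {f_h:.2f} Hz, H = {h*1e6:.2f} us, No = {n0:.0f}, "
      f"vV = {v_v:.3f} ph/s, vH = {v_h:.0f} ph/s")

The printed values reproduce the corresponding Table 21.1.4 entries (for example, FH = 15,734.27 Hz and vH = 25,321 ph/s for 525/60).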


Table 21.1.4 shows scanning parameters for various raster formats, including the number of visible scan lines No = ηVN, the vertical scanning velocity vV = FV/ηV ph/s, and the horizontal scanning velocity vH = FHA/ηH ph/s in interlaced scan (vH must be doubled for proscan). Velocities of moving objects in the picture are conveniently expressed relative to the reference velocity vT = FV/No ph/s (= one line-spacing per field). In digitized systems ηH = ratio of active samples per line to total samples per line.

The Video Signal Spectrum. The dynamic image on the target of a TV camera is a three-dimensional message: two spatial dimensions and one temporal. In the scanning process this message is converted into a one-dimensional signal. Consider first a stationary image having a property, e.g., a response to exposure, which can be specified as a function g(x, y) defined over the total picture area shown in Fig. 21.1.8. This is a rectangular area of width A/ηH = vH/FH and height 1/ηV = vV/FV. The scanning beam can be represented as progressing with time along a straight line x = vHt and y = vVt over the periodically repeated image as shown in Fig. 21.1.10. The figure shows an interlaced scan with five lines per frame. A two-dimensional Fourier series component of the periodically repeated function g(x, y) is sampled by the scanning beam to yield a one-dimensional signal:

Cmn cos 2π[(mFH/vH)x + (nFV/vV)y + cmn] = Cmn cos 2π[(mFH + nFV)t + cmn]    (16)

where n = 0, ±1, ±2, … when m = 1, 2, 3, …, and n = 0, 1, 2, 3, … when m = 0. A Fourier component in the picture is a sinewave grating with a constant-phase wavefront perpendicular to the spatial frequency vector kmn = [(mFH/vH), (nFV/vV)] in cycles per picture height (cph). In the scanning process this grating generates a spectral component in the signal at the discrete frequency mFH + nFV Hz. A grating that cannot be expressed with m and n as integers has "kinks" at the borders of the total pictures shown in Fig. 21.1.10 and consists of several gratings of the type shown in Eq. (16). Figure 21.1.11 shows a part of the spectrum of an interlaced system with N = 9 lines. Spectral components for n > 0 interleave with components for n < 0. The figure shows that a frequency component can be generated in more than one way (aliasing) if |n| > N/2. Thus, as expected, the highest vertical spatial frequency that can be conveyed without aliasing is No/2 cph. Aliasing shows in certain pictures: venetian blinds, striped shirts, resolution test patterns. If the horizontal scan frequency is doubled to 2FH (progressive scan with N lines per field), the spectral components shown as dotted lines in Fig. 21.1.11 disappear. The components are spaced FV in proscan and FV/2 in interlaced scan, but aliasing occurs in both systems if |n| > N/2. Optical defocusing and vertical defocusing of the scanning beam reduce aliasing at the expense of sharpness. As the picture changes with time, the parameters Cmn and cmn in Eq. (16) become functions of time. As a consequence the frequency components of the signal shown in Fig. 21.1.11 develop sidebands, just as amplitude- and phase-modulated carriers do. A sideband with a frequency f = mFH + nFV + fmn could have been generated by the sinewave grating defined by Eq. (16) with Cmn = constant and cmn = fmnt + constant. This moving

FIGURE 21.1.10 Interlaced scan of a stationary picture with N = 5 lines per frame.


FIGURE 21.1.11 Part of the frequency spectrum generated by interlaced scan of a stationary picture with N = (4p + 1) = 9 lines per frame. In proscan at line-rate 2FH, dotted line components disappear. Motion directions up, down, left, right are for sidebands associated with “carriers” at the endpoints of the Fv/2 intervals.

grating causes the amplitude at every active point x, y in the picture to oscillate with a frequency fmn. The constant-phase front of the grating moves in a direction opposite to kmn with a phase velocity

vmn = fmn/|kmn| = fmn λmn  ph/s,  where λmn = wavelength in ph    (17)

A component at a frequency f in the video signal can, however, be generated by a number of moving sinewave gratings with different velocities and spatial frequencies, and it is up to the viewer to interpret and choose between f = mFH + nFV + fmn and f = pFH + qFV + fpq. More often than not this ambiguity causes little confusion, because a moving object generates sidebands to a large number of carriers, including low-frequency carriers, with the consequence that the viewer will choose a good fit to his or her expectation of the motion. This psychophysical phenomenon is also taken advantage of in 24-frame-per-second motion pictures (aliasing: wagon wheels appearing to rotate the wrong way). In television, however, the confusion is accentuated by the scanning raster. Figure 21.1.11 indicates the perceived direction of motion (up, down, left, right) of a spectral component, under the assumption that the viewer interprets it as a sideband to either the nearest carrier to the left or the nearest carrier to the right. The motion designations (u, d, l, r) of these frequency intervals of width FV/2 remain in proscan. Viewers tend to choose a carrier with the lowest spatial resolution, i.e., with the lowest value of |n|. As the velocity of a grating increases and |fmn| goes through FV/2, the grating may appear to reverse direction of motion. This may cause visible aliasing effects, e.g., when busy pictures are panned. However, confusion is not likely if |kmn| < (dFV/2)/|vmn| cph, where d = 1 in proscan and d = 1 − 2|n|/(N − 1) in interlaced scan. Thus, for a given velocity, the highest nonaliasing spatial frequency depends only on d times the frame rate, which is the same for HDTV and standard TV. For example, for d = 1 and v = 1 ph/s, the maximum nonaliasing spatial frequency is 30 cph in N/60 systems and 25 cph in N/50 systems. Motion is handled better in progressive scan (d = 1) than in interlaced scan (d < 1). If Cmn varies with time but cmn is constant there is level variation but no motion.

Resolution

Resolution is expressed in terms of the highest spatial frequencies kmax in cycles per (picture) height (cph) which can be conveyed by the television system and displayed, given a scanning process and a transmission bandwidth. Sometimes resolution is still expressed in the old unit "TV lines," which should mean 2kmax cph. Sometimes


horizontal resolution is expressed in cycles or in TV lines per picture width, which is misleading. Sometimes it means the number of "perceived" lines of a square-wave test signal. Specification of resolution in terms of TV lines and measurement of resolution with square waves should be avoided: square waves may have lost some harmonics.

In the vertical direction, stationary information is sampled at No active lines per picture height. The corresponding Nyquist bandwidth No/2 cph is theoretically the maximum vertical resolution for stationary pictures. The vertical resolution in the NTSC system is kV = No/2 = 241.5 cph. The "perceived" vertical resolution of stationary pictures is less than No/2 because of many factors: aliasing, source processing, scanning spot sizes, contrast, brightness, ambient conditions, test signals, viewing distance, line flicker, luminance-chrominance cross talk, and confusion caused by the display of the scanning raster (display of high-spatial-frequency repeat spectra). The combined effect of all these factors on perceived vertical resolution is sometimes expressed in terms of a perceived vertical resolution KNo/2, where K = Kell factor < 1. The Kell factor, which has been quoted to range from 0.7 to 1, must be used with many qualifications and a great deal of caution. It is not a basic system parameter and should preferably be avoided. In interlaced scan, line flicker can be very disturbing in busy slow-moving pictures. In some receivers line flicker is eliminated by converting interlaced scan into progressive scan ("line doubling") by motion-adaptive interpolation.

The horizontal resolution of stationary pictures is kH = B/vH cph, where B is the highest frequency in Hz of the information which can theoretically be conveyed and which modulates "the scanning beam." For example, in the NTSC system the highest frequency of the luminance signal is B = 4,200,000 Hz. Consequently the horizontal resolution of luminance in NTSC is kH = 4,200,000/25,321 = 166 cph. The "perceived" horizontal resolution is less than kH, depending on many factors: video filter transfer functions, luminance-chrominance cross talk, type of test signal, contrast, and so forth. The horizontal resolutions of the NTSC chrominance signals are kI = 52 and kQ = 24 cph, which are much less than the vertical chrominance resolution (241.5 cph). When the NTSC system was developed it was observed that horizontal and vertical chrominance resolution can be reduced to about half the luminance resolution without significantly reducing perceived quality. This observation was crucial for the success of NTSC.

Table 21.1.5 shows the theoretical maximum bandwidth and resolution of luminance and chrominance in various systems. In the analog TV systems chrominance resolution is significantly reduced in the horizontal direction, but it is not reduced in the vertical direction.

TABLE 21.1.5 Potential Resolutions in cph in the Interlaced Systems Listed in Table 21.1.4 and the HDTV Common Usage Formats. (The digital CCIR601 system is a 4:2:2 system with a luminance sample rate of 13.5 MHz and a chrominance sample rate of 6.75 MHz. In the 1125/60 system luminance is sampled at 74.25 MHz and in the 1250/50 system at 72 MHz. Nyquist maximum frequencies and resolutions are shown for all sampled systems.)

                               Analog (A = 4/3)      Digital CCIR601(4)       Digital HDTV (A = 16/9)
                               NTSC       PAL        525/60      625/50       750/60(6)    1125/60      1250/50
M0/N0                          443/483    520/575    720/480     720/576      1280/720     1920/1080    2048/1152
Max. hor. Y freq. in MHz       4.2        5(3)       6.75(4)     6.75(4)      37.125       37.125       36
Max. hor. R − Y freq. in MHz   1.3(1)     1.3        3.375(4)    3.375(4)     18.5625      18.5625      18
Max. hor. B − Y freq. in MHz   0.6(2)     1.3        3.375       3.375        18.5625      18.5625      18
Y′ hor.                        166        195(3)     270(4)      270(4)       360          540          576
Y vert.                        241.5      288        240         288          360          540          576
R − Y hor.                     51.5(1)    50.5       135(4)      135(4)       180          270          288
B − Y hor.                     24(2)      50.5       135(4)      135(4)       180          270          288
R − Y, B − Y vert.             241.5(1,2) 288*       240         288          180(5)       270(5)       288(5)

(1) Applies to I = (R − Y)cos 33° − (B − Y)sin 33°.
(2) Applies to Q = (R − Y)sin 33° + (B − Y)cos 33°.
(3) 5.5 MHz in PAL/I and 6 MHz in SECAM. The corresponding maximum resolutions of Y are 214.5 and 234 cph, respectively. *144 in SECAM.
(4) Standard CCIR601 luminance and chrominance bandwidths are 5.75 and 2.75 MHz, respectively, with A = 4/3. Corresponding resolutions are 230 and 115 cph.
(5) In the HDTV systems the resolution of R − Y and B − Y is half the luminance resolution horizontally as well as vertically.
(6) Progressive scan.


In digital TV with rectangular pixels, the maximum resolution is the Nyquist bandwidth in cph determined by the horizontal and vertical distances between neighboring samples. The meaning of 4:2:2 is that for every four samples of the luminance signal Y along a horizontal line there are two samples of B − Y and two of R − Y. In 4:2:2 systems the chrominance resolution is half the luminance resolution in the horizontal direction. In 4:2:0 systems the chrominance resolution is half the luminance resolution both horizontally and vertically. Tables 21.1.4 and 21.1.5 show a 4:2:2 version of CCIR601, while the HDTV systems are 4:2:0 versions. The resolutions shown in Table 21.1.5 can be related to the luminance resolution of the HVS, which is at most 250 cph and 500 cph at viewing distances of 8 and 4 ph, respectively.

It should be emphasized that the resolutions shown in Table 21.1.5 are theoretical maxima. The resolutions of the pictures displayed by a receiver depend on many factors and are usually much lower than the maximum resolutions shown in Table 21.1.5. One reason is that the signals conveyed by the system are not proportional to the tristimuli Y, R − Y, B − Y. In all systems the tristimuli are nonlinearly compressed before being matrixed and bandlimited to form the video signals Y′, B′ − Y′, and R′ − Y′. Whatever the resulting resolutions are in cph, the displayed resolution in cycles per visible height is usually lower owing to the overscan needed to allow for deflection tolerances.

In summary, the critical resolutions in cph, given the number of visible scan lines No, a maximum horizontal frequency B Hz, a grating phase velocity v, and a viewing distance D ph, are:

kmax(hor.) = B/vH,  kmax(vert.) = No/2,  kmax(move) = dFV/2v,  kmax(HVS) ≈ 2000/D    (18)

where d = 1 in proscan and d = 1 − 2|ky|/No in interlaced scan (ky = vertical spatial frequency of the grating in cph). Table 21.1.5 summarizes the static resolutions of the interlaced systems listed in Table 21.1.4 and the ATSC (Grand Alliance) HDTV system formats. The IQ chrominance axes in NTSC were chosen because the HVS can better resolve chrominance detail in the I direction (approximately red-cyan) than in the Q direction (approximately blue-yellow).
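The limits in Eq. (18) are easily evaluated. The following Python fragment is a minimal sketch for NTSC luminance, using the Table 21.1.4 values and, for the motion limit, an assumed grating velocity and vertical frequency:

# Resolution limits of Eq. (18) for NTSC luminance (Table 21.1.4 values).
B   = 4.2e6          # highest luminance frequency (Hz)
v_h = 25_321.0       # horizontal scanning velocity (ph/s)
n0  = 483            # visible scan lines
f_v = 60 / 1.001     # field rate (Hz)

k_hor  = B / v_h                 # horizontal resolution limit (cph)
k_vert = n0 / 2                  # vertical Nyquist limit (cph)

v  = 1.0                         # example grating velocity (ph/s)
ky = 100.0                       # example vertical spatial frequency (cph)
d  = 1 - 2 * ky / n0             # interlace factor (d = 1 for proscan)
k_move = d * f_v / (2 * v)       # highest nonaliasing frequency for motion

D = 7.0                          # viewing distance in picture heights
k_hvs = 2000 / D                 # approximate HVS limit (cph)

print(f"k_hor  = {k_hor:.0f} cph")    # ~166 cph
print(f"k_vert = {k_vert:.1f} cph")   # 241.5 cph
print(f"k_move = {k_move:.1f} cph")
print(f"k_hvs  = {k_hvs:.0f} cph")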

Standard Frequencies in Color Television Systems

Figure 21.1.12 shows that the key frequencies used in worldwide color television systems are related to a common frequency of 2.25 MHz. Multiples of this frequency are used as international standards for sample rates in HDTV and in the digital CCIR601 standard used in production and in professional tape recording in the D1 format. The horizontal line rate in NTSC is FH = 2.25/143 MHz and in PAL and SECAM it is FH = 2.25/144 MHz. The field rate in NTSC is 60/1.001 Hz and in PAL and SECAM it is 50 Hz. ATSC Grand Alliance HDTV formats operating with 60-Hz field rates have frequencies related to 2.25 MHz, but when operating with 60/1.001 Hz they are related to 2.25/1.001 MHz.

In the NTSC, PAL, and SECAM systems, which are compatible with black and white reception, most of the chrominance information R − Y and B − Y is conveyed by modulating one or two subcarriers to form a chrominance subchannel, which is added to (multiplexed with) the signal that carries most of the luminance information Y. The resulting signal is a composite video signal. In the FCC rules the NTSC color subcarrier frequency is specified to be fc = 315/88 MHz = 227.5FH ≈ 3.58 MHz. In PAL the color subcarrier is specified to be fc = 283.75FH + FV/2 = 4,433,618.75 Hz. SECAM operates with two frequency-modulated subcarriers: foB = 4.25 MHz and foR = 4.40625 MHz. The subcarriers are chosen to provide acceptable compatibility with black and white reception and to minimize visible cross talk between luminance and chrominance in color reception. The peculiar frequencies in NTSC resulted from a slight modification of the monochrome standards (60 Hz and 15.75 kHz) for the purpose of reducing the visibility of a beat between the color subcarrier and the 4.5-MHz sound carrier. FH was chosen to be 4.5/286 MHz to make the beat fall at an odd multiple of FH/2 ((286 − 227.5)FH ≈ 920 kHz). This is how the 2.25 MHz frequency came about.

In digital processing of NTSC and PAL signals, including professional recording in the D2 format as well as signal processing in consumer products, 4fc is often used as the sample rate. This sample rate exceeds the Nyquist rate somewhat in most applications, but it is readily available and phase-locked to FH and FV. It is particularly well suited for sampling composite video signals because it performs the function of a synchronous detector separating the in-phase and quadrature chrominance components which modulate the color subcarrier.

Some frequencies used or proposed for use in television are not well related to the frequencies shown in Fig. 21.1.12. One is the 24 frames per second used in motion pictures. In NTSC broadcasting the movie frame rate


FIGURE 21.1.12 Relations between frequencies in standard analog and digital television systems.


is first reduced to 24/1.001 Hz. This is followed by a so-called 3-2 pull-down used to fit four movie frames into ten NTSC fields, alternating three fields and two fields per movie frame. In Europe it has been common to speed up the movie by a ratio of 25/24, but this increases the pitch of the sound unless digital pitch-preserving nonlinear time-base techniques are used to accomplish the audio speedup. With digital techniques, better frame conversions can now be made. Other frequencies that are not well related to 2.25 MHz are the digital audio sample rates 48 and 44.1 kHz (8/375 and 49/2500 times 2.25 MHz).
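These frequency relations can be checked with exact rational arithmetic. The following Python sketch (illustrative only; the constants are those quoted above) verifies the NTSC and PAL subcarrier relations and the audio-rate ratios:

from fractions import Fraction

F0 = Fraction(2_250_000)                 # common base frequency, 2.25 MHz

fh_ntsc = F0 / 143                       # NTSC line rate = 4.5 MHz / 286
fh_pal  = F0 / 144                       # PAL/SECAM line rate = 15,625 Hz
fc_ntsc = Fraction(455, 2) * fh_ntsc     # NTSC color subcarrier = 227.5 * FH
fv_pal  = Fraction(50)
fc_pal  = Fraction(28375, 100) * fh_pal + fv_pal / 2   # 283.75*FH + FV/2

print(float(fh_ntsc))                          # 15734.2657... Hz
print(fc_ntsc == Fraction(315, 88) * 10**6)    # True: equals 315/88 MHz
print(float(fc_pal))                           # 4433618.75 Hz

# Audio rates are exact, if inconvenient, multiples of 2.25 MHz:
print(F0 * Fraction(8, 375))             # 48000 Hz
print(F0 * Fraction(49, 2500))           # 44100 Hz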

ANALOG VIDEO SIGNALS

Gamma Pre-Correction

Three signals R, G, and B emerge from the camera amplifiers and associated signal-processing circuits (aperture correction, peaking, correction for camera nonlinearity, and so forth). These signals are assumed to be proportional to the tristimulus values R, G, B defined by standard or de facto standard chromaticities and the illuminant of receiver display devices (see Photometry, Colorimetry, and the Human Visual System in this chapter). These signals and the tristimuli are all normalized so that

R = R,  G = G,  B = B,  and  R = G = B = 1 for reference white highlight    (19)

In television standards the signals R, G, B are usually denoted Er, Eg, Eb to distinguish them from tristimulus values. Simplified notations are used in what follows: italics for signals and roman for optical tristimulus values. The true luminance and chrominance signals are:

Y = YrR + YgG + YbB,  M = R − Y,  N = B − Y    (20)

R = G = B = Y for white with luminance Y = Y    (21)

The signals M, Y, N are related to the signals R, G, B by the matrix H [Eq. (5) and Table 21.1.3]. It is noted that only the luminance values Yr, Yg, Yb appear in this matrix. Tristimulus components X and Z, as well as chromaticities, while implicit in the significance of M, Y, and N in the electro-optical conversion in the display, are of no concern in video signal analysis. The original intent of the NTSC was to convey the "true" signals M, Y, N over separate channels. This was referred to as the constant luminance principle, because all the luminance would be conveyed by the luminance signal. Reasons for this approach are:

• compatibility with black and white receivers
• luminance information requires more picture detail than chrominance
• noise is more visible in luminance than in chrominance

To minimize the visibility of noise added in transmission, the luminance signal was companded in black and white television, using the nonlinear transfer characteristic of the picture tube as the expander in the receivers. The idea of using the picture tube as an expander has also been adopted in color television. To display a red color with a tristimulus value R, a nonlinear display must be driven by a "gamma-corrected" signal R′ which is a function of R. It is assumed that green and blue have the same electro-optical transfer functions and must be driven by "gamma-corrected" signals

R′ = F(R),  G′ = F(G),  B′ = F(B),  where F(0) = 0 and F(1) = 1    (22)

assuming display systems with inverse electro-optical transfer functions

R = F−1(R′),  G = F−1(G′),  B = F−1(B′),  where F−1(0) = 0 and F−1(1) = 1    (23)

In the NTSC and PAL/SECAM standards, these functions are as in black and white television:

R′ = R^(1/γ),  G′ = G^(1/γ),  and  B′ = B^(1/γ)    (24)


In NTSC, γ = 2.2 ("but not enforced"). In PAL and SECAM, the recommendation is γ = 2.8. Recent standards are listed as electro-optical transfer characteristics in Table 21.1.1. The nonlinear transfer characteristics of picture tubes usually differ from the production standards L = F−1(V) listed in Table 21.1.1. It is often assumed that they are close to L = V^2.5.

The Luminance Signal

In all current analog and digital standard television systems the gamma-corrected primary signals R′, G′, B′ are matrixed in a matrix T to form signals for transmission which are proportional to:

Y′ = TRR′ + TGG′ + TBB′,  M′ = R′ − Y′,  N′ = B′ − Y′    (25a)

where TR + TG + TB = 1. When the color is gray, R′ = G′ = B′ = Y′ and M′ = N′ = 0. Conversely,

R′ = Y′ + M′,  G′ = Y′ − (TR/TG)M′ − (TB/TG)N′,  B′ = Y′ + N′    (25b)

Y′ is called the luminance signal, while signals proportional to M′ and N′ are called chrominance signals. In standard definition television (SDTV), including NTSC, PAL, SECAM, CCIR601, and SMPTE170M, the transmission coefficients TR, TG, TB are equal to the luminance components Yr, Yg, Yb of the NTSC primaries (Table 21.1.2), i.e.,

Y′ = 0.299R′ + 0.587G′ + 0.114B′    (25c)

The coefficients should not be rounded to two decimals, as they unfortunately are in the official FCC standards. In HDTV, the transmission coefficients are equal to the luminance components of the primaries associated with the system (Table 21.1.2), i.e.,

Y′ = 0.2125R′ + 0.7154G′ + 0.0721B′  (SMPTE274M and ITU-R709)    (25d)

Y′ = 0.212R′ + 0.701G′ + 0.087B′  (SMPTE240M)    (25e)

While the signal Y′ is traditionally referred to as the luminance signal, or just "the luminance," it is not a function of the true luminance Y only. That happens only when the color is gray. For all other colors, part of the true luminance information is conveyed by the chrominance signals. When the chrominance (color) in a color receiver is turned off, the displayed luminance in the resulting black-and-white picture is less than in the color picture in all but originally gray areas. The ratio of the displayed luminance when the chrominance signals are turned off (compatible monochrome TV) to the displayed luminance when they are turned on can be taken as a viewer's perception of how much true luminance is conveyed by the luminance signal Y′. This ratio is G(Y′)/Y, where G(V) is the transfer characteristic of the display. The ratio can be calculated given G(V), Eqs. (20) and (25), and the parameters for the various systems given in Table 21.1.2. The ratio becomes smaller with increased saturation of the colors, and becomes exceptionally small for saturated blue. For a display with G(V) = V^2.5 the ratio for saturated blue is 3.8 percent in NTSC, 6.2 percent in PAL/SECAM, and 2 percent in ITU-R709. For G(V) = F−1(V) according to Table 21.1.1, the corresponding numbers are 7.4, 3.2, and 22.2 percent. While saturated blue is an extreme case and the ratios are sensitive to the transfer function of the display at low signal levels, it is clear that a significant amount of luminance information is conveyed by the chrominance channels. Since the chrominance channels have less bandwidth, and in HDTV also less vertical resolution, than the luminance channel, luminance resolution is lost, particularly in saturated colors. Another consequence is that noise (including various coding and transmission errors and defects) in the chrominance channels is displayed as luminance noise. The HVS is more sensitive to luminance noise than to chrominance noise. These effects are consequences of the violation of the constant luminance principle caused by pre-gamma correction of the R, G, B primaries.

The Chrominance Signals

The chrominance signals, also referred to as the color-difference signals, are proportional to M′ = R′ − Y′ and N′ = B′ − Y′ as defined in Eqs. (25a) through (25e). The N′ blue-yellow range ±(1 − TB) is larger than the M′


TABLE 21.1.6 Chrominance Scale (Gain) Factors KR and KB Multiplying the Basic Chrominance Signals M′ = R′ − Y′ and N′ = B′ − Y′ to Yield Transmitted Chrominance Signals

SYSTEM                   KR(R′ − Y′)   KB(B′ − Y′)   KR                  KB
NTSC, PAL, SMPTE170M     V             U             0.877 (1/1.14)      0.493 (1/2.03)
SECAM*                   DR            DB            −1.902              1.505
CCIR601†                 CR            CB            0.713 (.5/.701)     0.564 (.5/.886)
ITU-R709, SMPTE274M      P′R           P′B           0.6349 (.5/.7875)   0.5389 (.5/.9279)
SMPTE240M                E′PR          E′PB          0.6345 (.5/.788)    0.5476 (.5/.913)

*Normalized frequency deviations for FM modulation.
†Includes all SDTV systems when luminance and chrominance components are digitized. See MPEG standards Rec. ITU-T H262.

red-cyan range ±(1 − TR). In transmission, the chrominance signals M′ = R′ − Y′ and N′ = B′ − Y′ are multiplied by scale (gain) factors KR and KB. The scale factors of digitized chrominance signals are KR = (1 − TR)/2 and KB = (1 − TB)/2, yielding scaled chrominance signal ranges of ±0.5, equal to the unity range of the luminance signal Y′. Standard chrominance scale factors are shown in Table 21.1.6. In NTSC and PAL the chrominance signals can also be represented by a vector

C = [U, V] = [0.493(B′ − Y′), 0.877(R′ − Y′)] = [C cos(c), C sin(c)]

C = (U^2 + V^2)^(1/2) = amplitude  and  c° = arctan(V/U) = phase in degrees    (26)

The chrominance vector C is shown in Fig. 21.1.15. The vertical V-axis is in a "reddish" direction and the horizontal U-axis, in a "bluish" direction, is the reference direction of zero phase. The chrominance vector is displayed in an instrument called a vectorscope, as illustrated at the bottom of Fig. 21.1.16. Table 21.1.7 shows Y′, M′, N′, V, U, C, and c° for white and for the saturated colors yellow, cyan, green, magenta, red, and blue. These are the colors shown in the ubiquitous color bars. While the color bar signals are the same in the NTSC, PAL, and SECAM standards, the colors vary from system to system, as well as within each system, depending on the chromaticities of the display and the adjustments of primary levels (gains). In PAL the chrominance signals V = 0.877(R′ − Y′) and U = 0.493(B′ − Y′) are both bandlimited to 1.3 MHz. Because of the lack of available transmission bandwidth in NTSC, the V and U signals are further matrixed in a rotational network (33°), yielding the transmitted signals I = 0.839V − 0.545U and Q = 0.545V + 0.839U. The I and Q axes are shown in Figs. 21.1.15 and 21.1.16. The NTSC standards specify the maximum I-frequency to be 1.3 MHz and the maximum Q-frequency to be 0.6 MHz. The reason for this choice is that the HVS has better resolution for colors along the I-axis (red-cyan) than for colors along the Q-axis (blue-yellow). More often than not, I and Q both roll off toward a maximum frequency < 0.6 MHz.

TABLE 21.1.7 Luminance and Color Difference Signals for Saturated Colors (Color Bar Signal) in SDTV. The Sum of Complementary Colors (180° Apart) Is White

COLOR      R′ G′ B′   Y′      R′ − Y′   B′ − Y′   V        U        C       c°
White      1  1  1    1.000    0         0         0        0       0        0
YEllow     1  1  0    0.886   +0.114    −0.886    +0.100   −0.437   0.448   +167.11
CYan       0  1  1    0.701   −0.701    +0.299    −0.615   +0.147   0.632   −76.56
Green      0  1  0    0.587   −0.587    −0.587    −0.515   −0.289   0.590   −119.30
MAgenta    1  0  1    0.413   +0.587    +0.587    +0.515   +0.289   0.590   +60.70
Red        1  0  0    0.299   +0.701    −0.299    +0.615   −0.147   0.632   +103.44
Blue       0  0  1    0.114   −0.114    +0.886    −0.100   +0.437   0.448   −12.89
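The entries of Table 21.1.7 follow directly from Eq. (25c), Eq. (26), and the 33° IQ rotation given above. A minimal Python check (illustrative only):

import math

# SDTV luminance coefficients, Eq. (25c)
TR, TG, TB = 0.299, 0.587, 0.114

colors = {"White": (1, 1, 1), "Yellow": (1, 1, 0), "Cyan": (0, 1, 1),
          "Green": (0, 1, 0), "Magenta": (1, 0, 1), "Red": (1, 0, 0),
          "Blue": (0, 0, 1)}

for name, (r, g, b) in colors.items():
    y = TR * r + TG * g + TB * b            # luminance signal Y'
    v = 0.877 * (r - y)                     # scaled R' - Y' (Eq. 26)
    u = 0.493 * (b - y)                     # scaled B' - Y'
    c = math.hypot(u, v)                    # chroma amplitude C
    phase = math.degrees(math.atan2(v, u))  # vectorscope phase c°
    i = 0.839 * v - 0.545 * u               # NTSC I (33° rotation)
    q = 0.545 * v + 0.839 * u               # NTSC Q
    print(f"{name:8s} Y'={y:.3f} V={v:+.3f} U={u:+.3f} "
          f"C={c:.3f} phase={phase:+7.2f}  I={i:+.3f} Q={q:+.3f}")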


Video Component Signals

While the primary signals R, G, B and the precorrected signals R′, G′, B′ are video component signals, what is usually meant by video component signals is the set consisting of the luminance signal Y′ and the chrominance signals KR(R′ − Y′) and KB(B′ − Y′). Figure 21.1.13 illustrates an NTSC transmission system of video component

FIGURE 21.1.13 NTSC video transmission system. In PAL and SECAM drop the IQ matrices. In SECAM replace V by D′R and U by D′B. In digital systems, KR and KB are different. In HDTV the T matrices are different but, as in NTSC, the H and T matrices are identical.


signals. By dropping the IQ and IQ−1 matrices and ignoring the numerical illustrations of KR, KB, TR, TG, and TB, it is valid for all television systems. All SDTV systems use the same T-matrix (NTSC). The T and H matrices are identical in NTSC as well as in the HDTV systems. The H matrices, which are not in the transmission path, only show the conversion from R, G, B to true luminance and chrominance Y, M, N. The block with dotted borders shown in Fig. 21.1.13 represents bandlimiting and impairments of the component signals in transmission. In digital television the block includes data compression. A/D and D/A conversions, contributing quantization and compression errors, are usually also inside this block. If this block is bypassed, the transmission is perfect if the processes at the transmitter have corresponding inverse processes at the receivers. Many proposals have been made to "correct" for the luminance lost in the narrowband chrominance channels. Since the early 1950s there have also been proposals to replace Y′ [see Eq. (25)] with the compressed luminance signal F(Y) [see Eqs. (20) and (22)], thereby conveying all the luminance in the luminance channel. No such constant luminance standards have been adopted, partly because they would not be compatible with current receivers and production standards, and partly because of the added receiver cost of nonlinear expansion.

The NTSC Composite Video Signal, the HBI, and the Color Bar Signal

In NTSC, PAL, and SECAM the video component signals are multiplexed to form a single signal. In NTSC and PAL the chrominance signals modulate a color subcarrier which is "inconspicuously" added to the luminance signal to form a composite video signal. SECAM uses two frequency-modulated subcarriers. In NTSC there is only one color subcarrier, at a frequency fc = 227.5 × FH = 315/88 MHz ≈ 3.58 MHz. It is amplitude modulated in-phase and in quadrature by the chrominance signals (QAM) and added to the luminance signal to form the composite NTSC signal:

S = Y′ + U sin(ωct) + V cos(ωct) = Y′ + Q sin(ωct + 33°) + I cos(ωct + 33°) = Y′ + C sin(ωct + c°)

V = 0.877(R′ − Y′)      U = 0.493(B′ − Y′)
I = 0.839V − 0.545U     Q = 0.545V + 0.839U
C = (U^2 + V^2)^(1/2) = (Q^2 + I^2)^(1/2)      c° = arctan(V/U) = 33° + arctan(I/Q)    (27)

The vector diagram in Fig. 21.1.15 illustrates the instantaneous level of the color subcarrier, given the chroma vector [U, V]. A reference phase (c = 180°, "yellow") for synchronous detection is transmitted during the horizontal blanking interval by a short burst (8 to 11 cycles) of the color subcarrier (see Figs. 21.1.14, 21.1.16, and 21.1.17). The subcarrier components are [0, V] when ωct = 0, [U, 0] when ωct = 90°, and (−0.4, 0) for the burst at ωct = 270°. As time progresses, the instantaneous NTSC color subcarrier amplitude progresses in the order V, U, −V, −U or I, Q, −I, −Q, as illustrated in Fig. 21.1.15.

The signal S occupies active timeslots along horizontal scan lines and is bordered by horizontal and vertical blanking signals, as shown in Figs. 21.1.14, 21.1.16, and 21.1.17, which also show the timing signals: horizontal and vertical sync pulses as well as the color subcarrier burst. Video signal levels in NTSC are specified in IRE units. The peak of the sync pulses is at −40 IRE, the blanking level at 0 IRE, and the reference white level at 100 IRE. The black-level "setup" is at +7.5 IRE. Figure 21.1.14 includes a table with specifications of durations, levels, and tolerances for NTSC and most PAL standards. Figure 21.1.14 also shows that the composite signal "inversely" modulates the main carrier of a broadcast transmitter, with zero carrier (0 percent) at the whiter-than-white level of 120 IRE, 75 percent at blanking level, and 100 percent at −40 IRE (peak of sync). Table 21.1.7 shows that the peak composite signal Y′ + C for saturated yellow, cyan, and green would exceed the reference white level and overmodulate a broadcast transmitter. Similarly, the lower level Y′ − C would drop significantly below blanking level, which may cause sync problems. To transmit the composite signal S within tolerable limits it must be reduced to gsS, where gs is a gain factor < 1.

For a search speed n > 1, the recovered data burst length D during fast reading can be approximated by D = TL/n. The track length is denoted TL. Naturally, as the speed increases, the amount of data read becomes smaller. Figure 21.3.14 portrays the data recovered as a function of the search speed (relative to the reference playback speed). Because the position of the track crossings with respect to the SBs is arbitrary at high speeds, the data burst must be at least two SBs in order to retrieve one correctable SB from tape. The maximum trick-play speed nmax is then given by

nmax = TL/Dmin = TL/2    (1)

with TL, expressed in SBs, being the total length of a track for 180° wrap. The total overhead in a 180° track is assumed to be 40 percent, of which 15 percent is SB-based. The remaining 25 percent of the overhead can be expressed as SBs on top of the video SBs. Therefore, the total number of SBs in a track (TL) is 4/3 times the


FIGURE 21.3.14 Data burst length vs. search speed.

number of video SBs. Results for TL and nmax are given in Table 21.3.8. In practice, the value of nmax will be somewhat lower. Table 21.3.8 shows why the 5-5 mapping has been chosen in the DV standard. First, this mapping leads to the highest possible trick-play speed. Second, note that with this mapping, in contrast to the other possible mappings, one MB is stored in one SB, thereby enabling a fixed allocation of the most important data. For trick play, this enables the decoding on SB basis, which was assumed implicitly for the calculation of nmax. Third, as a bonus, the 5-5 mapping results in the shortest SB length. This property, combined with the fixed data allocation, proves to be very beneficial for the robustness of the system. With the chosen 5-5 mapping, the size of an SB can be determined. With 83 kbit and 135 MBs in a track for video, a 77-byte data area is required to store one macroblock. The addition of a 2-byte Sync pattern, a 3-byte ID, and 8-byte parity for the inner (horizontal) ECC results in a total SB length of 90 bytes. For completeness, it is mentioned that this number must be a multiple of 3 bytes, because of the 24-25 channel coding. The SB structure of the DV system is given in Fig. 21.3.15.

DV Video Compression

It has been explained that video compression is required to establish a low bit rate and sufficient playing time. For this reason, an intraframe coding system was adopted. Intraframe coding means that each picture is compressed independently of other pictures. However, to support high-speed search, the system goes one step further and concentrates on coding segments of a picture as independent units. This will be elaborated in this section.

We consider a feedforward-buffered bit-rate reduction scheme, based on DCT coding, in which the pictorial data are analyzed prior to coding. The aim is to define a compression system with fixed-length coding of a relatively small group of MBs, because this is beneficial for recording applications such as trick play and robustness. Given the system constraints from the previous sections, the target system is based on frame-based DCT coding with VLC. Such a system operates well using compression factors of 5 to 8. This results globally in a bit rate after compression of 20 to 25 Mb/s. Numerous subjective quality experiments during development

TABLE 21.3.8 Various Mappings of MBs to SBs

MBs per segment   SBs per segment   Video SBs per track   Total SBs per track   nmax
5                 3                 81                    108                   54
5                 4                 108                   144                   72
5                 5                 135                   180                   90
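A short Python sketch reproducing Table 21.3.8 and the 90-byte SB budget follows. The 27 segments per track follow from 135 video MBs at 5 MBs per segment; the remaining constants are those quoted above:

# Trick-play limits for candidate MB-to-SB mappings (Table 21.3.8).
for mbs, sbs in [(5, 3), (5, 4), (5, 5)]:
    video_sbs = 27 * sbs            # 27 segments of 'sbs' sync blocks each
    total_sbs = video_sbs * 4 // 3  # 25% non-SB-based overhead on top of video SBs
    n_max = total_sbs // 2          # Eq. (1): D_min = 2 SBs per track
    print(mbs, sbs, video_sbs, total_sbs, n_max)

# Sync-block size for the chosen 5-5 mapping:
data = round(83_000 / 8 / 135)      # 83 kbit of video per track, 135 MBs -> 77 bytes
sb = 2 + 3 + data + 8               # sync + ID + data + inner ECC parity = 90 bytes
assert sb == 90 and sb % 3 == 0     # multiple of 3 bytes for 24-25 channel coding
print("SB length:", sb, "bytes")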


FIGURE 21.3.15 Format of a single sync block (SB).

of the DV standard provided evidence that about 25 Mb/s yields the desired picture quality, which is well above that of analog consumer recorders such as VHS and 8 mm.

DV Feedforward Coding Concept. In most transform coding systems using VLC techniques, the variable-rate output is buffered and monitored by feedback quantizer control to obtain, on average, a constant bit rate, although the bit rate is locally varying. With feedback compression systems having a locally varying output bit rate, the relation between the recovered data bits and the location of the data in the picture is lost. The major advantage of the feedforward coding system is that relatively small groups of DCT blocks, henceforth termed segments, are coded independently and, in contrast with a feedback system, as a fixed entity. This property makes the segments independently accessible on the tape, while the fixed code length ensures a unique relation between the segment location on tape and its location in the reconstructed picture. The latter property, in combination with the 1-macroblock-per-SB data allocation (see the previous section on mapping), is exploited for optimizing the picture quality during trick modes (see later).

In the feedforward coding system (see Fig. 21.3.16), video is first organized into blocks and subsequently into groups of blocks, called segments. Each segment is then compressed with DCT coding techniques into a fixed code length (bit cost), despite the application of VLCs. Fixed-length coding of a small group of DCT blocks (several tens only) can be realized only if the transformed data are analyzed prior to coding, requiring temporary data storage. During the storage of a segment, several coding strategies are carried out simultaneously, from which only one is chosen for final quantization and coding. This "analysis of the limited future" explains the term feedforward buffering. Feedforward coding has two important advantages: a fixed relation between the data on tape and the reconstructed image, and a high intrinsic robustness, as channel error propagation is in principle limited to within a video segment.

DV Motion-Adaptive DCT. The DCT has become the most popular transform in picture compression, since it has proven to be the most efficient transform for energy compaction, whereas its implementation has limited complexity. The definition of the DCT used in the DV system is

F(u, v) = C(u)C(v) Σ(i=0 to N−1) Σ(j=0 to N−1) f(i, j) cos[(2i + 1)uπ/2N] cos[(2j + 1)vπ/2N]    (2)

where a block of samples f(i, j) has size N × N. The two constants C(u) and C(v) are defined by C(w) = 1/2 for w ≠ 0 and C(0) = 1/(2√2). The DV standard applies a block size of 8 × 8 samples because it provides the best compromise between compression efficiency, complexity, and robustness. One of the first main parameters for coding efficiency is the choice between intrafield and intraframe coding. In the latter system, the odd and even fields are first combined into a complete image frame prior to block

FIGURE 21.3.16 Architecture of feedforward video compression system.


coding. It has been found that intraframe coding is about 20 to 30 percent more efficient than intrafield coding; that is, for the available 25 Mb/s bit rate, it offers considerably better quality. For this reason, intraframe coding was adopted in the standard.

FIGURE 21.3.17 Architecture of a motion-adaptive DCT transformer.

However, it was found in earlier investigations that local motion in sample blocks leads to complicated data structures after DCT transformation, which usually are particularly difficult to code. The solution to this problem is to split the vertical transform into two field-based transforms of length N/2. Hence, first an N-point horizontal transform (HDCT) is performed, yielding intermediate data Fh(i, v), and subsequently two vertical (N/2)-point transforms (VDCT), specified by

F(u, v) = C(u)C(v) Σ(i=0 to N/2−1) Σ(j=0 to N−1) gs(i, j) cos[(2i + 1)uπ/N] cos[(2j + 1)vπ/2N]    (3)

F(u + 4, v) = C(u)C(v) Σ(i=0 to N/2−1) Σ(j=0 to N−1) gd(i, j) cos[(2i + 1)uπ/N] cos[(2j + 1)vπ/2N]    (4)

where gs(i, j) = [f(2i, j) + f(2i + 1, j)] and gd(i, j) = [f(2i, j) − f(2i + 1, j)]
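The motion-adaptive transform of Eqs. (2) to (4) can be sketched in a few lines of NumPy. The normalization constants for the 2-4-8 mode are assumed here to be the same as in Eq. (2); this is an illustration, not the standard's reference implementation:

import numpy as np

N = 8

def C(w):
    # Normalization constants of Eq. (2): C(0) = 1/(2*sqrt(2)), else 1/2.
    return 1.0 / (2.0 * np.sqrt(2.0)) if w == 0 else 0.5

def dct_8x8(f):
    """Plain 8x8 DCT of Eq. (2)."""
    F = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            cos_i = np.cos((2 * np.arange(N) + 1) * u * np.pi / (2 * N))
            cos_j = np.cos((2 * np.arange(N) + 1) * v * np.pi / (2 * N))
            F[u, v] = C(u) * C(v) * cos_i @ f @ cos_j
    return F

def dct_2_4_8(f):
    """Motion-adaptive 2-4-8 DCT of Eqs. (3) and (4): 4x8 transforms of the
    field sum and field difference, stacked into one 8x8 coefficient block."""
    gs = f[0::2, :] + f[1::2, :]        # sum of the two fields
    gd = f[0::2, :] - f[1::2, :]        # difference of the two fields
    F = np.zeros((N, N))
    for u in range(N // 2):
        cos_i = np.cos((2 * np.arange(N // 2) + 1) * u * np.pi / N)
        for v in range(N):
            cos_j = np.cos((2 * np.arange(N) + 1) * v * np.pi / (2 * N))
            F[u, v] = C(u) * C(v) * cos_i @ gs @ cos_j
            F[u + 4, v] = C(u) * C(v) * cos_i @ gd @ cos_j
    return F

block = np.random.default_rng(0).integers(0, 256, (N, N)).astype(float)
print(dct_8x8(block)[0, 0], dct_2_4_8(block)[0, 0])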

Note that in the first field-based output coefficient block the sum of the two fields is taken as input, while in the second coefficient block the difference of the two fields is considered. Hence, this corresponds to a two-point transform in the temporal domain. The required DCT processor architecture is depicted in Fig. 21.3.17.

DV Adaptive Quantization

The primary elements for quantization of the coefficients F(u, v) are frequency-dependent weighting, adaptivity to local image statistics, and global bit-rate control. These elements are individually addressed briefly.

The weighting is based on the decaying sensitivity of the transfer function H of the human visual system (HVS). As an example, an HVS model from the literature has been plotted in Fig. 21.3.18, in which the transfer function of the HVS has been normalized. When using that model, with the image divided into 8 × 8 blocks, Fig. 21.3.18 shows that the HVS sensitivity decreases exponentially with radial frequency. The weighting is multiplicative in nature, hence FW(u, v) = W(u, v)F(u, v). The weighting function can be simplified using special multiplying factors (see Table 21.3.10). For simplicity, the factors W(u, v) are not different for each (u, v) combination; instead, groups of (u, v) combinations apply the same weighting factor, according to Table 21.3.9.

FIGURE 21.3.18 HVS function of radial frequency fr.


TABLE 21.3.9 Area Numbers of Weighting for Static 8 × 8 Block (left) and Moving 2 × (4 × 8) Block (right). (X = dc coefficient; v increases to the right, u downward.)

Static 8 × 8 block, W(u, v)       Moving 2 × (4 × 8) block, W(u, v)
X 0 0 1 1 1 2 2                   X 0 1 1 1 2 2 3    (sum)
0 0 1 1 1 2 2 2                   0 1 1 2 2 2 3 3
0 1 1 1 2 2 2 3                   1 1 2 2 2 3 3 3
1 1 1 2 2 2 3 3                   1 2 2 2 3 3 3 3
0 0 1 1 2 2 2 3                   1 1 2 2 2 3 3 3    (difference)
0 1 1 2 2 2 3 3                   1 2 2 2 3 3 3 3
1 1 2 2 2 3 3 3                   2 2 2 3 3 3 3 3
1 2 2 3 3 3 3 3                   2 2 3 3 3 3 3 3

The second element of quantization is the adaptivity to local image statistics. The adaptivity is to be worked out by measuring the contents of the DCT coefficient block. One of the most well-known metrics for local activity is the "ac energy" of the block, Σu,v F(u, v)^2. However, simpler metrics, such as the maximum of all F(u, v) within a block, perform satisfactorily as well. The DV system allows any metric in the encoder: the decoder simply follows the two decision bits reserved for the quantizer classification. This freedom also allows different quantization of luminance (Y) and color (Cr, Cb) blocks. Generally, more activity or information content results in coarser quantization.

The third element in the quantizer is a final block quantization by a linear division with a step size S. The advantage of this approach is its simplicity, and it leads to uniform quantization. The variable S defines the accuracy of the global block quantization. Taking the elements previously discussed into account, the overall quantization is specified by FQ(u, v) = W(u, v)F(u, v)/S. For camcording, low implementation cost is of utmost importance. A particularly simple system is obtained by taking W(u, v)/S = 2^(−p), with p an integer controlled by all three elements discussed in this subsection, so that quantization reduces to a binary shift. The final quantization table is shown in Table 21.3.10. The weighting area numbers in Table 21.3.10 refer to Table 21.3.9.

TABLE 21.3.10 Table of Step Sizes Using Area Indication for Weighting and Strategy Number for Global Uniform Quantization

Q strategy (listed per Q class 0, 1, 2, 3*): 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0

Step sizes by W area number, from the finest to the coarsest quantizer setting; the combination of Q class and Q strategy selects one of these rows (higher activity classes and lower strategy numbers select coarser rows):

W area:    0     1     2     3
           1     1     1     1
           1     1     1     2
           1     1     2     2
           1     2     2     4
           2     2     4     4
           2     4     4     8
           4     4     8     8
           4     8     8    16
           8     8    16    16

*If class 3 occurs, all step sizes are multiplied by 2.
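A minimal sketch of the resulting quantizer follows. The step rows are those of Table 21.3.10; the mapping from (class, strategy) to a row and the stand-in area matrix are illustrative assumptions, since the exact alignment is defined by the DV standard:

import numpy as np

# Step-size rows of Table 21.3.10, indexed left-to-right by W area number 0..3.
STEP_ROWS = [(1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 2, 2), (1, 2, 2, 4),
             (2, 2, 4, 4), (2, 4, 4, 8), (4, 4, 8, 8), (4, 8, 8, 16),
             (8, 8, 16, 16)]

def quantize_block(F, area, q_class, strategy):
    """Uniform quantization FQ = F / step, step chosen per weighting area.
    'area' is an 8x8 matrix of area numbers 0..3 (cf. Table 21.3.9).
    The (class, strategy) -> row mapping below is illustrative only."""
    row = min(len(STEP_ROWS) - 1, max(0, (15 - strategy) // 2 + q_class))
    steps = STEP_ROWS[row]
    scale = 2 if q_class == 3 else 1          # footnote: class 3 doubles all steps
    FQ = F.copy()
    for u in range(8):
        for v in range(8):
            if (u, v) != (0, 0):              # dc term (X in Table 21.3.9) treated separately
                FQ[u, v] = np.trunc(F[u, v] / (steps[area[u, v]] * scale))
    return FQ

# Stand-in area matrix (diagonal bands 0..3), not the Table 21.3.9 pattern:
area = np.minimum(3, np.add.outer(np.arange(8), np.arange(8)) // 3)
F = np.random.default_rng(1).normal(0.0, 100.0, (8, 8))
print(quantize_block(F, area, q_class=1, strategy=9))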


FIGURE 21.3.19 Diagonal zigzag scanning in 8 × 8 and 2 × (4 × 8) mode for clustering of zeros.

DV Variable-Length Coding

A bit assignment using only one coding table was chosen for simplicity. The coding algorithm is fundamentally the same as in MPEG compression and is based on runlength counting of zeros only and the use of an end-of-block (EOB) codeword. The principle of the algorithm is that first the block of quantized coefficients is scanned using diagonal zigzag scanning in order to create a one-dimensional stream of numbers. The scanning is adapted to motion. The purpose of the scanning is to cluster zero coefficients (see Fig. 21.3.19), so that they can be coded efficiently. Second, from the start of the string, zeros are counted until a nonzero coefficient is encountered. The magnitude of this nonzero coefficient is combined with the preceding length of zeros into a single event (run, amplitude), which is jointly coded with a single codeword. The sign of the coefficient is appended at the end of the codeword. An excerpt of the encoding table showing the variable wordlengths and codewords is given in Table 21.3.11. The coding table is optimized to prevent large codewords and to keep implementation cost low.
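The event generation can be sketched as follows in Python. The zigzag path used here is the conventional 8 × 8 one and the codebook is the Table 21.3.11 excerpt; both serve as illustration only:

import numpy as np

def zigzag_indices(n=8):
    # Conventional diagonal zigzag order for an n x n block (illustrative).
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[1] if (p[0] + p[1]) % 2 else p[0]))

def run_amplitude_events(FQ):
    """Convert a quantized 8x8 block into (run, amplitude, sign) events + EOB."""
    events, run = [], 0
    coeffs = [int(FQ[u, v]) for (u, v) in zigzag_indices()][1:]  # skip dc term
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            events.append((run, abs(c), '+' if c > 0 else '-'))
            run = 0
    events.append('EOB')                      # trailing zeros folded into EOB
    return events

# Excerpt of the codeword table (Table 21.3.11b); 's' = appended sign bit.
CODEBOOK = {(0, 1): '00s', (0, 2): '010s', 'EOB': '0110', (1, 1): '0111s',
            (0, 3): '1000s', (0, 4): '1001s', (2, 1): '10100s'}

FQ = np.zeros((8, 8))
FQ[0, 1], FQ[1, 0], FQ[2, 0] = 3, -1, 2
print(run_amplitude_events(FQ))               # [(0,1,'-'), (0,3,'+'), (2,2,'+'), 'EOB']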

Total DV Video Performance

The control and overall performance of the compression system are now indicated. The system optimizes the picture quality by testing a number of quantizer settings in parallel. The optimal quantization strategy mopt is the quantizer setting (strategy) that yields a bit rate just below or equal to the desired bit rate. The choice of mopt can vary between 0 and M − 1 when M different quantization strategies are used. The picture quality of the compression system was measured for various segment sizes K and numbers of quantization strategies M. The resulting picture quality is expressed as an SNR (dB), which refers to the mean squared error (MSE) with respect to the original picture, compared to squared peak white (255^2). The results of the measurements are shown in Fig. 21.3.20.

The optimal choice of the segment size K and the number of strategies M can be derived from Fig. 21.3.20. Evidently, the recording system designer opts for the smallest possible value of K (small segments), because it yields high robustness and enables higher search speeds. However, Fig. 21.3.20 shows that if the size becomes too small, i.e., K < 30, the picture quality deteriorates rapidly. For K = 30 to 60 DCT blocks, the resulting SNR remains nearly constant. Therefore, a segment size of K = 30 was adopted as the best compromise. The 30 DCT blocks are not arbitrarily chosen from the picture, but clustered in groups, called MBs. An MB (see Fig. 21.3.13) is a group consisting of 2 × 2 DCT blocks of 8 × 8 Y samples each and the two corresponding 8 × 8 color blocks Cr and Cb. In order to improve the picture quality of the compression system, MBs are selected from different areas of the picture, so that the influence of local image statistics is smoothed. The result is a stable and high picture quality for a large class of images. However, after compression, the MBs are redistributed in order to improve the visual performance during search (see later in this DV section). A second conclusion is that M = 16 gives a substantial improvement in picture quality compared to M = 8. The quality improvement can be fully explained by a more efficient use of the available bit rate, which becomes particularly important for small segment sizes.

TABLE 21.3.11 Table of Wordlengths (a) and Codewords (b) of DV System

(a) Wordlengths. [The matrix of codeword lengths, one entry per (run, amplitude) combination, did not survive reproduction; the listed lengths range from 3 to 13 bits. The coefficient sign is not included in the lengths; the EOB codeword is 4 bits.]

(b) Events and codewords (s denotes the appended sign bit):

    Event     Codeword        Event      Codeword
    (0, 1)    00s             (4, 1)     110001s
    (0, 2)    010s            (0, 7)     110010s
    EOB       0110            (0, 8)     110011s
    (1, 1)    0111s           (5, 1)     1101000s
    (0, 3)    1000s           (6, 1)     1101001s
    (0, 4)    1001s           (2, 2)     1101010s
    (2, 1)    10100s          (1, 3)     1101011s
    (1, 2)    10101s          (1, 4)     1101100s
    (0, 5)    10110s          (0, 9)     1101101s
    (0, 6)    10111s          (0, 10)    1101110s
    (3, 1)    110000s         (3, 11)    1101110s

FIGURE 21.3.20 SNR of DV compression system for various segment sizes K and number of strategies M (CCIR-601 images, 4:2:0 sampling, 25 Mb/s).

The subjective picture quality of the system is excellent. For regular input pictures, the SNR approaches 40 dB, and the resulting subjective image quality comes very close to that of existing professional recording systems. For complex and detailed imagery, the SNR is a few dB lower.

Macroblock-Based SB Format The DV format uses a special compressed data format inside a sync block (SB) to improve robustness for high-speed search, where part of the error-correction coding (ECC) cannot be applied because only portions of the data on a track are recovered. This considerably increases the chance of errors in the signal. Second, at higher tape speeds, the head-to-tape contact is reduced and less stable, which also lowers robustness. The special data format enables the compression decoder to cope with residual errors in the video data. In order to construct a robust format, it is essential to limit the propagation of errors that emerge from erroneous variable-length decoding. The propagation of errors is limited in three ways, which are briefly discussed.

First, fixed-length coding of segments is realized by the choice of a feedforward coding scheme. Every segment is compressed into a fixed bit cost, so that the decoder can periodically reset itself at a segment border. A proper numbering of SBs allows the start of a new segment to be identified without using the compressed data. Error propagation from segment to segment is therefore impossible.

Second, identification of individual macroblocks is enabled. A segment consists of five full-color MBs, as indicated in the previous section. For robustness, a single MB is put into a single SB. Note that the MB is sometimes smaller than an SB and sometimes larger. Furthermore, every SB has a fixed, unique location on tape. As a result, each MB, or at least its low-frequency information, can be addressed.

Third, identification of individual DCT blocks is possible. Within an SB (thus one MB), six DCT blocks are located. Each compressed DCT block is of variable length. Similarly to the MBs, by putting the low-frequency data of each DCT block at a fixed position, each DCT block can be addressed and partially decoded, and error propagation is limited to the high-frequency components of DCT blocks only.

The internal SB format is depicted in Fig. 21.3.21. A group of five SBs forms a fixed-length segment, preventing error propagation. As a bonus, the fixed-length segment compression allows replacement of individual segments for post editing or error concealment in the compressed domain, without decoding the full picture. The individual segments can also easily be transmitted over a digital interface (IEEE-1394) to other equipment. Figure 21.3.21 also shows the fixed positions of the start of each DCT block. A simplified sketch of this fixed-position packing idea follows.
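The sketch below illustrates the fixed-position idea only: the head (low-frequency bits) of each DCT block lands at a known offset, while the variable-length tail spills into leftover space. The payload size and slot offsets are invented for illustration and are not the real DV format parameters.

    # Illustrative sketch: pack six variable-length DCT blocks into one sync
    # block so each block's low-frequency bits start at a fixed, known offset.
    SB_PAYLOAD = 77 * 8                            # assumed payload size, bits
    SLOTS = [0, 112, 224, 336, 448, 560]           # assumed fixed start offsets

    def pack_sync_block(dct_bitstrings):
        """Place the head of each DCT block at its fixed slot; collect the
        remaining high-frequency bits for best-effort placement elsewhere."""
        payload = ["0"] * SB_PAYLOAD
        spill = []
        for i, bits in enumerate(dct_bitstrings):
            slot = SLOTS[i]
            room = (SLOTS + [SB_PAYLOAD])[i + 1] - slot
            head, tail = bits[:room], bits[room:]
            payload[slot:slot + len(head)] = head  # low frequencies: fixed spot
            spill.append(tail)                     # high frequencies: spill
        # A real packer would distribute 'spill' over unused space in this and
        # neighboring sync blocks of the same segment.
        return "".join(payload), spill

    blocks = ["1011001110", "010", "11100", "0110", "10", "111000"]
    sb, spill = pack_sync_block(blocks)

Because each head is at a fixed position, a decoder can always recover the low-frequency part of every DCT block, even when a bit error corrupts the variable-length tail of a neighbor.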

DV Data Shuffling for Trick Play The DV standard applies special data-shuffling techniques to optimize the picture quality both for normal playback and for high-speed search on tape. As elucidated in the preceding subsections, a coding unit, called a video segment, consists of only five MBs. When considering MBs for shuffling, the following general statement applies: for smoothing statistics, a regular distribution of the MBs of one segment over the picture area results in the highest average picture quality for all practical video material. This subsection describes MB shuffling in more detail, taking the 625/25 system as an example.

FIGURE 21.3.21 Inner SB data format showing the fixed predetermined positions of low-frequency DCT coefficients of each block and the construction of segments using these sync blocks.

FIGURE 21.3.22 Selection of MBs for segment construction and assignment of picture areas on tracks.

Data Ordering for Optimal Picture Quality at Normal Speed. As depicted in Fig. 21.3.13, a picture consists of 480/576 lines of 720 pixels wide. With 10/12 tracks per frame, the data of 48 lines, or 135 MBs, is stored in one track. In the case of the 625/25 video system, the picture of 36 by 45 MBs is divided into 12 horizontal rows of three MBs high and 45 MBs wide, where each row corresponds to one track. On the other hand, one segment consists of five MBs, which should originate from distributed parts of the picture, preferably with the largest possible distance between them. A maximum horizontal distance is achieved when they are distributed regularly, leading to a horizontal pitch of nine MBs. Consequently, the picture is divided into five columns of nine MBs wide. The row-column structure is depicted in Fig. 21.3.22. Despite the difference in video lines, the 525/30 picture is divided similarly into 10 rows and five columns.

In the vertical direction, a regular distribution with maximum distance is also optimal for good picture quality. Taking into account the 10/12 rows of the two systems and requiring a universal algorithm, a distance of two rows is the best option. A unit of 3 by 9 MBs is called a superblock. A picture consists of superblocks S_{i,j}, with i and j being the row and column number, respectively. The numbering of the MBs within a superblock can be found in Fig. 21.3.23. The construction of a superblock for the 525/30 system is somewhat different because of modified MB dimensions (see Fig. 21.3.13). Since the distances are now known, a suitable algorithm has to be defined such that the five MBs forming one segment are spread out over the picture area. The following algorithm has been adopted:

$$V_{i,k} = \sum_{p=0}^{4} MB[(i + 2p) \bmod n,\ 2p \bmod 5,\ k] \qquad (5)$$

with $0 \le i \le n - 1$, $0 \le k \le 26$, and $n = 10$ or 12, where $MB[i, j, k]$ denotes macroblock number $k$ in superblock $S_{i,j}$ and the summation denotes the collection of the five MBs into segment $V_{i,k}$. The MBs forming segment $V_{0,0}$ are indicated in Fig. 21.3.22. A small implementation sketch of this selection rule follows.
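A minimal Python sketch of Eq. (5), assuming macroblocks are addressed as (superblock row, superblock column, MB index); it simply lists which five macroblocks make up segment V[i, k].

    def segment_members(i, k, n=12):
        """Addresses (row, column, mb) of the five macroblocks forming
        segment V[i, k] per Eq. (5); n = 12 for 625/25, n = 10 for 525/30."""
        return [((i + 2 * p) % n, (2 * p) % 5, k) for p in range(5)]

    # Example: the five MBs of segment V[0, 0] in the 625/25 system
    print(segment_members(0, 0))
    # [(0, 0, 0), (2, 2, 0), (4, 4, 0), (6, 1, 0), (8, 3, 0)]

The row step of 2 and the column step of 2 (mod 5) guarantee that the five MBs of one segment never share a superblock row or column, which is exactly the regular spread the text calls for.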

FIGURE 21.3.23 Ordering of macroblocks in repetitive clusters called superblocks.

Macroblock Mapping for Trick Play. Editing and picture reconstruction in normal play are performed on a picture basis, so that the mapping can be optimized for trick play. Consequently, the data of one picture is recorded in an alternative order within the boundaries of the set of tracks assigned to the complete picture. Two aspects govern the mapping: coherency and speed adaptivity. Coherency means that the best subjective trick-play picture quality is obtained if the largest possible coherent picture parts are refreshed in one piece (Ref. 18). The picture quality should also be adaptive to the speed, meaning that the highest quality is achieved for the lowest search speeds. At the introduction of the MBs, it was shown that the recovered data burst length from tape increases for lower speeds (see Fig. 21.3.14). Consequently, neighboring pictorial information should be recorded side by side in a track.

The previous insight contradicts the MB shuffling for efficient video encoding at normal speed, which requires that the pictorial information be spread out over the picture. As indicated in Fig. 21.3.21, the majority of the information for one MB is stored in one SB. This is an attractive property for trick play. It enables decoding on an SB rather than on a segment basis, resulting in a slightly reduced but still adequate picture quality for trick play. This feature allows a block mapping on an MB basis instead of on segments, thereby solving the paradox between MB shuffling and the actual mapping on tape.

The selection of the optimal mapping for a large set of trick-play speeds is a difficult issue, especially if the possibilities for alternative scanners have to be considered simultaneously. The highest flexibility in all situations is achieved by a more or less direct projection of the picture onto the tape. This is realized by recording the superblocks of one row of Fig. 21.3.22 one after the other in a track, with the row number equal to the track number. MBs within a superblock are stored in the order indicated in Fig. 21.3.23. A schematic representation of the final mapping is given in Fig. 21.3.24. On top of this mapping, trick-play speeds are carefully chosen at special fractional, noninteger values, leading to sufficient refresh of data over time. More details can be found in Ref. 18. In general, it can be concluded that the search speed should be such that all other picture parts are refreshed between the updates of a specific part.

Conclusion on DV System It has been clarified that the adopted cassette size and tape thickness have determined the required compression factor and the resulting bit rate of 25 Mb/s after compression. At this bit rate, sufficient playing time can be obtained, both for an ultrasmall camcorder cassette and for the larger desktop cassette. For desktop video recording, the trend is clearly toward disk-based recording, which is discussed later in this chapter. The compression system has been optimized for low-cost intraframe coding because of its use in portable equipment. The feedforward coding architecture differs from the MPEG-based recording in DVD, but it offers high robustness for high-speed search and error concealment. The compression in fixed-length segments offers, besides robustness, easy packet transmission over digital links. The system yields a high picture quality, even in the case of motion, owing to a special motion adaptivity.
The independent compression and coding of segments, based on only five macroblocks, allows high search speeds during trick play and provides a very robust tape format. In normal play, the powerful R-S product ECC ensures error-free reproduction of the recorded images. A special macroblock format mapped

FIGURE 21.3.24 Reorganization of macroblocks on tape for trick play so that the picture is stored in coherent form on tape.

onto single channel sync blocks enables data recovery at very high search speeds. Even then, the format severely limits error propagation, and errors appear only in the high-frequency information. Moreover, the special data-shuffling scheme enables a relatively high perceptual performance during trick-play modes.

CD-I AND VIDEO CD

Introduction to Optical Recording Optical recording can support much higher packing densities than magnetic recording. A typical optical recorder has a 1.6-µm track spacing and a minimum recorded wavelength of 1.2 µm. A single side of a 30-cm optical disc can record as much as 60 min of video. Several systems have been in use since the early 1990s.

The first system is designed for mass replication. An expensive master produces numerous very cheap copies, in a manner similar to phonograph discs. However, the process is more refined because of the smaller dimensions involved. This system is highlighted subsequently in more detail.

The second system uses an irreversible change produced by laser heating of selected spots in the medium. The change may be melting, bubble formation, a dye change, or a chemical change. Systems in which writing is irreversible are termed write-once or WORM (write once, read many times) systems. A typical example is the Panasonic 20-cm optical recorder that stores nearly 14 min of an NTSC signal. The disc rotates at 1800 rpm, and the system can record either a single spiral track or 24,000 concentric tracks.

The third method uses laser heating to define the small recorded area, but the recording is magnetic and therefore erasable. The system is similar to WORM systems except that the disc is coated with a thin magnetic layer. As with any magnetic recording, the signal can be erased or overwritten. Such a system using analog recording can store over 30 min of NTSC television per side of a 30-cm disc. If digital recording is used, 50 s can be recorded on each side of a 13-cm disc.

We describe here briefly a type of system that is suited for mass replication. There have been several approaches to the home optical disc system, including one pioneered by Philips, named Laservision. The Philips video disc consists of a transparent plastic material with a diameter of 30 cm and a thickness of 2.5 mm for a double-sided recording. The audio disc has grooves, whose walls move a transducer to reproduce the audio signals. The video disc must meet a requirement of much higher information density. It has no grooves and has tracks with a much finer structure; the track spacing is about one-sixtieth of that of an audio disc. The rotational speed is synchronous with the vertical frame rate to enable recording of one complete frame per revolution (1800 rpm for NTSC, 1500 rpm for PAL and SECAM). A longer-playing version of the Philips system maintains a constant track-to-pickup velocity, allowing approximately 1 h per side of recording.

FIGURE 21.3.25 (a) The pits in an optical disc of Laservision or CD, with tracks of about 0.5 µm width; (b) the analog waveform in Laservision is clipped, with the transitions defining pits and walls on the disc; (c) the conversion of a binary string into pits and walls in the reflective layer of the compact disc, where each transition corresponds with a "1" in the binary signal. Between each binary 1, every flat distance of 0.3 µm represents a binary 0. The reflective layer is the backside of the optical disc.

Let us now briefly look into the basic optical readout technology. In both Laservision and the CD-based systems, the signal is recorded in the form of a long spiral track consisting of pits of about 0.5 µm width (see Fig. 21.3.25(a)). The pits have been pressed into a plastic type of substrate

with a special stamp. The stamp originates from a parent disc that was generated from a real laser-burned disc. The recorded layer is protected by a transparent coating. Since the coating is much thicker than the recorded layer, scratches and dirt on the surface of the disc are outside the focal plane of the optical pickup that reads out the data.

An essential difference between Laservision and all CD successors is that Laservision registers analog information, whereas CD holds digital information. Figure 21.3.25(b) depicts how the analog signal is recorded on a Laservision disc. The video signal is frequency modulated (FM), and an analog sound signal is superimposed on the FM-modulated video signal. The resulting signal is clipped so as not to exceed maximum positive and negative signal values. The transitions of the signal coincide with the pits and walls on the disc. A predecessor of the video-CD was a system called CD-Video of Philips, which recorded a digital audio signal in a special modulation area on top of the FM-video signal.

Figure 21.3.25(c) portrays the recording of digital data on the CD. The length of each pit or wall is 0.3 µm or a multiple of this length, up to a certain maximum. With a special coding algorithm, called eight-to-fourteen modulation (EFM), the number of zeros between two ones is kept between a predefined minimum and maximum value. Such a code is often referred to as a run-length-limited (RLL) code (a small checker for this constraint is sketched after this passage). The bounding of zeros improves robustness by avoiding DC energy, and it helps with synchronization. As can be noticed, each transition corresponds with the bit value 1, and the intermediate bits get the bit value 0. These bits are called channel bits, and they originate via a coding procedure from the data bits. The data bits for digital audio (CD-DA) result from 16-bit audio samples that represent a stream of binary numbers, called pulse code modulation (PCM).

An illustration of the focus motor and detection optics is given in Fig. 21.3.26. To track the line of very narrow pits with a closed-loop servo like the one depicted, two auxiliary light beams are formed, which are slightly displaced from the centerline of the track, one on each side of the correct position. Two photodiodes are introduced into the optical paths on either side of the quadrant detectors, as portrayed in Fig. 21.3.26. The error signal generated by the difference in output of these diodes, after appropriate filtering, is used to deflect a galvanometer mirror and thus move the beam laterally to correct the error. Similar techniques can be used for optical tracking, where the signal envelope of the recovered waveform after opto-electrical transformation by the receiving photodiodes is kept as high as possible.

Figure 21.3.27 depicts an outline of the signal path of the compact disc. Assuming correct servo control and focus of the laser beam, the electrical signal coming from the photodiodes is amplitude controlled by a gain amplifier.
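The following minimal Python sketch makes the RLL idea concrete: it checks that a channel-bit string keeps every zero run between two ones within given bounds, and converts channel bits to a pit/land pattern in NRZI fashion (each 1 toggles between pit and land). The bounds d and k are parameters, since the exact EFM limits are not restated here, and the pit/land polarity is an arbitrary choice.

    def rll_ok(channel_bits, d=2, k=10):
        """Check an RLL(d, k) constraint: between any two successive 1s there
        must be at least d and at most k zeros (d and k are parameters)."""
        inner = channel_bits.strip("0")     # edge runs merge with adjacent frames
        pieces = inner.split("1")[1:-1]     # zero runs strictly between ones
        return all(d <= len(p) <= k for p in pieces)

    def to_pits(channel_bits):
        """NRZI-style mapping: every channel bit '1' toggles pit <-> land."""
        level, out = 0, []
        for b in channel_bits:
            if b == "1":
                level ^= 1                  # a transition marks a binary 1
            out.append(level)               # flat stretches are binary 0s
        return out                          # 1 = pit, 0 = land (choice arbitrary)

    print(rll_ok("100100000001"))   # True: zero runs of length 2 and 7
    print(rll_ok("1100"))           # False: adjacent ones violate the minimum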

FIGURE 21.3.26 Example construction for focusing the laser beam on the optical disc (laser and beam-forming optics, beam splitter, cylinder lens, quadrant photodetector, focus motor and objective lens, transparent protective layer, recorded surface, disc).

The output, in the form of a digital string of channel bits, is buffered and demodulated by the EFM demodulator. This involves a mapping of groups of 14 channel bits plus three merging bits into consecutive bytes. Subsequently, these bytes are buffered in the error-correction (ECC) decoder, which recomputes data packets and corrects possible errors. The decoder consists of a Reed-Solomon block decoder combined with a so-called cross-interleaved Reed-Solomon code (CIRC). The latter involves data interleaving to spread out errors and to cut long burst errors into shorter pieces that can be handled by the Reed-Solomon decoder (a generic interleaving sketch is given below). After the complete ECC decoding, the audio samples become available as PCM data for digital-to-analog conversion (DAC).

FIGURE 21.3.27 Block diagram of signal path in compact disc.
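To see why interleaving cuts bursts into correctable pieces, consider this generic block-interleaver sketch. It is not the actual CIRC schedule (CIRC uses unequal delays), but it shows the principle: data are written into a matrix row by row and read out column by column, so a channel burst lands in many different rows after de-interleaving.

    def interleave(data, rows, cols):
        """Write row by row, read column by column (simple block interleaver)."""
        assert len(data) == rows * cols
        matrix = [data[r * cols:(r + 1) * cols] for r in range(rows)]
        return [matrix[r][c] for c in range(cols) for r in range(rows)]

    def deinterleave(data, rows, cols):
        return interleave(data, cols, rows)   # the inverse swaps the roles

    sent = list(range(24))
    channel = interleave(sent, 4, 6)
    channel[8:12] = ["X"] * 4                 # a burst of 4 channel errors
    received = deinterleave(channel, 4, 6)
    print([i for i, v in enumerate(received) if v == "X"])
    # [2, 8, 14, 20]: the burst is spread out, at most one error per 6-byte row

With at most one error per codeword row, a modest Reed-Solomon code can correct what was originally an uncorrectable burst.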

History and CD-i Concept The CD format was invented in 1980 and was introduced in Europe and Japan in 1982 and in the United States in 1983. This originally introduced format was CD-audio (CD-A, CD-DA, or "Red Book" CD, as specified in the International Electrotechnical Commission IEC-908 standard, available from the American National Standards Institute). Additional CD formats were subsequently introduced: CD-ROM (1984), CD-i (1986), CD-WO [CD-R] (1988), Video-CD (1994), and CD-RW (1996).

The primary focus in this section is on compact disc interactive (CD-i). Although CD-i is no longer widely used, its importance to the evolution of digital multimedia recording systems aimed at applications such as home entertainment, education, and training has been fundamental. It should be noted that the strong development of the CD-based formats laid the foundation for the recent success of the DVD (Ref. 5), which is discussed later in this chapter. Furthermore, the CD-i format closely resembles the video-CD format that is currently very popular in Asia. CD-i combines audio, video, text, graphics, and animation, while providing features such as interactivity. At the time of its introduction, CD-i provided several advantages over PC-based interactive systems, some of which still hold today:

• Cost, compared with that of building a PC with the same audiovisual performance and functionality.
• Compatibility, since each CD-i disc is compatible with every CD-i player. There are no "system requirements," such as a type of display adapter, sound card, version of the operating system, screen resolution, CD-ROM drive speed, drivers, hardware conflicts, and so forth, as in the PC-based scenario.
• Ease of use. A CD-i player and its software are very easy to use and do not require software setup, hardware adjustment, or other complex installation procedures. Moreover, CD-i can be connected to a variety of devices, such as TV sets and stereo systems. As an additional advantage, the user interfaces resemble those of CE devices, making the system far more comfortable for many people to use than a PC.
• Worldwide standard. CD-i is a worldwide standard, crossing the borders of various manufacturers and TV systems. Every disc is compatible with every player, regardless of its manufacturer or the TV system (PAL, NTSC, or SECAM) that is being used.

CD-i System Specification The CD-i players are based on a 68000 CPU running at least at 15 MHz, with at least 1 Mbyte of RAM, a single-speed CD drive, dedicated audio and video decoding chips, at least 8 kbyte of nonvolatile storage memory, and a dedicated operating system called CD-RTOS, which stands for compact disc real-time operating system. CD-RTOS is based on version 2.4 of Microware's OS-9/68K operating system, which is very similar to the Unix operating system and supports multitasking and multiuser operation. The operating system, as well as other player-specific software such as the player's start-up shell, is hard coded in a ROM of at least 512 kbyte.

CD-i Sector Format CD-i is able to retrieve in real time the audiovisual data stored on the disc and send this information to the appropriate decoder ICs, without putting a heavy load on the overall system performance. Hence, a CD-i player does not need much RAM or processing power, since all audio and video decoding is performed in real time without storing large amounts of data in RAM for later decoding. To enable the simultaneous retrieval of both audio and video information, data are interleaved on the CD-i disc. Since a CD-i disc is read at a constant speed, the designer needs to be aware of the audio and video bit-stream quality. For instance, when a lower audio quality is used, fewer sectors are occupied than with a higher quality. Alternatively, it is also possible to read only the sectors belonging to one audio channel at a time, and then move back to the beginning of the disc and read the sectors of another audio channel. Since a CD-i disc holds 74 min and the lowest audio quality uses only one out of every 16 sectors, audio can be digitally recorded for over 19 h (16 × 74 min).

Because of this real-time reading of sectors, every CD-i player reads data at the same speed, sometimes referred to as normal speed or single speed. It would be pointless to equip a CD-i player with a higher-speed CD drive, since data are to be read in real time according to the specifications (thus at single speed), and audio, video, and animation would be out of sync when read at a higher speed. Special attention has therefore been paid to the development of encoding techniques that enable high-quality audio and video within the single data speed, and hence a longer playing time, instead of using a high-speed drive and thereby reducing the playing time.

For CD-ROM, the mode 1 sector format is defined, which allows for 2048 bytes of user data in every sector, with an accompanying 280 bytes of error-correction information in each sector. When data are read at 75 sectors per second (the normal CD speed), this results in a data rate of 150 kbytes per second. For CD-i, it is not always necessary to have error correction in each sector. For example, audio and video need a much lower degree of correction than data. Instead, the 280 bytes used for error correction in mode 1 can be added to the 2048 user bytes, resulting in 2324 bytes of user data per sector. This larger sector size results in an improved data rate of about 170 kbytes per second, which is referred to as mode 2. Within mode 2, two forms were defined: form 1, which incorporates the original error correction and is used for data, and form 2, which lacks the error correction and is used for multimedia. Mode 2 added an additional header to the header of mode 1, which holds information about the type of data contained in a sector (audio, video, data, and so forth), the way it is coded (for example, which audio level is used), and an indication of the sector form used. This header is interpreted by the CD-i system for each sector, which is then processed by the appropriate decoders. Both forms of mode 2 sectors can be interleaved, so that program data, audio, and video can be read simultaneously from the disc. Note that when all sectors are form 1, the disc holds 648 Mbyte; when all sectors are form 2, the capacity is 744 Mbyte. A CD-i disc's capacity hence lies between 648 and 744 Mbyte. The sector arithmetic is illustrated below.
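A few lines of Python reproduce this sector arithmetic; the 75 sectors/s rate and the byte counts are taken from the text, and a 74-min disc is assumed. The computed totals come out near, though not exactly at, the 648 and 744 Mbyte quoted above, since the exact playing time behind those figures is not stated.

    SECTOR_RATE = 75                       # sectors per second (single speed)
    SECTORS = 74 * 60 * SECTOR_RATE        # sectors on an assumed 74-min disc

    for name, payload in [("mode 1 / mode 2 form 1", 2048),
                          ("mode 2 form 2", 2324)]:
        rate_kbps = payload * SECTOR_RATE / 1024      # kbyte/s
        capacity_mb = SECTORS * payload / 1024 ** 2   # Mbyte
        print(f"{name}: {rate_kbps:.0f} kbyte/s, {capacity_mb:.0f} Mbyte")
    # mode 1 / mode 2 form 1: 150 kbyte/s, 650 Mbyte
    # mode 2 form 2: 170 kbyte/s, 738 Mbyte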
Although a CD-i disc consists of only mode 2 sectors, a CD-i system must be able to read mode 1 sectors on CD-ROM discs, and of course the audio sectors that are defined for CD audio.

Physical Dimensions and Specifications A CD disc is 120 mm in diameter (60 mm radius), with a hole of 15 mm diameter (7.5 mm radius) and a thickness of 1.2 mm. Starting at the hole edge at 7.5 mm radius, there is a clamping area extending from 7.5 to 23 mm radius (this area is partly clear and partly metallized, and may include a visible inscription stamped by the manufacturer); then a 2-mm-wide lead-in area extending from radius 23 to 25 mm (containing information used to control the player); then the 33- or 33.5-mm-wide data area (program area) extending from radius 25 mm to a maximum of about 58 mm; then a lead-out area (which contains digital silence or zero data) of 0.5 to 1 mm width, starting at a radius of at most about 58 mm; and finally an unused area of about 1 mm extending to the outer edge.

CD-i Disc Structure A CD-i disc is divided into tracks. It contains at least one CD-i track and may optionally contain additional CD-Audio tracks that can also be played on a standard CD-Audio player. The first 166 sectors

of the CD-i track are message sectors, followed by the disc label. Subsequently, an additional 2250 message sectors follow that contain a spoken message in CD-Audio format, which warns users who put the disc in a regular CD-Audio player about possible damage to equipment or speakers if the disc is not taken out immediately. Usually, a modern CD-Audio player will recognize the CD-i track as a data track and will not play it, so the message is never heard. The disc label contains specified fields that offer information about the disc, such as the title and creator, but also the name of the CD-i application file that is to be run at start-up. Furthermore, the disc label contains the file structure volume descriptor, which is loaded into RAM at start-up. This allows the system to find a certain file on a CD-i disc in a single stroke. After these message sectors and the disc label, the actual CD-i data start.

CD-i Audio A minimal configuration (also denominated "Base Case") of a CD-i player should be able to decode standard PCM audio as specified for CD-Audio, as well as a dedicated audio coding scheme called adaptive delta pulse code modulation (ADPCM). The difference from PCM is that samples are not stored individually; only the difference (delta) from the previous sample is recorded. Owing to the correlation between adjacent samples, a significant decrease in the used storage space on the disc can be achieved, and hence in the bit stream being read from the disc. If normal PCM CD-Audio were used (which occupies all successive sectors), this would leave no room for video or animations to be read without interrupting the audio playback.

CD-i provides three levels of ADPCM audio, all of which can be used either in mono or stereo, as shown in Table 21.3.12. Level A provides hi-fi quality, level B gives FM radio quality, and level C is for voice quality. Level C mono can be used for up to 16 voice channels, e.g., in different languages. The sector structure facilitates switching between languages on the fly, as the sectors are interleaved on the disc. Thus, when ADPCM level C is used, only 1 out of every 16 sectors needs to be used for audio, leaving all other sectors for other data such as video or animation. It is also possible to record different audio channels at once, allowing seamless switching between, e.g., various languages. The disc may also be read from the beginning while decoding a different audio channel, allowing for increased audio playing times, as indicated in Table 21.3.12.

A CD-i player equipped with a digital video cartridge is also able to decode MPEG-1 Layer I and II audio. MPEG is far more efficient in coding audio, resulting in an even longer playing time while providing highly increased audio quality compared to ADPCM. This is because MPEG audio is based on precision adaptive subband coding (PASC; Ref. 35), which uses perceptual coding to store only those audio signals that are audible, while filtering out other signals. Note that CD-i offers a very flexible way of using MPEG audio (for example, at various bit rates and quality levels), but cannot decode MPEG-1 Layer III (MP3) files.

TABLE 21.3.12 Different ADPCM Formats

    Format                  Frequency (kHz)  Bits per sample  Used sectors     Recording time
    CD-audio PCM            44.1             16               all sectors      up to 74 min
    ADPCM Level A stereo    37.8             8                1 in 2 sectors   up to 2.4 h
    ADPCM Level A mono      37.8             8                1 in 4 sectors   up to 4.8 h
    ADPCM Level B stereo    37.8             4                1 in 4 sectors   up to 4.8 h
    ADPCM Level B mono      37.8             4                1 in 8 sectors   up to 9.6 h
    ADPCM Level C stereo    18.9             4                1 in 8 sectors   up to 9.6 h
    ADPCM Level C mono      18.9             4                1 in 16 sectors  up to 19.2 h
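The recording times in Table 21.3.12 follow directly from the sector shares: playing time scales inversely with the fraction of sectors a format occupies. A quick Python check, assuming the 74-min disc used in the text:

    DISC_MIN = 74
    for fmt, share in [("PCM", 1), ("Level A stereo", 2), ("Level B stereo", 4),
                       ("Level C stereo", 8), ("Level C mono", 16)]:
        print(f"{fmt}: up to {DISC_MIN * share / 60:.1f} h")
    # PCM: up to 1.2 h (= 74 min)
    # ...
    # Level C mono: up to 19.7 h (the 'over 19 h' quoted above)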

CD-i Video The video image of a CD-i player consists of four "planes," which are overlaid on top of each other. The first plane is used by a cursor, and its size is limited to 16 × 16 pixels. The second and third planes are shown underneath the cursor and are used for full-screen images. The fourth plane is used for a single-colored background or for MPEG full-motion video (or, on some players, to display video from an external source). Parts of an image on one of the middle two planes can be transparent, so that the underlying plane becomes visible. This can be used, for example, to show subtitles or menus on an image. Both planes can also be used for blending and fading effects. The various encoding techniques for video that can be used in CD-i are indicated below.

DYUV. DYUV, or delta YUV, is used for the encoding of high-quality photographs and other natural images. It is based on the fact that the human eye is more sensitive to differences in brightness than to differences in color. Therefore, it stores one color for a set of pixels, and a brightness value for each pixel. The result is an image of slightly more than 100 kbyte. Owing to the complexity of a DYUV image, the encoding must take place in advance; the image can be neither created nor modified in the player. DYUV is used mostly in CD-i titles because of its high quality and efficient storage.

RGB555. RGB555 is a compression format that allows only 5 bits per R, G, and B value, resulting in a picture with a maximum of over 32,000 colors. Since RGB555 uses both planes to display the image, it cannot be used in combination with other graphics. An RGB555 image is roughly 200 kbyte in size. The image can be altered by the player at run time. RGB555 is actually never used in regular CD-i titles because of its inefficiency and limitations in usage.

CLUT. CLUT, or color look-up table, is a compression method aimed at coding simple graphics. The colors used in a certain picture are stored in a CLUT table, which reduces the size of the image dramatically, because color values refer to the appropriate CLUT entry instead of indicating, for example, a 24-bit color value. In CD-i, a CLUT image can have an 8-bit (256 colors), 7-bit (128 colors), 4-bit (16 colors), or 3-bit (8 colors) resolution.

Run-Length Encoding (RLE). RLE is a variation of the CLUT compression method that, besides storing the CLUT color table in an image, further reduces the image size by storing "run lengths" of repeating horizontal pixels with the same color (a small sketch of this idea follows at the end of this subsection). The results are usually pictures between 10 and 30 kbyte in size. This makes RLE ideal for compressing animations that contain large continuous areas with similar colors.

QHY. QHY, or quantized high Y, is an encoding technique that combines DYUV and RLE, resulting in a very sharp, high-quality natural image that is displayed in CD-i's high-resolution mode. A QHY image is usually about 130 kbyte in size. Since it contains a DYUV component, it cannot be modified by the player. QHY is, for example, used to display the images of a photo-CD in high resolution on a CD-i player.

CD-i can display both main planes in normal, double, or high resolution, which are 384 × 280, 768 × 280, and 768 × 560 pixels, respectively. Some encoding techniques are limited to a single resolution; for example, a DYUV image is always standard resolution. It is possible for the images on the two planes to be displayed at once, even if they are in different resolutions. For example, a double-resolution CLUT4 menu bar can be overlaid on a standard-resolution DYUV image. CD-i's highest resolution (768 × 560 pixels), used for QHY images, is the highest resolution that can be made visible on a normal TV set.
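A minimal sketch of the horizontal run-length idea behind the RLE format described above, operating on one scan line of CLUT indices; the (value, count) pair representation is our own illustration, not the CD-i file layout.

    def rle_line(pixels):
        """Collapse a scan line of CLUT indices into (value, run_length) pairs."""
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([p, 1])       # start a new run
        return [tuple(r) for r in runs]

    print(rle_line([7, 7, 7, 7, 2, 2, 9]))   # [(7, 4), (2, 2), (9, 1)]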
To enable audiovisual data to be coded at CD-i bit rates, MPEG-1 compression has been employed. This standard allows CD-i to display 384 × 280 progressive video sequences at a video quality roughly comparable to standard VHS.

Full Motion CD-i Players The quality of the "Base Case" CD-i players can be further extended to "Full Motion" audiovisual quality (Ref. 9). For this purpose, two coding formats are defined for video sectors: "MPEG video" and "MPEG still picture," for storage of MPEG-coded progressive video and interlaced still-picture data. One new coding format is defined for audio sectors: "MPEG audio," for storage of coded MPEG audio data. All these video, still-picture, and audio sectors are form 2 sectors with a usable data field of 2324 bytes, while the Full Motion application program is stored in a normal data sector. To code audio, CD-i applies MPEG Layer I and Layer II. Audio can be coded in stereo and in mono, at varying bit rates. For instance, for speech and other audio not requiring high quality, the lowest bit rate for mono

of 32 kbit/s can be used. For high-quality audio, a bit rate of 192 kbit/s in stereo is necessary to achieve quality similar to that of the compact disc. Audio can be coded in stereo, joint stereo (i.e., intensity stereo), dual channel, or single channel. All bit rates allowed by MPEG for these layers are supported. The applied audio sampling frequency is 44.1 kHz. The bit rates can vary from 32 to 448 kbit/s for Layer I and from 32 to 384 kbit/s for Layer II. For more information, see Ref. 9.

The Full Motion system in CD-i (Ref. 41) supports the video parameters defined in the video part of the MPEG standard, but some parameters have a larger range in the CD-i system. For instance, while in the MPEG-1 specification the bit rate should not exceed 1.856 Mb/s, in the Full Motion system a maximum bit rate of about 5 Mb/s is allowed. Also, in the Full Motion system, video coding with a variable bit rate is allowed, leading to a higher visual quality. The maximum picture width is 768 pixels and the maximum height is 576 lines. The supported picture rates are 23.976, 24, 25, 29.97, and 30 Hz. The maximum size of the VBV buffer is 40 kbyte. More information on the specific video parameters employed for CD-i Full Motion video can be found in Ref. 9. Digital video is displayed on the background plane and can be overlaid with images coded in CLUT or RLE format.

The system part of the MPEG standard applies a multiplex structure consisting of packs and packets. In each MPEG video sector, still-picture sector, and audio sector, one pack is stored. Each pack consists of a pack header and one or more packets containing a packet header followed by data from one audiovisual stream. More information can be found in Ref. 9.

The authoring system provided by CD-i (Ref. 41) allows the audiovisual compression parameters to be optimized to the requirements of the application. On a CD-i disc, multiple MPEG audio and video streams can be recorded in parallel, e.g., for applications requiring audio in different languages. Moreover, during the authoring process, trade-offs can be made between the number of streams, the bit rate, the audio or picture quality of each stream, and, in the case of video, the picture size.

Compatibility with Other Formats A CD-i player only plays discs that incorporate a dedicated application program designed for CD-i's operating system and hardware components. For some disc types, such as video-CDs, photo-CDs, and CD-BGM, this CD-i application is a mandatory part of the disc's specification. Next to this, the CD-i standard requires the player to be able to play back standard CD-Audio discs.

Differences between Video-CD and CD-i. A video-CD is a compact disc with up to 75 min of VHS-quality video with accompanying sound in CD quality. Audio and video are coded according to the MPEG-1 standard, and the disc layout (see Fig. 21.3.28) is based on the CD-i Bridge specification (see Fig. 21.3.29) to allow playback on a variety of playback devices such as CD-i players and dedicated video-CD players. Video-CD became very popular mainly in Asia, while elsewhere it is mainly used as a prototyping tool. Although video-CD compatibility is not required for DVD-video players, it is very likely that video-CD playback

FIGURE 21.3.28 Video CD—disc data allocation.

FIGURE 21.3.29 CD-i bridge.

functionality is included, since every DVD-video player must be able to decode MPEG-1 as well. Another difference is that the resolution of the MPEG video on a CD-i movie disc is slightly higher than the defined resolution of a video-CD disc (384 × 280 for the CD-i "Base Case" players instead of 352 × 240 for video-CD). This also prevents extracting the video from a CD-i disc in order to burn a video-CD; to do so, the video must be reencoded according to the White Book (video-CD) specification, leading to a decreased picture quality.

BRIEF DESCRIPTION OF MPEG VIDEO CODING STANDARD

The Meaning of I, P, and B Pictures MPEG video coding is described here because it applies both to VCD (and CD-i) and to the DVD system described next. Many good overviews of the MPEG video coding standards can be found in a variety of books (Ref. 13) and articles (Ref. 11). The MPEG video compression algorithm relies on two basic techniques: block-based motion compensation for the reduction of temporal redundancy, and DCT-based compression for reducing spatial correlations. DCT-based compression has been described in detail in the section on DV recording, and the intraframe DCT coder of DV comes close to MPEG video processing with respect to DCT, quantization, and VLC coding. The focus in this section is therefore on exploiting the temporal redundancy.

In MPEG, three picture types are considered: intra pictures (I), predicted pictures (P), and bidirectionally predicted pictures (B). Intra pictures are intraframe coded, i.e., temporal correlation is not exploited for the compression of these frames. I pictures provide access points for random access but achieve only moderate compression because, as their name indicates, compression is limited to within the same picture. Predicted pictures, or P pictures, are coded with reference to a past I or P picture and are in general themselves used as a reference for a future frame. Note that only I and P pictures can be used as a reference for temporal prediction. Bidirectional pictures, or B pictures, provide the highest level of compression by temporally predicting the current frame with respect to both a past and a future reference frame. B pictures can achieve higher compression since they handle uncovered areas effectively: an area just uncovered cannot be predicted from the past reference, but it can be properly predicted from the future reference. They also have the additional benefit that they decouple prediction and coding: since B pictures are never used as a reference, no error propagation is incurred from their prediction errors. In all cases, when a picture is coded with respect to a reference, motion compensation is applied to improve the prediction and the resulting coding efficiency. The relationship between the three picture types is illustrated in Fig. 21.3.30.

An I picture and the B and P pictures predicted from it form a group of pictures (GOP). The GOP forms an information layer in MPEG video and is, from the data point of view, built up as shown at the lower side of Fig. 21.3.30. First, the I picture is coded at the start and kept in a memory in the encoder. These memories are indicated at the bottom of Fig. 21.3.31. (The resulting transmission order is sketched below.)
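The following minimal Python sketch illustrates the reordering from display order to coded (transmission) order for a simple GOP structure like the one in Fig. 21.3.30. The IBBP pattern and the picture labels are assumed for illustration; the rule is simply that each I or P reference is transmitted before the B pictures that depend on it.

    def coded_order(display):
        """Reorder pictures from display order to MPEG transmission order:
        each I/P reference is sent before the B pictures that lean on it."""
        out, pending_b = [], []
        for pic in display:
            if pic[0] in "IP":          # a reference closes the B group before it
                out.append(pic)
                out.extend(pending_b)
                pending_b = []
            else:                       # B pictures wait for their future reference
                pending_b.append(pic)
        return out + pending_b

    display = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]
    print(coded_order(display))   # ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']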

FIGURE 21.3.30 A group of pictures (GOP) divided into I, P, and B pictures.

Second, the next P picture is selected and coded with reference to the I picture. The pictorial result after local decoding is also stored in a second memory. Third, the two B pictures in between are processed in sequential order. Each B picture is coded with reference to both the past I picture and the nearby future P picture, which explains the term bidirectional. When the B pictures in between have been processed, the process repeats itself: the next P picture is coded and stored as a reference, and so on. Note that MPEG video coding as a result changes the transmission order of pictures over the channel (see the bottom of Fig. 21.3.30 for the modified order).

MPEG Coding and Motion Compensation Coding and processing with reference to a past and a future picture involve motion compensation (MC). This means that the motion of objects is taken into account and predicted, and only the difference between

FIGURE 21.3.31 MPEG video encoder block diagram.

the temporal prediction and the actual video data is coded (see Fig. 21.3.31). In order to predict the motion of video data, the motion is first measured in an initial step, called motion estimation (ME). Motion compensation and estimation in MPEG are based on block-based processing: blocks of video samples in the actual and reference pictures are compared (a block-matching sketch is given below). The typical block size for comparison in MPEG is a macroblock of 16 × 16 pixels. There is a trade-off between the coding gain due to motion compensation and the cost of coding the necessary motion information (prediction type, motion vectors relative to the reference pictures, and so forth); the choice of 16 × 16 macroblocks as the motion compensation unit in MPEG is the result of such a trade-off. The motion information is coded differentially with respect to the motion information of the previous adjacent macroblock. The differential motion information is likely to be small, except at object boundaries, and it is coded using a variable-length code to provide greater efficiency.

To reduce the spatial correlation in I, P, and B pictures, DCT-based coding is employed. After the computation of the transform coefficients, they are quantized using a visually weighted quantization matrix. Subsequently, the quantized coefficients are zigzag scanned and grouped into (run, amplitude) pairs. To further improve the coding efficiency, a Huffman-like table for the DCT coefficients is used to code the (run, amplitude) pairs. Only those pairs with a high probability of occurrence are coded with a VLC, while less likely pairs are coded with an escape symbol followed by fixed-length codes, in order to avoid extremely long codewords and reduce the cost of implementation. All these steps are indicated in Fig. 21.3.31.

The quality of video compressed with the MPEG-1 algorithm at rates of about 1.2 Mb/s is comparable to VHS-quality recording. The quality of MPEG-2 video is higher, but it is also operated at a higher bit rate, between 3 and 5 Mb/s. This is discussed further in the next section on DVD.
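As an illustration of block-based motion estimation, the following sketch does an exhaustive full search that minimizes the sum of absolute differences (SAD) over a small search window. Real encoders use faster search strategies; the window size here is an arbitrary choice.

    import numpy as np

    def best_motion_vector(ref, cur, bx, by, n=16, search=7):
        """Full-search motion estimation for the n x n block of 'cur' whose
        top-left corner is (bx, by): minimize SAD over +/- 'search' pixels."""
        block = cur[by:by + n, bx:bx + n].astype(np.int32)
        best = (0, 0, np.inf)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                    cand = ref[y:y + n, x:x + n].astype(np.int32)
                    sad = np.abs(block - cand).sum()   # sum of abs differences
                    if sad < best[2]:
                        best = (dy, dx, sad)
        return best   # (dy, dx, sad) of the best-matching reference block

Only the motion vector (dy, dx) and the prediction residual are then coded, which is where the temporal coding gain comes from.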

THE DVD SYSTEM

Introduction and History The DVD system was introduced in 1996 (Ref. 14) and has experienced a strong growth of interest and sales in the consumer area. The key factors of its success are the very high quality of the reproduced video and audio signals and the fact that the storage is based on a high-capacity optical disc medium. The latter factor builds clearly on the success of compact disc technology, albeit with strongly improved densities to boost the total storage capacity of the disc. The success of DVD also results from the perfect fit with the trends taking place in the consumer and computer industries. In the 1990s, computer applications expanded continuously toward high-quality audio and video recording and playback functions. For example, video compression with MPEG-1 (Ref. 10) and MPEG-2 (Ref. 12) was adopted gradually, and the CD audio playback function evolved into CD-ROM and later CD-R/CD-RW recording subsystems in multimedia computers. Nowadays, DVD-ROM has become an integral part of the modern multimedia computer, thereby (temporarily) solving the ever-increasing demand for more storage capacity on a removable medium. DVD-video players share most of the physical and mechanical components of DVD-ROM drives in the computer. In the coming years, DVD recording (R/RW), which has already been introduced, will occupy a significant part of consumer and general computing applications.

A DVD holds 4.4 to 15.9 GB (gigabytes), which is the same as 4.7 to 17 billion bytes. The capacity improvement of the typical 4.7-GB DVD compared to the CD (0.65 GB) results from several factors, which are shown in Table 21.3.13. The expansion to larger capacities results from extra bit layers, which are elucidated in the next subsection. The DVD capacity was tuned to satisfy one of the main applications: storing a full-length, full-quality movie on a single disc. Over 95 percent of Hollywood movies are shorter than 2 h 15 min, so 135 min was taken as a guideline for the design of a digital video disc system. Uncompressed movies, e.g., with 4:2:2 10-bit sampling, run at 270 Mb/s and consume more than 32 Mbyte/s, requiring more than 250 gigabytes of storage capacity. With resampling to a somewhat lower resolution at 124 Mb/s and MPEG-2 video coding offering a compression factor of 30, the average bit rate is reduced to 3.5 Mb/s for the video signal. Additionally, on average, three audio tracks consuming 384 kb/s each are added, to arrive at a 4.4 Mb/s average total bit rate.

TABLE 21.3.13 Key Parameters of the CD and DVD Standards, Where Improvement Factors in Both Density and Channel Coding Lead to an Improvement of a Factor of 7 in Capacity

    Parameter                  CD                  DVD                 Factor
    Pit length                 0.83 µm             0.40 µm             2.08
    Track pitch                1.6 µm              0.74 µm             2.16
    Data area surface          8605 mm²            8759 mm²            1.02
    Channel modulation ratio   8/17 EFM            8/16 EFM+           1.06
    Error correction ratio     1064/3124 (34%)     308/2366 (13%)      1.32
    Sector overhead ratio      278/3390 (8.2%)     62/2418 (2.6%)      1.06

Unlike the DV format, almost everything in the DVD standard is variable. The previous discussion on playing time was entirely based on average numbers. Let us consider some variations on the application. If the movie producer skips two audio tracks, the playing time increases to nearly 160 min. The difference can, e.g., be used to increase the picture quality. The maximum bit rate can be as high as 9.8 Mb/s for the video, and the total bit rate including audio is limited to 10.08 Mb/s. If this maximum rate is sustained, it leads to a playing time of 62 min. At the other end of the spectrum, if a movie is recorded in MPEG-1 at about 1.2 Mb/s, the playing time becomes more than 9 h. This discussion applies to DVD-video. When considering DVD-ROM, the flexibility in usage and in the recorded data is unlimited. If a new video coding technique is found in the future, the disc may be used to store more than 3 h of high-quality film. The playing-time arithmetic is illustrated below.
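These playing times follow from capacity divided by bit rate. A small Python check, using the 4.7-billion-byte single-layer capacity from the text:

    CAPACITY_BYTES = 4.7e9        # single-layer DVD, as quoted in the text

    def playing_time_min(bit_rate_mbps):
        """Playing time in minutes at a given average total bit rate (Mb/s)."""
        return CAPACITY_BYTES * 8 / (bit_rate_mbps * 1e6) / 60

    print(f"{playing_time_min(4.40):.0f} min")   # ~142 min at the 4.4 Mb/s average
    print(f"{playing_time_min(10.08):.0f} min")  # ~62 min at the maximum rate
    print(f"{playing_time_min(1.20):.0f} min")   # ~522 min (~8.7 h) for MPEG-1;
                                                 # 'more than 9 h' above assumes
                                                 # a slightly lower total rate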

Bit Layers in DVD The increase in storage capacity from 4.7 to 15.9 gigabytes results from a new technology applied in the DVD system, whereby information bits can be stored in more than one layer. The laser that reads the disc can be focused at two different levels. Hence it can read the top layer, or it can look through the top layer and read the layer beneath. This is enabled by a special coating on the top layer, which is semireflective and thereby allows the laser light to pass through. The reading process of a disc starts at the inside edge and gradually moves toward the outer edge, following a spiral curve. The length of this curve is about 11.8 km. When the laser reaches the end of the first layer, it quickly refocuses on the second layer and starts reading backwards. The switch to the second layer takes on the order of 100 ms, which is fast enough to be bridged electronically with data buffering in order to prevent hiccups in the presentation of the video or audio data. The reading of bits from the disc is faster than their decoding, to compensate for the time lost in refocusing.

The possibilities for various forms of reading and layering are shown in Fig. 21.3.32. First, DVDs can be single-sided or double-sided. Second, each side can have one or two information layers, indicated by the tiny square waves in Figs. 21.3.32(a) through 21.3.32(e). The information layers are in the middle of the disc for protection and to enable the double-sided options. A double-sided disc is constructed by binding two

FIGURE 21.3.32 Various configurations of bit layers and sides: SS = single side, DS = double side, SL = single layer, DL = double layer.

TABLE 21.3.14 DVD Capacities Resulting from the Various Combinations of Layers, Sides, and Disc Sizes (CD-ROM Added for Reference, Assuming Four Times the Speed)

    Type      Diam.   Sides   Layers   Gigabytes   Playing time (h)
    DVD-5     12 cm   SS      SL       4.70        2.25
    DVD-9     12 cm   SS      DL       8.54        4.0
    DVD-10    12 cm   DS      SL       9.40        4.50
    DVD-18    12 cm   DS      DL       17.08       8.0
              8 cm    SS      SL       1.46        0.75
              8 cm    SS      DL       2.66        1.25
              8 cm    DS      SL       2.92        1.5
              8 cm    DS      DL       5.32        2.5
    CD-ROM    12 cm   SS      SL       0.68        0.25

stamped substrates back to back. Single-sided discs have a blank substrate on one side. There are even more possibilities, since DVDs can also vary in size, i.e., 8 cm or 12 cm disc diameter. The most important options are shown in Table 21.3.14. Physically, each substrate has a thickness of 0.6 mm; the thickness of a DVD is thus 1.2 mm, equal to that of a CD. The glued construction makes a DVD more rigid than a CD, thereby enabling more accurate tracking by the laser and thus a higher density.

A special topic is backward compatibility with the CD-ROM format. Although this is not required, the key manufacturers consider this point of vital importance because of the large market presence of the CD formats. One of the problems is that on a CD the data lie in a stamped metal layer deep inside the disc (virtually at the backside), whereas on a DVD they lie in the middle. The focusing is therefore different, which leads to a player with three focusing levels. Another problem is that CD discs do not reflect well the 635- to 650-nm wavelength emitted by a DVD player. This sometimes leads to extra lasers or optical devices. A last aspect is compatibility with data formats (VCD, CD-i, photo-CD, and so forth), which requires extra electronics inside the data decoding circuitry. In practice, most DVD players are backward compatible with VCD and sometimes with one or more CD-R/RW formats.

The DVD Disc

The disc is read from the bottom, starting with data layer 0. The second layer, called layer 1, is further from the readout surface. The layers are actually very close, spaced about 55 µm apart. The backside coating of the deepest layer (layer 1) is made from metal (such as aluminum) and is thus fully reflective, while layer 0 has a semireflective coating to enable reading of layer 1 through it: layer 0 is about 20 percent reflective and layer 1 about 70 percent. The substrate is transparent. The laser light has a wavelength of 650 or 635 nm (red laser). The lens has a numerical aperture of 0.60, and the optical spot diameter is 0.58 to 0.60 µm. Figure 21.3.33 portrays the key areas of the disc. The burst cutting area provides unique identification of a disc via a 188-byte code and can be used by automatic machines with multiple-disc storage capacity (e.g., a jukebox). Table 21.3.15 shows the relevant parameters for recovering bits from a DVD-ROM or DVD-video. What strikes the eye here is the robustness: a scratch of 6 mm can be covered by the ECCs, which clearly outperforms the CD channel coding.

FIGURE 21.3.33 Physical 12 cm DVD disc area parameters.


TABLE 21.3.15 Key Physical Bit Reading Parameters for DVD-ROM

Pit length (SL)              0.40–1.87 µm
Pit length (DL)              0.44–2.05 µm
Track pitch                  0.74 µm
Average data bit length      0.267 µm (SL) / 0.293 µm (DL)
Average channel bit length   0.133 µm (SL) / 0.147 µm (DL)
Velocity                     570–1630 rpm (574–1528 rpm data)
Scanning speed               3.49 m/s (SL), 3.84 m/s (DL)

DVD Data Format

The logical data format of DVD is indicated in Table 21.3.16. A key parameter is the easy-to-use user-data sector size of 2048 bytes, which fits nicely with computer applications. The modulation code is called EFMPlus, a more efficient successor of the EFM code that was applied for the CD. This code enables a high bit density on the disc by limiting the number of zeros between two transitions: between each group of ones, the code bounds the number of zeros to at least 2 and at most 10. The modulation also helps keep the DC energy low and provides synchronization in the channel. The ECC is a powerful R-S product code having considerably less overhead than in the CD. By using row-column processing of the rectangular ECC data block of 192 by 172 bytes, it is well suited to removing burst errors resulting from scratches, dirt, and so on. Note that the gross user data rate here is 11.08 Mb/s, instead of the 10.08 Mb/s carried by the program stream itself; the extra 1 Mb/s is available for insertion of additional user data such as subpictures and the like. The following figures explain the data sector construction in more detail.

The construction of a DVD data sector is portrayed by Fig. 21.3.34. The sector consists of 12 rows of 172 bytes each. The first row starts with identification bytes (the sector number, with error detection on it) and copy protection bytes, and the last row ends with four EDC bytes computed over the payload. Figure 21.3.35 depicts the conversion from data sectors into ECC blocks. Sixteen data sectors together form a block of 192 rows of 172 bytes each. The rows of the ECC block are interleaved to spread out possible burst errors, thereby enhancing the removal of those errors. Looking vertically in Fig. 21.3.35, each of the 172 columns of the ECC block gets 16 bytes of outer parity check data assigned to it; this extra data forms the outer parity block. Similarly, for each of the 208 rows (192 + 16), a 10-byte inner parity check is computed and appended. The power of the R-S product code is such that a (random) error rate of 2 percent is reduced to less than one error in 10^15 bits. The product code works on relatively small blocks of data, resulting in a maximum correctable burst error of approximately 2800 bytes, which corresponds to about 6 mm of scratch length protection. To come to recording sectors, each group of 12 rows of the complete ECC block gets one row of parity added to it, leading to a spread of the parity codes as well.

TABLE 21.3.16 Data Format Parameters of DVD

Data sector (user data)        2048 bytes
Logical data sector size       2064 bytes (2048 + 12 header + 4 EDC)
Recording sector size          2366 bytes (2064 + 302 ECC bytes)
Unmodulated physical sector    2418 bytes (2366 + 52 sync bytes)
Modulated physical sector      4836 bytes (2418 × 2 bytes)
Modulation code                8–16 (EFMPlus)
Error correction code          R-S product code (208,192,17) × (182,172,11)
ECC block size                 16 sectors (32,768 bytes user data)
ECC overhead                   15% (13% of recording sector)
Format overhead                16% (37,856 bytes for 32,768 user bytes)
User/channel data rate         11.08/26.16 Mb/s


FIGURE 21.3.34 Construction of a DVD data sector.

Thus, a recording sector consists of 13 (12 + 1) rows of 182 (172 + 10) bytes. Each row is split in the middle into two halves of 91 bytes, and 2 sync bytes are inserted at the beginning of each half. This gives per row two parts of 2 + 91 bytes (186 bytes in total), and the 13 rows together form 2418 bytes. This block is modulated to 4836 bytes by the EFMPlus code.
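The sector arithmetic above can be checked mechanically. The following minimal Python sketch is our own illustration (the constant names are not from the DVD specification); it reproduces the byte counts of Table 21.3.16:

    # Sketch: reproduce the DVD sector and ECC byte counts described above.
    USER_DATA = 2048                        # user bytes per data sector
    DATA_SECTOR = USER_DATA + 12 + 4        # + ID header + EDC = 2064 bytes
    ROWS, COLS = 12, 172                    # a data sector: 12 rows of 172 bytes
    assert DATA_SECTOR == ROWS * COLS

    OUTER, INNER = 16, 10                   # outer / inner R-S parity bytes
    ECC_BLOCK = (16 * ROWS + OUTER, COLS + INNER)    # (208, 182) R-S product code

    RECORDING_SECTOR = 13 * (COLS + INNER)           # 13 rows x 182 bytes = 2366
    PHYSICAL_SECTOR = RECORDING_SECTOR + 13 * 4      # 2 x 2 sync bytes per row = 2418
    MODULATED_SECTOR = 2 * PHYSICAL_SECTOR           # EFMPlus: 8 -> 16 bits = 4836

    print(ECC_BLOCK, RECORDING_SECTOR, PHYSICAL_SECTOR, MODULATED_SECTOR)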

DVD-Video

The DVD-video system is an application of the DVD-ROM standard and aims at playback of high-quality movies together with multichannel audio. Presently, this is one of the most widely accepted of the DVD products. DVD-video consists of one stream of MPEG-2 video coded at variable rate; up to eight streams of Dolby digital multichannel audio, MPEG-2-coded multichannel audio, or linear PCM audio; and up to 32 streams of subpicture graphics, with navigation menus, still pictures, and control for jumping forward and backward through the video program. With respect to video resolution, the format is intended to serve both analog standard-resolution TV and high-quality digital TV, and possibly digital HDTV in the future. Many multimedia computers can play back DVD-video as well.

FIGURE 21.3.35 Data sectors mapped into DVD ECC blocks that are subsequently converted into recording sectors.


FIGURE 21.3.36 Block diagram of DVD player with CD compatibility.

The system diagram of DVD-video is depicted in Fig. 21.3.36. The diagram shows audio data read from disc being decoded either as CD audio or via the DVD demultiplexing of audio streams. The video results from demultiplexing the DVD data into the various data streams. Audio is decoded with MPEG or Dolby digital decoders. The amount of postprocessing is growing continuously and may involve noise reduction, sharpness improvement, and even 100-Hz conversion (Europe). The NTSC/PAL encoder ensures seamless connection to existing TV receivers.

The data flow and buffering operate as follows. The channel bits from disc are read at a constant 26.16 Mb/s rate. The EFMPlus 8-16 demodulator halves this to 13.08 Mb/s, and this is reduced after ECC to a constant user data stream of 11.08 Mb/s. Data search information (DSI) is copied from this stream prior to writing it into a track buffer. A so-called MPEG program stream of variable rate, up to 10.08 Mb/s, is recovered; it contains five different packetized elementary streams (PES): video, audio, subpicture, presentation control information (PCI), and the data search information (DSI). The latter two streams are system overhead data. The 10.08 Mb/s data rate is the maximum rate of the video and audio streams together. Video may be coded as MPEG-1 or MPEG-2 and has the form of a PES stream; audio, if coded as MPEG, is also a PES stream. The PCI and DSI streams are mapped as MPEG private streams. Another private stream contains the subpicture data and alternative audio data. Since audio consumes data rate, the video is limited to 9.8 Mb/s for MPEG-2 or 1.85 Mb/s for MPEG-1. The pulse code modulation (PCM) audio stream is maximally 6.144 Mb/s, which can consist of eight channels of 16-bit audio sampled at 48 kHz. Note that each AV


decoder has its own buffers for keeping data at the input. The buffer sizes for video, audio, subpicture, and PCI are 232, 8, 52, and 3 kbyte, respectively.
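As a cross-check on the rate chain just described, the following minimal sketch (our own illustration, using the 2048 user bytes per 2418-byte physical sector derived earlier) reproduces the quoted figures:

    # Sketch: the DVD-video data-rate chain described above.
    channel_rate = 26.16e6                 # channel bits/s read from disc
    data_rate = channel_rate / 2           # EFMPlus 8-16 demodulation halves the rate
    user_rate = data_rate * 2048 / 2418    # strip ECC and sync overhead per sector
    print(round(data_rate / 1e6, 2))       # 13.08 Mb/s
    print(round(user_rate / 1e6, 2))       # ~11.08 Mb/s constant user stream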

DVD Audio and Data

Audio streams in DVD are based on three different possibilities, as follows:

• PCM audio. This is a straightforward stream of digital audio samples without compression, referred to as LPCM. The minimum bit rate is 768 kb/s; for multiple channels it can be at most 6.144 Mb/s. The typical bit rate is 1.536 Mb/s for a 48-kHz sampling frequency.

• Dolby digital. This is based on the Dolby digital coding scheme, having minimum, typical, and maximum bit rates of 64, 384, and 448 kb/s, respectively. The coding scheme is also known as AC-3. There can be one to five channels with an optional subwoofer channel (all together indicated as Dolby 5.1). All Dolby digital decoders are required to downmix 5.1 channels to two-channel stereo for PCM and analog output.

• MPEG audio. MPEG-1 audio layer II is based on coding the audio spectrum in individual subbands. It is limited to a 384-kb/s bit rate; the typical rate is 192 kb/s. MPEG-2 audio can extend this from 64 kb/s to maximally 912 kb/s; the typical rate is 384 kb/s. At maximum, seven channels can be used (7.1 with subwoofer). The advanced audio coding (AAC) mode is optimized for perception and is part of MPEG-2 audio coding. The primary audio track of DVD is always MPEG-1 compatible.

The subpicture data of 10 kb/s may grow to at most 3.36 Mb/s, whereas the DTS data can have a rate up to 1.536 Mb/s. Typically, the rates for these types of data are rather limited. Various coding types and parameters can be used in the above options. Besides MPEG, the coding algorithm can be LPCM, AC-3, DTS, or SDDS; the last two involve optional multichannel formats (e.g., 5.1) using compression on PCM-based channels. Audio sampling can be performed at 16-, 20-, or 24-bit resolution, the sampling rate is 48 or 96 kHz, and the dynamic range compensation (DRC) of the audio signals can be switched on and off. For certain applications a code can be used, such as surround (multichannel HQ audio), karaoke (for singing with supporting subtitling), or an unspecified code.

The data are organized in files. Each side of the disc contains one logical volume, using one partition and one set of files. Each file is a contiguous area of data bytes to support easy transfer and decoding. The complete list of constraints applies to the normal DVD-video application at playback. However, the format also allows special data files that do not satisfy the above constraints to be recorded, provided they are formatted into the DVD-video "Data" stream, behind the regular data required for playback. A specially designed player can then access the extra data. This option allows many special applications in the future.
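The LPCM rates quoted above follow directly from sampling rate, sample width, and channel count. A minimal sketch (the channel combinations shown are our own examples; several combinations reach the 6.144-Mb/s ceiling):

    # Sketch: uncompressed LPCM bit rates.
    def lpcm_rate_mbps(fs_hz, bits, channels):
        """PCM rate = sampling rate x bits per sample x channels."""
        return fs_hz * bits * channels / 1e6

    print(lpcm_rate_mbps(48_000, 16, 1))   # 0.768  -> the 768-kb/s minimum
    print(lpcm_rate_mbps(48_000, 16, 2))   # 1.536  -> the typical stereo rate
    print(lpcm_rate_mbps(96_000, 16, 4))   # 6.144  -> the maximum allowed rate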

DVD Conclusions

DVD is an evolutionary product that successfully builds on the widely accepted and distributed optical recording technology of the CD. The principal advantage is the excellent video and audio quality recorded on a single disc. Table 21.3.17 compares the main system parameters of DVD with those of DV and D-VHS. The table clearly shows that VHS lags behind in picture quality and that the digital formats DV and DVD are comparable with each other. The compression factor of DVD is high, but its quality approaches that of DV in spatial resolution. The MPEG compression relies heavily on motion compensation, which is not used in DV; thus the temporal quality of DV clearly outperforms that of DVD, which explains why there are semiprofessional DV systems (e.g., DVCPRO) for portable studio applications. The coming years will show further penetration of recordable DVD formats (DVD-RW, DVD+RW, DVD-R)36 into the market, with the above-mentioned resolution and quality. Meanwhile, even higher-capacity optical discs are being developed to pave the way for HDTV and to stay comparable with computer hard disks.


TABLE 21.3.17 DVD Compared to Other Existing Recording Systems

Parameter | DV | D-VHS | DVD
Video | Component digital | Composite analog or digital bit stream | Component digital
Playing time | 0.5–1 h (mini), 4.5 h (standard) | 2–6 h (VHS), 4–40 h (D-VHS) | 2.25 h (1 layer), 4.5 h (2 layers)
Data capacity | 5–50 Gbytes | 32–40 Gbytes (300–400 m tape) | 4.7–17 Gbytes
Compression | Intraframe DV, factor 5 | External (MPEG-2) | MPEG-1/MPEG-2, factor 30
Data rate | 25.146 Mb/s video, 1.536 Mb/s audio | 28.2 Mb/s HD, 14.1 Mb/s STD, 2–7 Mb/s LP | Up to 9.8 Mb/s combined V + A, VBR or CBR
Error correction | R-S product code + track interleave | R-S (inner/outer) | R-S product code
Channel modulation | 24-25 code | NA | EFMPlus (8–16 block)
TV video systems | 525/60/2:1 29.97 Hz; 625/50/2:1 25 Hz | NA | 525/60 1:1 24 Hz or 2:1 29.97 Hz; 625/50 2:1 25 Hz or 1:1 24 Hz
525 resolution | 720 × 480 (346 kpixels) | 320 × 480 (154 kpixels) VHS | 720 × 480 (346 kpixels)
625 resolution | 720 × 576 (415 kpixels) | 320 × 580 (187 kpixels) VHS | 720 × 576 (415 kpixels)
Audio | 2 tracks of 2 channels (32 kHz, 12-bit nonlinear PCM) or 1 track of 2 channels (32/44.1/48 kHz, 16-bit PCM) | Analog VHS stereo HiFi or digital bit stream (MPEG/Dolby like) | 8 tracks of up to 8 channels each (LPCM/Dolby digital/MPEG-2)
Audio SNR | 72 dB (12-bit), 96 dB (16-bit) | 40 dB (mono), 90 dB (HiFi) | 96–144 dB
Trick play, searching | Up to 70–90 times, with complete picture | Up to 10–20 times, with distorted picture (VHS) | Jumping to I pictures, several speeds, stills

NETWORKED AND COMPUTER-BASED RECORDING SYSTEMS

Personal Video Recorder (PVR)

The final part of the chapter is devoted to new and emerging recording technologies and products, most of which are related to applying computer technology further in consumer systems. The personal video recorder (PVR) is a TV recorder that records on a computer hard disk as opposed to tape. The PVR is also called personal digital recorder, digital video recorder, digital network recorder, smart TV, video recording computer, time-shifted television, hard disk recorder, personal television receiver, television portal, or on-demand TV. The PVR has evolved as a direct consequence of digital recording and low-cost hard disk storage. The advantages of hard-disk-based recording are:

• Allowing random access to content

• Metadata can be stored along with the content (allowing easier searching, selection, and management of content, segmentation of content, richer interaction, and so forth)

• Easier transfer and redistribution of content

Some of the features provided by the PVR justify its popularity in first product trials:


• Simultaneous record and replay, allowing "trick" modes such as pausing and rewinding live TV. This allows pausing of a live TV program when the phone rings, later viewing of a program that has already started but is not yet finished, "fast-forwarding" through parts of programs when watching a time-shifted program, and so forth.

• Sophisticated interactive multimedia information services, with regular program and news updates instantly available to consumers. For instance, downloads of electronic program guides (EPGs) with the TV signal or via the phone network simplify program selection, browsing (by time, by channel, and so forth), and recording.

• Automatic indexing of recorded programs, allowing easy retrieval and manipulation of content.

• "Intelligent" agents that can automatically record or recommend TV programs that they think the viewer might want to watch. For instance, the EPG can also include metadata describing the programs (e.g., the actors, the genre) or a brief text summary, which can be used by the PVR to manage the recording process. Consequently, the PVR can use this metadata to automatically record or recommend programs similar to those the user has recorded in the past or to preferences entered as part of the set-up procedure.

• Nonlinear viewing of programs that were recorded in a linear manner, e.g., by skipping certain segments.

Several brands of PVRs exist: TiVo, Replay TV, Ultimate TV, Axcent, and so on. Subsequently, we explain some of the technology and issues behind the PVR by giving some implementation examples.

Content-Format I/O and Storage. The various PVR brands use different formats for the I/O and for storage. This choice is very important, since it influences the overall PVR architecture and the necessity for content protection, as well as the resulting quality of the stored content. For instance, TiVo's I/O interfaces are analog, even for digital services, thereby eliminating issues such as conditional access and scrambling. For storage, the analog signals are converted to digital and then compressed using MPEG-2 coding. This implies that when digitally broadcast content is stored using TiVo, the audiovisual quality can potentially be decreased by the cascaded decoding/encoding process. To solve this problem, other PVR configurations store the original broadcast digital signal in scrambled form on the hard disk, giving the potential for better audiovisual quality and the possibility for the conditional-access operator to control the replay of stored programs. On the positive side, however, because TiVo compresses the content, easy trade-offs between quality and storage capacity can be made. For example, if a TiVo has 40 gigabytes of hard disk capacity, this represents about 40 h of storage at "basic" quality or about 12 h at "best" quality. It should be noted, however, that the "best" quality is limited by the quality of the incoming audiovisual content.

Event- Versus Time-Driven Programming. To better understand the differences between these types of programming, let us consider two different implementations. For the TiVo PVR, data are downloaded daily via the telephone network, and depending on the availability of the data, TiVo can have program details for up to two weeks in advance. The TiVo data are entirely time-driven, such that a program is stored based on the a priori available time information. Alternatively, other PVRs are event-driven, i.e., they detect the program junctions.
For instance, the Axcent PVR uses a 64-kb/s channel on the satellite transponder, to which the PVR tunes by default, to keep recording when a program runs late.

Software Downloading to Existing PVRs for Enabling New Services. Different services can be enabled using the PVR functionality. For instance, "trailer selection" services are expected to appear, enabling the viewer to simply press a button when a trailer for a program is presented; the PVR will then automatically record the program whenever it is broadcast. This can be done using the program metadata (see next paragraph). Furthermore, other e-commerce services can be enabled via similar mechanisms, as described below.

Metadata. Since the introduction of the PVR, it has been argued that the scheduling of programs at attractive hours will lose its power to attract viewers, because the PVR can present desired video programs, already stored on its hard disk, on demand. If scheduling does indeed lose this power, its place will probably be taken by the metadata and services that support PVR functions. Consequently, the battle for the attention of the viewers will be won by the provider that can describe programs and services most


fully and attractively using metadata. Moreover, with a PVR it is possible for the viewer to skip all advertisements. Nevertheless, the content still has to be paid for, so new mechanisms are necessary to encourage viewers to watch commercials. To enable this, ubiquitous metadata can be employed to target advertisements at specific interest groups. Furthermore, the PVR's combination of storage, processing power, and supporting metadata offers potential for new kinds of programs (e.g., educational), where the viewer interactively navigates around the stored program. Note that standards are necessary for describing multimedia content, i.e., metadata, which is required by these services; MPEG-7 provides such a standard37 (see the "MPEG-7" section). Furthermore, while many PVRs already exist on the market, proprietary solutions restrict the user to a single service or content provider and lock together the broadcaster, the service provider, and the PVR manufacturer. Consequently, to enable PVR proliferation, standards are necessary for the metadata used to choose programs, for finding programs in a multichannel, web-connected broadcasting environment, for management of the rights that must be paid for in order to support the services, and so forth. These standards are generated in a new worldwide body, the TV-Anytime forum (see the "TV Anytime" section).
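As a rough illustration of the quality-versus-capacity trade-off mentioned for hard-disk recorders above, the sketch below back-solves the average video bit rates implied by the 40-h and 12-h figures; the rates used are our own assumptions, not published product settings:

    # Sketch: recording hours on a hard disk for a given average MPEG-2 bit rate.
    def recording_hours(disk_gigabytes, avg_mbps):
        bytes_per_hour = avg_mbps * 1e6 / 8 * 3600
        return disk_gigabytes * 1e9 / bytes_per_hour

    print(round(recording_hours(40, 2.2), 1))   # ~40 h ("basic" quality, ~2.2 Mb/s)
    print(round(recording_hours(40, 7.4), 1))   # ~12 h ("best" quality, ~7.4 Mb/s)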

Brief MPEG-7 Overview

As mentioned in the PVR section, digitally recorded audiovisual material is increasingly available to users in a variety of formats. For instance, persistent large-volume storage that allows nonlinear access to audiovisual content, such as hard-disk storage in powerful PC platforms and personal video recorders (PVRs), is becoming available in many consumer devices. Consequently, there is a need for rapid navigation and browsing capabilities to enable users to efficiently discover and consume the contents of stored audiovisual programs. Users will also benefit from having nonlinear access to different views of a particular program, adapted to their personal preferences, interests, or usage conditions, such as the amount of time they want to spend consuming the content or the resources available to their terminal. Such adaptability will enhance the value provided by the multimedia content. The MPEG-7 standard,38 formally named Multimedia Content Description Interface, provides a rich set of standardized tools to describe such multimedia content (see Fig. 21.3.37). A good description of the MPEG-7 standard can be found in Refs. 37 and 38. In this section, we summarize only those MPEG-7 components that can be useful in finding, retrieving, accessing, filtering, and managing digitally recorded audiovisual content. The main elements of the MPEG-7 standard37,38 are presented below.

• Descriptor (D) is a representation of a feature. A descriptor defines the syntax and the semantics of the feature representation.

• Description Scheme (DS) specifies the structure and semantics of the relationships between its components, which may be both Ds and DSs.

FIGURE 21.3.37 MPEG-7 standardization role.


• Description Definition Language (DDL) is a language to specify description schemes. It also allows the extension and modification of existing DSs. MPEG-7 decided to adopt XML Schema Language as the MPEG-7 DDL; however, the DDL requires some specific extensions to XML Schema Language to satisfy all MPEG-7 requirements.

• Binary representation provides one or more ways (e.g., textual, binary) to encode descriptions. A coded description is a description that has been encoded to fulfill relevant requirements such as compression efficiency, error resilience, random access, and so forth.

MPEG-7 offers a comprehensive set of audiovisual description tools that can be used for effective and efficient access (search, filtering, and browsing) to multimedia content. For instance, MPEG-7 description tools allow the creation of descriptions of content that may include:37

• Information describing the creation and production processes of the content (director, title, short feature movie)

• Information related to the usage of the content (copyright pointers, usage history, broadcast schedule)

• Information on the storage features of the content (storage format, encoding)

• Structural information on spatial, temporal, or spatio-temporal components of the content (scene cuts, segmentation in regions, region motion tracking)

• Information about low-level features in the content (colors, textures, sound timbres, melody description)

• Conceptual information about the reality captured by the content (objects and events, interactions among objects)

• Information about how to browse the content in an efficient way (summaries, variations, spatial and frequency subbands, and so forth)

• Information about collections of objects

• Information about the interaction of the user with the content (user preferences, usage history)
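Since MPEG-7 descriptions are XML documents produced according to the DDL, a description can be assembled with any XML tool. The Python sketch below builds a small description fragment of this kind; the element names are illustrative stand-ins, not the normative MPEG-7 schema:

    # Sketch: assembling an MPEG-7-style content description as XML.
    # Element names are illustrative only, not the normative MPEG-7 DDL schema.
    import xml.etree.ElementTree as ET

    desc = ET.Element("Description")
    creation = ET.SubElement(desc, "CreationInformation")
    ET.SubElement(creation, "Title").text = "Evening News"     # creation/production info
    ET.SubElement(creation, "Genre").text = "news"

    segment = ET.SubElement(desc, "VideoSegment", id="seg1")   # structural info
    ET.SubElement(segment, "MediaTimePoint").text = "T00:12:30"
    ET.SubElement(segment, "MediaDuration").text = "PT2M15S"

    print(ET.tostring(desc, encoding="unicode"))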

The process of generating metadata for content description, which can later be used for retrieving, accessing, filtering, and managing digitally recorded audiovisual content, is portrayed in Fig. 21.3.38. Note that the MPEG-7 metadata can either be physically co-located on the same storage system with the associated audiovisual content or be stored elsewhere. In the latter case, mechanisms that link the multimedia content and its MPEG-7 descriptions are needed.

FIGURE 21.3.38 Generating metadata for content description.

Example: MPEG-7 Usage in PVR Applications

For a better understanding of the importance of MPEG-7 to the field of digital recording, let us consider its benefits in a PVR application. In this application, content descriptions could be generated by a service provider,


separate from the original content provider or broadcaster. Certain high-level content descriptors, such as the program name and channel name, can be downloaded by the PVR in advance to provide an on-screen electronic program guide (EPG). This EPG enables the user to navigate efficiently at the program level and record programs easily. Moreover, summary descriptions can be made available in advance as well (e.g., in the case of movies) or downloaded at the end of a program (e.g., in the case of sports events). Furthermore, low-level descriptions that describe the various features of the content can also be provided. These descriptions can include information for video transcoding (i.e., transcoding hints), which can be used by PVRs or transcoding proxies whenever transcoding from the "source" (service provider) content quality to a lower quality suitable for local storage is desired. Additionally, other descriptions may be generated by the user, e.g., by marking highly entertaining segments for later review. Such a feature can be provided simply by the PVR copying the XML fragment associated with the selected segment, including its name and locators, and storing this element separately, along with some high-level elements that will allow its easy identification at a later time. Such fragments can be exchanged with friends or relatives. MPEG-7 is also developing description schemes that can capture the user's preferences with respect to specific content and store them on the PVR under the user's control. These schemes support personalized navigation and browsing, by allowing the user to indicate the preferred type of view or browsing, and automatic filtering of content based on the user's preferences.

TV Anytime

The TV Anytime Forum42 is an international consortium of companies dedicated to producing standards for PVRs. TV Anytime aims at developing a generic framework that incorporates standards, tools, and technologies for an integrated system providing a multitude of services, such as movies on demand, broadcast recording, broadcast searching and filtering, retrieving associated information from web pages, home banking, e-commerce, home shopping, and remote education. To enable this vision, TV Anytime will define specifications

• That will enable applications to exploit local persistent storage in consumer electronics platforms

• That are network independent with regard to the means for content delivery to consumer electronics equipment, including various delivery mechanisms (e.g., ATSC, DVB, DBS, and others), the Internet, and enhanced TV

• For interoperable and integrated systems, from content creators/providers, through service providers, to the consumers

• That provide the necessary security structures to protect the interests of all parties involved

Two important components in the TV Anytime framework are the digital storage and recording and the metadata, because they allow consumers to access the content they want, whenever they want it, and how they


want it (i.e., presented and tailored according to the user's preferences and requests). Metadata can easily be stored along with the content, enabling searching, selection, and management of the content in a much easier fashion than with current analog VCRs, and also allowing a richer interaction with the stored content. Other benefits of local digital storage are that viewers can order a program to be recorded with a single button press during a trailer; that intelligent agents, based on stored user preferences, can record TV programs they think a viewer might want to watch; and that a program (e.g., a news program) can be consumed in a nonlinear rather than linear manner. Note also that the digital recording process can be performed within the TV Anytime framework locally, remotely, or in a combined manner (see Fig. 21.3.39). Hence, digital recording of content on local storage and digital broadcasting, together with the framework provided by TV Anytime for content referencing and location resolution, metadata, and rights management and protection, provide significant benefits as compared with alternative forms of content delivery such as analog broadcasting, the Internet, and broadband networks.

FIGURE 21.3.39 Various digital storage models: (a) in-home consumer storage, personal digital recorder (PDR); (b) remote consumer storage, network digital recorder (NDR); (c) PDR + NDR combination.

REFERENCES

1. Y. Shiraishi, "History of Home Videotape Recorder Development," SMPTE J., pp. 1257–1263, December 1985.
2. K. Sadashige, "Transition to Digital Recording: An Emerging Trend Influencing All Analog Signal Recording Applications," SMPTE J., pp. 1073–1078, November 1987.
3. H. Sugaya and K. Yokoyama, Chapter 2 in C. D. Mee and E. D. Daniel, Magnetic Recording, Vol. 3, McGraw-Hill, 1988.
4. S. B. Luitjens, "Magnetic Recording Trends: Media Developments and Future (Video) Recording Systems," IEEE Trans. Magnetics, Vol. 26, No. 1, pp. 6–11, January 1990.
5. M. Umemoto, Y. Eto, and T. Fukinuki, "Digital Video Recording," Proc. IEEE, Vol. 83, No. 7, pp. 1044–1054, July 1995.
6. J. C. Whitaker and K. Blair Benson, Standard Handbook of Video and Television Engineering, 3rd ed., Chapter 8, McGraw-Hill, 2000.
7. S. Gregory, Introduction to the 4:2:2 Digital Video Tape Recorder, Pentech Press, 1988.
8. R. Brush, "Design Considerations for the D-2 NTSC Composite DVTR," SMPTE J., pp. 182–193, March 1988.
9. J. van der Meer, "The Full Motion System for CD-I," IEEE Trans. Consum. Electron., Vol. 38, No. 4, pp. 910–920, November 1992.
10. Coding of Moving Pictures and Associated Audio, Committee Draft of Standard ISO 11172, ISO MPEG 90/176, December 1990.
11. D. LeGall, "MPEG: A Video Compression Standard for Multimedia Applications," Commun. ACM, Vol. 34, No. 4, pp. 46–58, April 1991.
12. Coding of Moving Pictures and Associated Audio, International Standard ISO/IEC 13818, November 1994.
13. J. L. Mitchell, W. B. Pennebaker, C. Fogg, and D. LeGall, MPEG Video Compression Standard, Chapman and Hall, 1996.
14. J. Taylor, DVD Demystified: The Guide Book for DVD-ROM and DVD-Video, McGraw-Hill, ISBN 0-07-064841-7, 1998.
15. R. J. Clarke, Transform Coding of Images, Academic Press, 1985.
16. N. S. Jayant and P. Noll, Digital Coding of Waveforms: Principles and Applications to Speech and Video, Prentice Hall, 1984.
17. M. Rabbani and P. W. Jones, Digital Image Compression Techniques, SPIE Tutorial Texts, Vol. TT7, SPIE Opt. Engineering Press, Bellingham, 1991.
18. P. H. N. de With, Data Compression Techniques for Digital Video Recording, Ph.D. Thesis, University of Technology Delft, June 1992.
19. J. Watkinson, The Art of Digital Video, Focal Press, Chapter 6, pp. 226–274, 1990.
20. Outline of Basic Specifications for Consumer-Use Digital VCR, Matsushita, Philips, Sony, Thomson, July 1993.
21. S. M. C. Borgers, W. A. L. Heijnemans, E. de Niet, and P. H. N. de With, "An Experimental Digital VCR with 40 mm Drum and DCT-Based Bit-Rate Reduction," IEEE Trans. Consum. Electron., Vol. CE-34, No. 3, pp. 597–605, August 1988.


22. N. Doi, H. Hanyu, M. Izumita, and S. Mita, "Adaptive DCT Coding for Home Digital VTR," IEEE Proc. Global Telecomm. Conf., Hollywood (USA), Vol. 2, pp. 1073–1079, November 1988.
23. T. Kondo, N. Shirota, K. Kanota, Y. Fujimori, J. Yonemitsu, and M. Nagai, "Adaptive Dynamic Range Coding Scheme for Future Consumer Digital VTR," IERE Proc. 7th Int. Conf. Video, Audio & Data Recording, York (U.K.), Publ. No. 79, pp. 219–226, March 1988.
24. C. Yamamitsu, A. Ide, A. Iketani, T. Juri, S. Kadono, C. Matsumi, K. Matsushita, and H. Mizuki, "An Experimental Study for Home-Use Digital VTR," IEEE Trans. Consum. Electron., Vol. CE-35, No. 3, pp. 450–457, August 1989.
25. H.-J. Platte, W. Keesen, and D. Uhde, "Matrix Scan Recording, a New Alternative to Helical Scan Recording on Videotape," IEEE Trans. Consum. Electron., Vol. CE-34, No. 3, pp. 606–611, August 1988.
26. K. Kanota, H. Inoue, A. Uetake, M. Kawaguchi, K. Chiba, and Y. Kubota, "A High Density Recording Technology for Digital VCR's," IEEE Trans. Consum. Electron., Vol. 36, No. 3, pp. 540–547, August 1990.
27. M. Kobayashi, H. Ohta, and A. Murata, "Optimization of Azimuth Angle for Some Kinds of Media on Digital VCR's," IEEE Trans. Magnetics, Vol. 27, No. 6, pp. 4526–4531, November 1991.
28. Y. Eto, "Signal Processing for Future Home-Use Digital VTR's," IEEE J. Sel. Areas Commun., Vol. 10, No. 1, pp. 73–79, January 1992.
29. M. Shimotashiro, M. Tokunaga, K. Hashimoto, S. Ogata, and Y. Kurosawa, "A Study of the Recording and Reproducing System of Digital VCR Using Metal Evaporated Tape," IEEE Trans. Consum. Electron., Vol. 41, No. 3, pp. 679–686, August 1995.
30. C. Yamamitsu, A. Iketani, J. Ohta, and N. Echigo, "An Experimental Digital VCR for Consumer Use," IEEE Trans. Magnetics, Vol. 31, No. 2, pp. 1037–1043, March 1995.
31. P. H. N. de With and A. M. A. Rijckaert, "Design Considerations of the Video Compression System of the New DV Camcorder Standard," IEEE Trans. Consum. Electron., Vol. 43, No. 4, pp. 1160–1179, November 1997.
32. R. W. J. J. Saeijs, P. H. N. de With, A. M. A. Rijckaert, and C. Wong, "An Experimental Digital Consumer Recorder for MPEG-coded Video Signals," IEEE Trans. Consum. Electron., Vol. 41, No. 3, pp. 651–661, August 1995.
33. J. A. H. Kahlman and K. A. S. Immink, "Channel Code with Embedded Pilot Tracking Tones for DVCR," IEEE Trans. Consum. Electron., Vol. 41, No. 1, pp. 180–185, February 1995.
34. P. H. N. de With, A. M. A. Rijckaert, H.-W. Keessen, J. Kaaden, and C. Opelt, "An Experimental Digital Consumer HDTV Recorder Using MC-DCT Video Compression," IEEE Trans. Consum. Electron., Vol. 39, No. 4, pp. 711–722, November 1993.
35. G. C. P. Lokhoff, "Precision Adaptive Subband Coding (PASC) for the Digital Compact Cassette (DCC)," IEEE Trans. Consum. Electron., Vol. 38, No. 4, pp. 784–789, November 1992.
36. S. G. Stan and H. Spruit, "DVD + R—A write-once optical recording system for video and data applications," Int. Conf. Consum. Electron., Los Angeles, 2002, Digest of Techn. Papers, pp. 256–257, June 2002.
37. S. F. Chang, T. Sikora, and A. Puri, "Overview of the MPEG-7 Standard," IEEE Trans. Circuits Syst. Video Technol., Vol. 11, No. 6, pp. 688–695, June 2001.
38. B. S. Manjunath, P. Salembier, and T. Sikora, Introduction to MPEG-7: Multimedia Content Description Interface, Wiley, April 2002.
39. D-VHS: A First Look at a New Format, http://www.thedigitalbits.com/articles/dvhs.
40. K. A. S. Immink, Codes for Mass Data Storage Systems, Shannon Found. Publishers, ISBN 90-74249-23-X, Venlo, The Netherlands, November 1999.
41. F. Sijstermans and J. van der Meer, "CD-i Full-Motion Video on a Parallel Computer," Commun. ACM, Vol. 34, No. 4, pp. 82–91, April 1991.
42. http://www.tv-anytime.org
43. P. H. N. de With, M. Breeuwer, and P. A. M. van Grinsven, "Data Compression Systems for Home-Use Digital Video Recording," IEEE J. Sel. Areas Commun., Vol. 10, No. 1, pp. 97–121, January 1992.


CHAPTER 21.4

TELEVISION BROADCAST RECEIVERS

Lee H. Hoke, Jr.

GENERAL CONSIDERATIONS

Television receivers are designed to receive signals in two VHF bands and one UHF band and, optionally, a complement of the cable-TV channels, according to the United States and Canadian standards. The lower VHF band (channels 2 to 6) extends from 54 to 88 MHz in 6-MHz channels, with the exception of a gap between 72 and 76 MHz. The higher VHF band (channels 7 to 13) extends from 174 to 216 MHz in 6-MHz channels. The UHF channels start 254 MHz above the highest VHF channel, comprising 56 6-MHz channels extending from 470 to 806 MHz. Cable channels extend continuously from 54 to approximately 1000 MHz, also with 6-MHz spacing. Figure 21.4.1 shows the CATV channelization plan adopted jointly by the NCTA and EIA in 1983 and revised in 1994. The television tuner is thus required to cover a frequency range of more than 15:1. TV tuners of past generations used separate units to cover the UHF and VHF bands; current design practice includes the circuitry to receive all bands in a single unit.

The signal coverage of TV transmitters is generally limited to line-of-sight propagation, with coverage extending from 30 to 100 mi depending on antenna height and radiated power. The coverage area is divided into two classes of service, depending on the signal level. The service area labeled class A is intended to provide essentially noise-free service and specifies the signal levels shown below.

Channels    Peak signal level (µV/m)    Peak open-circuit antenna voltage (µV)

Class A service
2–6         2500                         3500
7–13        3500                         1800
14–69       5000                         800

Class B service
2–6         225                          300
7–13        630                          300
14–69       1600                         250
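Field strengths such as those in the table are often quoted logarithmically, in dB above 1 µV/m. A minimal conversion sketch (our own illustration):

    # Sketch: convert field strength in uV/m to dB above 1 uV/m.
    import math

    def dbu(uv_per_m):
        return 20 * math.log10(uv_per_m)

    print(round(dbu(2500), 1))   # 68.0 dB -> class A, channels 2-6
    print(round(dbu(225), 1))    # 47.0 dB -> class B, channels 2-6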

For the limiting area of fringe service, the signal levels are defined as shown for class B service. The typical level of the sound signal is from 3 to 20 dB below the picture level, owing to the radiated sound power and antenna gain. The block diagram of a monochrome TV receiver for analog signal reception is shown in Fig. 21.4.2.


FIGURE 21.4.1 Frequency spectrum and channel designation for broadcast and cable television. (From EIA IS-6. Revised Aug./1992)



FIGURE 21.4.2 Fundamental block diagram, monochrome receiver. (From Benson & Whitaker, Ref. 2)

RECEIVERS FOR DIGITAL TELEVISION TRANSMISSIONS

From Analog to Digital: The Contrast

When comparing a block diagram of a typical receiver for digital transmission to that of a current analog receiver, the first impression is the difference in complexity. A digital receiver (Fig. 21.4.3) contains more functional blocks, several of which differ greatly from their analog counterparts and have a high degree of complexity, while others are similar to those of an analog set. The digital set contains more silicon ICs, as both memory and signal-processor devices, many of which are custom designs at this date, a factor that may increase the set cost by $200 to $300 compared to a baseline NTSC set having a similar display size.

One basic advantage of the digital format is the increase in information that can be transmitted in a standard channel (6 MHz for the United States). This leads to packing three to four programs into a channel or, in the case of the U.S. Advanced Television System Committee (ATSC) Standard, a single HDTV signal or a group of up to four standard-definition (SD) TV programs. Complexity here involves comparing the nearly 150,000 picture elements of an NTSC picture display to the 2 million picture elements of an HDTV display, an increase of 13 to 1. This equates to an RGB studio signal having 3 × 1080 active lines × 1920 samples per line × 8 bits per sample × 30 pictures per second, which equals approximately 1.5 Gb/s.3 By using video compression, especially MPEG-2, this can be reduced to a more reasonable value of 20 Mb/s for transmission as a TV signal.

Many private concerns (broadcast and cable) within the United States as well as other countries are not interested in using digital transmission as a carrier for HDTV pictures. Instead, their involvement with digital transmission is for satellite direct-to-home broadcast and for program coverage throughout the country, including a more robust signal in all areas for mobile and personal portable use. These diverse requirements have led to differing, optimized digital RF modulation schemes. For example, the ATSC system uses a vestigial-sideband (VSB) system, quadrature phase-shift keying (QPSK) has been selected for satellite-to-home, quadrature amplitude modulation (QAM) is the method chosen for digital cable (D-CATV), and for terrestrial area coverage in several countries outside the United States the orthogonal frequency division multiplex (OFDM) multicarrier modulation scheme has been selected. Each of these leads to differences in the receiver configuration, as will be described later.

With current analog TV signal transmission, as the distance from the transmitter to the receiver increases, the picture becomes progressively noisier (snowy) until it is judged "unwatchable," although the sound might still be acceptable. With digital signals and their high degree of data compression, however, the picture remains crystal clear and noise-free until a certain distance from the transmitter (a deterioration in signal-to-noise ratio of 1 to 2 dB), at which point the picture suddenly breaks up and is completely lost (brick wall effect,




sometimes referred to as the waterfall effect), as compared to the gradual deterioration of an analog TV signal. A similar phenomenon can be observed in a satellite-to-home system: a storm cloud or leaves on trees can reduce the signal level to just below the threshold, at which point the received picture is lost completely, again a change of only 1 to 2 dB.

In an effort to overcome the deficiencies of the transmission channel, such as ghosts, noise, fades, and co-channel interference, the digital signal is encoded with forward error correction (FEC), often called channel coding, at the transmitter end. This necessitates complementary decoding at the receiving end, as shown in Fig. 21.4.4. The various types of encoding used with current digital TV systems are Reed-Solomon, Viterbi, trellis, interleaving, and convolutional, or a combination of these (a concatenated coding system), depending on the robustness of the channel (cable, satellite, or terrestrial).4 For a cable system, this additional signal processing ensures high-quality service throughout the entire system, with no signal deterioration at the extremities.

The final difference to be discussed is the mapping of the demodulated video signal onto the display means (CRT, LCD, and so forth). Typically, this will be one-to-one; that is, the display system will be designed to match the specification (scan lines and pixels or samples per line) of the decoded video. This is true except for the ATSC system, where five distinctly different video formats are allowed for transmission. In the receiver, these must be converted to the natural parameters of the display device (display field rate, interlaced or progressive display, lines per field, and samples per line). Details will be covered in a later section.

Set-top boxes and high-definition (HD) or digital-ready TV sets will be the mechanism that brings digital technology to the consumer for the next several years as the transition from analog to digital takes place. Currently, within the United States, three of the modulation techniques to be discussed later have become "standards" in particular applications, i.e., VSB for terrestrial, QAM for cable, and QPSK for direct-to-home satellite. Although the ability to design a TV set that can accommodate all three exists, the cost to the consumer would be prohibitive. To include a range of sets in the retail product line to handle each individual application might be cost-effective, but it would be prohibitive to the retailer as well as very confusing to the customer. In order to achieve the flexibility needed by customers, who have changed their TV delivery service frequently over the years, a set-top box unique to the service has become the answer. Not only does this solve the input signal demodulation problem in an economical way, but the output of each box also connects to a standard, existing NTSC receiver, thereby allowing the customer to use an old set with the new service. Typically, for cable and satellite, these boxes are rented from the local cable provider or the retail outlet that sells and installs the satellite system hardware. Recent set-top boxes from at least two manufacturers have included the dual function of being a terrestrial HDTV decoder as well as a satellite decoder for the DirecTV system. This eases the "Which box should I buy?" decision for the consumer.

FIGURE 21.4.3 Block diagram of typical receiver for digital TV transmissions.

FIGURE 21.4.4 Basic elements of a digital TV system.
Signals available from these boxes include composite video (CVBS) and S-video (Y/C) for standard TV receivers, and high-resolution component video (Y, Pb, Pr) and RGB to drive HDTV-ready monitor receivers. Although new, fully integrated digital HDTV sets are available on the market, their price is still considerably above the standard market level for a TV set. The most popular designs currently available are the "HD-ready TVs," which have upgraded display capability consisting of wider-bandwidth video (up to 30 MHz), progressive scan, a higher-resolution CRT or projection display mechanism, and input connectors and wide-band circuitry that accept progressive (2fH) video signals or interlaced (1fH) signals and convert the latter to progressive. A block diagram of an HD-ready receiver design is shown in Fig. 21.4.5.

FIGURE 21.4.5 HD-ready TV receiver block diagram.
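The 1.5-Gb/s studio figure quoted earlier, and the compression burden it implies, can be verified with a two-line calculation (a sketch using the numbers given above):

    # Sketch: raw HDTV studio bit rate and compression factor to a 20-Mb/s channel.
    raw_bps = 3 * 1080 * 1920 * 8 * 30   # RGB x lines x samples x bits x pictures/s
    print(raw_bps / 1e9)                 # ~1.49 Gb/s, the ~1.5 Gb/s quoted above
    print(round(raw_bps / 20e6))         # ~75:1 compression needed for 20 Mb/s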

QAM DIGITAL MODULATION

QAM digital modulation has been found to be advantageous for cable systems because of its simplicity, robustness, and ability to deliver high-quality video signals in a 6-MHz channel over a hybrid fiber/coaxial network to subscriber homes, where the signals are received via set-top boxes. The QAM signal can be considered a double-sideband suppressed-carrier amplitude-modulation scheme. The input data bit stream is split into two independent data streams, one modulating the in-phase carrier while the other modulates the quadrature carrier component. Higher-order systems (M), e.g., M = 16, 64, . . . , contain additional sets of carriers that are evenly phase-spaced from the others.5 Figure 21.4.6 shows a block diagram of a QAM receiver that can decode either the usual 64-QAM (20 Mbit/s) or the higher-information-density 256-QAM signal (40 Mbit/s).6
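The splitting of the bit stream over the I and Q rails can be illustrated with a small 16-QAM mapper and slicer. This is a sketch of the principle only; Gray coding, pulse shaping, and the exact constellations of the cable standards are omitted:

    # Sketch: 16-QAM mapping of a bit stream onto I and Q components.
    LEVELS = [-3, -1, 1, 3]                        # four amplitude levels per rail

    def qam16_map(bits):
        """Map each group of 4 bits to one (I + jQ) symbol."""
        symbols = []
        for k in range(0, len(bits), 4):
            i = LEVELS[2 * bits[k] + bits[k + 1]]      # first two bits -> I rail
            q = LEVELS[2 * bits[k + 2] + bits[k + 3]]  # last two bits  -> Q rail
            symbols.append(complex(i, q))
        return symbols

    def qam16_slice(symbol):
        """Receiver decision: quantize I and Q to the nearest level."""
        nearest = lambda v: min(LEVELS, key=lambda lvl: abs(v - lvl))
        return complex(nearest(symbol.real), nearest(symbol.imag))

    tx = qam16_map([1, 0, 1, 1, 0, 0, 0, 1])
    rx = [qam16_slice(s + complex(0.2, -0.3)) for s in tx]   # mild channel noise
    assert rx == tx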



FIGURE 21.4.6 Block diagram of a QAM receiver. (From Ref. 6)




The tuner is of conventional TV tuner design with somewhat tighter specs for local oscillator phase noise and hum modulation. An AGC amplifier maintains a constant signal level to the second mixer, which transposes the IF signal to baseband, where it is low-pass filtered to remove all frequencies that would cause aliasing in the analog-to-digital (A/D) converter. The signal is then demodulated into in-phase (I) and quadrature (Q) components and again low-pass filtered. Because free-running oscillators were used in earlier stages, a carrier phase rotation corrector stage is needed to realign the amplitudes of the I and Q components. This stage is part of a feedback loop that uses the equalized I and Q as inputs. An adaptive equalizer consisting of a feed-forward section and a decision-feedback section removes amplitude and phase distortion caused by reflections and imperfections in the previous filters and the transmitter upconverter. These equalizer stages are made up of programmable-tap FIR filter sections. Following equalization and symbol quantization, the forward error correction takes place. The data are then fed to MPEG-2 and AC-3 source decoders (not shown) for final processing.

Two somewhat modified implementations of QAM demodulators for cable TV application are given in recent literature.7,8 In the first, the QAM waveform is sampled at a low IF instead of baseband. This has led to a simplified hardware implementation, whereby QAM decoding, including equalization, is achieved on one IC chip. The second design, also intended for use in a set-top cable box, describes two ICs: one performs as a downconverter, containing the IF amplifier, local oscillator, quadrature demodulator, and AGC; the second contains antialias filters, A/D converters, digital I/Q demodulators, frequency- and time-domain equalizers, symbol-to-byte mapping, and Reed-Solomon forward error correction.

QPSK QUADRATURE-PHASE-SHIFT KEYING

Quadrature-phase-shift keying (QPSK) is the accepted digital modulation for the satellite-to-home application. This technique is used in direct broadcast satellite (DBS) systems in the United States (DirecTV and others), the European direct video broadcast (DVB) Eureka system, and the Japanese 8-PSK system. M-PSK is similar to M-QAM in that multiple phases of the carrier (M = 4 for QPSK, 8 for 8-PSK) are modulated by split bit streams. The modulation, however, is phase only, leading to a constant-amplitude RF signal. At the receiver, the process is similar to that for QAM, except that the decision leading to reconstruction of the transmitted bit stream is made on phase information only.5 A receiver block diagram and circuitry are therefore very similar to those shown earlier for the QAM case. Circuits that convert the signal to a baseband digital signal before demodulating the QPSK signal, as well as circuits using the alternate process of demodulating QPSK as an analog signal at IF and then doing the remaining signal processing digitally, have been built.9,10 In the former, extensive analog circuitry is used after the tuner to accomplish antialias band filtering, AGC, and A/D conversion. A block diagram is shown in Fig. 21.4.7a. The block diagram of the second approach, Fig. 21.4.7b, is similar to that shown for a QAM receiver in Ref. 7, except that the QPSK demodulator, consisting of a quadrature demodulator with a 4fIF local oscillator, is located in the first IC, with the outputs to the A/D converters being baseband analog I and Q signals. In both cases, the QPSK demodulation is followed by channel decoding, which includes filtering, deinterleaving, and FEC, usually of the Reed-Solomon type. Synchronization either to the carrier or to the recovered I and Q signals (Fig. 21.4.8) is also an important feedback function contained in most receiver designs of this type.11

FIGURE 21.4.7 Receivers for QPSK transmission: (a) demodulation after A/D conversion; (b) demodulation prior to A/D conversion. (From Refs. 9 and 10)

FIGURE 21.4.8 Demodulator concept having both carrier and clock recovery. (From Ref. 11)
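The phase-only decision can be sketched in a few lines. The phase grid used here (0°, 90°, 180°, 270°) is one common choice; practical systems may use a 45°-offset constellation:

    # Sketch: QPSK receiver decision based on phase information only.
    import cmath, math

    def qpsk_decide(symbol):
        """Quantize the received phase to the nearest of four carrier phases."""
        phase = cmath.phase(symbol) % (2 * math.pi)
        return round(phase / (math.pi / 2)) % 4    # symbol index 0..3

    ideal = cmath.exp(1j * math.pi / 2)            # transmitted symbol index 1
    faded = 0.3 * ideal * cmath.exp(1j * 0.2)      # attenuated and slightly rotated
    print(qpsk_decide(faded))                      # -> 1: amplitude loss is harmless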

ORTHOGONAL FREQUENCY DIVISION MULTIPLEX

Orthogonal frequency division multiplex (OFDM) can be thought of as a multiple-carrier version of QAM in which the individual carriers are equally spaced in frequency across the channel bandwidth. The input data stream is split into parallel blocks of symbols, each of which modulates a separate carrier. The carriers are then summed and transmitted. Owing to the orthogonality of the carriers, the sampled sum of the carriers is effectively the inverse discrete Fourier transform of the input sequence. This parallel-transmission, or multiple carrier modulation (MCM), technique avoids several problems, such as fading and impulse noise, that affect single carrier modulation (SCM) systems. At the receiver end, the signal is down-converted and sampled at the appropriate frequency, locked to the transmitted signal, then passed to the discrete Fourier transform demodulator where the symbols are recovered.5


FIGURE 21.4.8 Demodulator concept having both carrier and clock recovery. (From Ref. 11)

Figure 21.4.9 shows a block diagram of the classical OFDM system for television, including transmitter and receiver. At the transmitter the processes that occur prior to the IFFT are done in the frequency domain, while those after the IFFT are in the time domain. At the receiver, the process is complementary, with the processes ahead of the FFT being in the time domain and those after the FFT being in the frequency domain.12

OFDM modulation has been selected for digital terrestrial TV broadcast (dTTb) in Europe, not only for fixed-location reception but also for mobile and portable applications. In the United States, some factions vigorously pushed for OFDM as opposed to the Grand Alliance vestigial-sideband system, which will be covered later in this section.

An OFDM receiver that follows the block diagram shown previously is shown in Fig. 21.4.10. This design features a single IC chip that contains both the analog and digital circuits for implementing much of the OFDM demodulation. The analog part contains an antialiasing filter, an AGC stage, and an A/D converter, which delivers 8-bit samples to the digital part of the IC. The digital part of the chip contains a signal-detection unit that aids start-up and channel change and provides AGC; an I/Q mixer for down-converting the signal to baseband; and an I/Q resampler where carrier and sampling-clock frequency adjustments are made. The FFT unit can perform either 2k or 8k demodulation. The parallel symbols are then converted to a serial bit stream and sent through Viterbi and Reed-Solomon error decoding. The output of the chip consists of a bit stream that is sent to the MPEG-2 decoder for source decoding.
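The transform relationship at the core of OFDM is easy to demonstrate. The sketch below uses the 2k carrier count of the mode mentioned above and omits the guard interval, pilots, and channel effects entirely.

import numpy as np

N = 2048  # 2k mode: one complex symbol per carrier
rng = np.random.default_rng(0)
carriers = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK on each carrier

tx = np.fft.ifft(carriers)   # transmitter: the sum of modulated carriers = IFFT
rx = np.fft.fft(tx)          # receiver: the FFT recovers the carrier symbols

assert np.allclose(rx, carriers)  # exact up to floating-point error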

VESTIGIAL SIDE-BAND MODULATION

Eight-level vestigial side-band modulation (8-VSB) was proposed by the Grand Alliance in 1993. A testing phase was completed, and the FCC accepted the system in 1995 as the standard for terrestrial digital television, including both high and standard definition, for the United States. The tests showed that the proposed system was robust not only to in-channel noise, ghosts, and reflections, but also to the coexistent NTSC stations that would be broadcasting on the same and adjacent channels during the years of transition from analog to digital. In extensive field tests made in the Charlotte, N.C., area in 1994, the VSB system, using a radiated signal 12 dB lower than the NTSC broadcast, outperformed NTSC by a significant margin.14


FIGURE 21.4.9 OFDM system diagram. (From Ref. 12)


FIGURE 21.4.10 DVB-T OFDM receiver block diagram. (Redrawn from Ref. 13)


FIGURE 21.4.11 VSB and NTSC RF spectra. (From Ref. 14)

The actual calculated and measured margin at which picture or sound deterioration takes place in the presence of white noise favors 8-VSB over NTSC by 19 dB. The spectra of VSB and NTSC are similar in that both completely fill the 6-MHz channel and both use a carrier located near the lower channel edge. In VSB, however, the spectrum of the modulation within the channel is nearly flat and uniform, in contrast to NTSC's chroma subcarrier and sidebands and the FM sound carrier at the upper end of the channel. A comparison is shown in Fig. 21.4.11.

Much testing has also been done on cable systems of a 16-VSB modulation system that has a throughput of nearly 40 Mbit/s, double that of the 8-VSB system.15 The discussions between the Advanced Television Systems Committee and the cable industry to set one standard for HDTV transmission appear to be nearing the consensus stage. To date, most TV manufacturers have developed and marketed digital TV sets capable of decoding the full HDTV 8-VSB standard. The blueprint for the system and prototype hardware have been reported in numerous technical publications.14-17

A simple block diagram of the receiver is shown in Fig. 21.4.12. A similarity can be seen between these major blocks and those shown for the digital TV receivers described earlier, especially the OFDM receiver. Trellis decoding by Viterbi means and Reed-Solomon FEC is a dominant part of each of the receivers. Each has some means of equalizing the channel to correct for ghosts and bursts. The VSB system, however, uses only the I component for data recovery and therefore needs only a single A/D converter and channel equalizer instead of the two matched units used in other systems.

While the other systems synchronize by using the demodulated data symbols and therefore need quadrature correction circuitry, the VSB system has three transmitted mechanisms for synchronizing the receiver to the transmitter. The first is the pilot carrier located 0.31 MHz in from the lower band edge. A frequency/phase-locked loop (FPLL) in the receiver (Fig. 21.4.13) establishes synchronization to this carrier. A noncoherent AGC feedback adjusts the gain of the IF amplifier and tuner to bring the signal into the range of the A/D converter. Repetitive data segment syncs, consisting of four symbols per segment, provide the second synchronizing means. These sync pulses are detected from among the synchronously detected random data by a narrow-bandwidth filter. A feedback loop (Fig. 21.4.14a) then creates a properly phased 10.76-MHz symbol clock along with a coherent AGC control voltage that locks the proper demodulated signal level to the A/D converter. The third mechanism compares the received data field segment with a set of ideal segments (field 1 and field 2) contained in a frame sync recovery circuit within the receiver. This circuit is shown in Fig. 21.4.14b.

Following this, a circuit determines whether there is a co-channel NTSC signal. If so, the NTSC interference rejection comb filter, a one-tap linear feed-forward filter that has nulls nearly corresponding to the frequencies of the NTSC video carrier, chroma subcarrier, and aural carrier, is switched in (Fig. 21.4.15). This filter degrades white-noise performance by 3 dB and will not be included in receivers after all NTSC transmitters are silent.
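Such a comb can be sketched directly. The 12-symbol delay used here is an assumption based on published Grand Alliance descriptions; the text above specifies only a one-tap linear feed-forward filter with nulls near the three NTSC carriers.

def ntsc_reject_comb(x, delay=12):
    """One-tap feed-forward comb filter: y[n] = x[n] - x[n - delay].
    Nulls fall at multiples of (symbol_rate / delay); with delay = 12 at the
    10.76-MHz symbol rate they land near the NTSC video, chroma, and aural
    carriers (the delay value is an assumption)."""
    return [xn - (x[n - delay] if n >= delay else 0.0) for n, xn in enumerate(x)]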


FIGURE 21.4.12 VSB Receiver. (From Ref. 15)


FIGURE 21.4.13 Tuner––IF frequency/phase lock loop. (From Ref. 15)

Following the NTSC interference rejection filter is a channel equalizer that compensates for linear channel distortions such as tilt and ghosts. The prototype system used a 64-tap feed-forward transversal filter followed by a 192-tap decision-feedback filter. A least-mean-square (LMS) algorithm compares the transmitted training signal, pseudonoise sequences that are a part of the data field sync, with a stored image of the training signal, the error being fed back to set the tap coefficients (a minimal LMS sketch follows the lock-up sequence below). Once equalization is achieved at this level, the circuit can lock onto either the data symbols throughout the frame or the data itself for further fine-tuning of the ghost canceling. Airplane flutter is usually too rapid for a full tap evaluation and is therefore handled by the latter technique. A block diagram of the equalizer is shown in Fig. 21.4.16.

The next block in the receiver chain is a phase-tracking loop (Fig. 21.4.17), which tracks out phase noise that has not been removed by the tuner-IF PLL operating on the pilot carrier. This circuit consists of a digital filter that constructs a Q signal from the existing I signal. These signals are then used to control a complex multiplier, or phase derotator. It has been reported that the 8-VSB receiver system, consisting of the front-end FPLL and the phase-tracking circuit, can compensate for phase noise of -77 dBc/Hz at a 20-kHz offset from the carrier.

The next block provides deinterleaving of the 12-symbol intersegment code interleaving that was applied in the transmitter. At the same time, trellis decoding takes place in the structure shown in Fig. 21.4.18. Here, one trellis decoder is provided for each branch, although in more recent designs a single trellis decoder is used in a time-multiplexed fashion to reduce IC complexity.18,19 Following trellis decoding, Reed-Solomon decoding takes place. At this point, channel decoding is complete and the data are ready to be split into the appropriate audio and video packets and sent to the source-decoding circuitry.

The receiver-to-transmitter lock-up and signal-decoding process takes place in the following sequence:16

1. Tuner first local oscillator synthesizer acquisition
2. Noncoherent AGC reduces unlocked signal to within A/D range
3. Carrier acquisition (FPLL)
4. Data segment sync and clock acquisition
5. Coherent AGC of signal (IF and RF gains properly set)
6. Data field sync acquisition
7. NTSC rejection filter insertion decision made
8. Equalizer completes tap adjustment algorithm
9. Trellis and RS data decoding begins
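As promised above, here is a minimal LMS sketch for the feed-forward section alone. The 64-tap length follows the prototype; the step size and training-signal format are assumptions, and the 192-tap decision-feedback section is omitted.

import numpy as np

def lms_feedforward(received, training, ntaps=64, mu=1e-3):
    """Adapt a transversal (FIR) equalizer so its output matches a stored
    image of the training signal; the error drives the tap update."""
    w = np.zeros(ntaps)
    for n in range(ntaps, len(training)):
        window = received[n - ntaps:n][::-1]  # most recent sample first
        err = training[n] - w @ window        # error vs. stored training signal
        w += mu * err * window                # LMS tap-coefficient update
    return w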


FIGURE 21.4.14 (a) Segment sync and symbol clock recovery; (b) data frame sync recovery. (From Ref. 14)

SOURCE DECODING

Source decoding consists of decoding the AC-3 audio and the MPEG-2 video, which had been encoded at the transmitter using the main profile at high level (MP@HL) specification.3,16,20 A block diagram of an MPEG-2 decoder is shown in Fig. 21.4.19. Here video frames are created from the compressed packet data and stored in frame memory.


FIGURE 21.4.15 (a) NTSC interference rejection filter; (b) comb filter spectrum. (From Ref. 14)

The decoded video is then read out and passed on to the display circuitry of the receiver in whatever format is required by that display.

The final piece of the TV receiver system is the display. The drive requirements of the display do not necessarily match the format of the decoded video signal. In fact, the ATSC Standard permits the transmission of any one of numerous video formats (Table 21.4.1).16 The requirement for this section of the receiver, therefore, is to scan convert, or format convert, the decoded video from the MPEG-2 decoder into the form needed by the display. Typically, this is accomplished by loading the video into RAM-type memory (a frame buffer) and clocking the video out at the rate and in the format needed by the display (pixels per line, lines per frame or field, progressive or interlaced fields).21 In the case where there is not a 1:1 match between the total number of video pixels in a frame and the display pixels, e.g., when displaying an HDTV signal on a standard NTSC-type SDTV display, an intermediate step of data interpolation, frame storage, and smoothing is needed (Fig. 21.4.20).
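As a minimal illustration of the frame-buffer conversion just described, the sketch below resamples a decoded frame to the display raster by nearest-neighbor indexing only; practical converters add the interpolation and smoothing just mentioned.

def format_convert(frame, out_lines, out_pixels):
    """Resample a decoded frame (a list of pixel rows) to the display raster
    by nearest-neighbor indexing into the frame buffer."""
    in_lines, in_pixels = len(frame), len(frame[0])
    return [[frame[r * in_lines // out_lines][c * in_pixels // out_pixels]
             for c in range(out_pixels)]
            for r in range(out_lines)]

sd_frame = format_convert([[0] * 1920 for _ in range(1080)], 480, 704)  # HD to SD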


FIGURE 21.4.16 VSB receiver equalizer. (From Ref. 3)


FIGURE 21.4.17 Phase-tracking loop block diagram. (From Ref. 15)

Often, noise reduction and motion compensation are included in this step. In some newer designs a considerable saving in memory requirements can be achieved when the down-conversion of the video information is accomplished within the MPEG-2 decoding process22,23 (Fig. 21.4.21).

The video signals in the display section are usually in the Y/C format, then converted to Y, Pb, Pr format, and finally to analog R, G, B format, especially if driving a direct-view CRT or CRT projection display. The parameters of the final video signals for several of the more popular ATSC display formats of Table 21.4.1 are shown in Table 21.4.2. Figure 21.4.22 gives a comparison of bandwidth requirements for various values of horizontal picture resolution. It is interesting to note that although the video channel bandwidth requirements are identical for the 1080i and 720p systems, the picture resolution of the two systems differs by a factor of 1.5.

FIGURE 21.4.18 Intersymbol code de-interleaver and trellis decoders. (From Ref. 15)


FIGURE 21.4.19 MPEG-2 video decoder block diagram. (From Ref. 3)


TABLE 21.4.1 ATSC Digital Television Standard Video Formats (From Ref. 16)

Vertical lines    Pixels    Aspect ratio     Picture rate
1080              1920      16:9             60I, 30P, 24P
720               1280      16:9             60P, 30P, 24P
480               704       16:9 and 4:3     60P, 60I, 30P, 24P
480               640       4:3              60P, 60I, 30P, 24P

FIGURE 21.4.20 The traditional down-conversion method in the pixel domain. (From Ref. 22)

FIGURE 21.4.21 HD to SD low cost decoder. (From Ref. 23)


TABLE 21.4.2 Picture Parameters, Video Bandwidth Requirements, and Scan Frequencies for Several Display Systems

System              Type   Active elements   Total elements   Rate per second         Video bandwidth* (MHz)   Horizontal frequency (kHz)
HDTV                1080i  1920 H x 1080 V   2200 x 1125      30 frames (60 fields)   37.125                   33.75
HDTV                720p   1280 H x 720 V    1650 x 750       60 frames               37.125                   45
NTSC (16:9) (SDTV)  525i   704 H x 480 V     858 x 525        30 frames (60 fields)   13.5                     15.75
VGA (16:9)†         480p   704 H x 480 V     858 x 525        60 frames               27                       31.5

*Video bandwidth values are for the Nyquist criterion.
†Similar to ITU-R BT.601-4, which uses only 704 of 720 pixels and only 480 of 483 lines.

Discussion of how much resolution is really necessary to sell HDTV to the public has been going on for the past 10 years, and may continue. Figure 21.4.23 shows the results of a study made on several integrated HD and HD-Ready TV sets now available in the marketplace. These sets were driven at the video inputs with a 1080i signal, and each either displayed full resolution or down-converted to the native scan/video capability of the receiver.

FIGURE 21.4.22 High-definition picture resolution vs. video bandwidth.


FIGURE 21.4.23 Picture resolution performance of several HD and HD-Ready TV sets.


FIGURE 21.4.24 Flow diagram of the AC-3 decoding process. (From Ref. 25)


FIGURE 21.4.25 AC-3 audio decoder. (From Ref. 25)

The major diagonal line on the chart represents the 1080i locus, with full HD resolution at one end and NTSC at the other. In the case of several receivers, severe aliasing of the multiburst pattern caused the observed picture to be judged to have a lower frequency value than the design intent. Second-generation designs will most likely correct these deficiencies.

The audio that had been encoded at the transmitter using the AC-3 specification is decoded into one to six audio channels, all at line level (left, center, right, left surround, right surround, and low-frequency enhancement, or subwoofer25). Since the low-frequency enhancement channel has an upper bandwidth limit of 120 Hz, it is usually not counted as a complete channel, but only as 0.1 channel, leading to the designation of AC-3 as having 5.1 channels. It is not necessary for a receiver to decode all the available channels. A monophonic receiver need provide only one output audio signal; in this case, the receiver's decoder will down-mix the six channels into one. A popular decoder design provides a down-mix of six channels into two audio outputs for use in lower-cost stereo TV sets.

The AC-3 bit stream is composed of frames, each containing sync, encoding/decoding information, and six blocks of audio data. These frames are decoded using the process shown in Figs. 21.4.24 and 21.4.25. Each frame can be decoded as a series of nested loops in which each channel can be handled independently. This process can be accomplished in an audio DSP IC. One implementation for the full 5.1-channel output uses 6.6K of RAM, 5.4K of ROM, and 27.3 MIPS. A lower-cost two-channel implementation requires the same amount of ROM and nearly the same MIPS, but only 3.1K of RAM.24 PCM-to-analog (D/A) converters, amplifiers, level and balance controls, and speakers complete the audio system.
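A two-channel down-mix of the kind just described can be sketched as follows. The -3 dB (0.707) center and surround mix levels are assumed typical values rather than taken from a stream, and the LFE channel is omitted from the mix.

def downmix_to_stereo(left, right, center, ls, rs, c_lev=0.707, s_lev=0.707):
    """Fold 5.1 channels into two outputs (Lo/Ro); inputs are equal-length
    lists of PCM samples. Mix levels here are assumptions; an actual AC-3
    stream signals its own mix coefficients."""
    lo = [l + c_lev * c + s_lev * s for l, c, s in zip(left, center, ls)]
    ro = [r + c_lev * c + s_lev * s for r, c, s in zip(right, center, rs)]
    return lo, ro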

DISPLAYS

Liquid-Crystal Displays (LCDs)

Use of both monochrome and color LCDs has become popular, especially in small personal portable television receivers. The operation of these devices is not limited by the high-voltage requirements of conventional CRTs. Instead, the picture raster is constructed of a rectangular MOS switching matrix of from 240 to 600 horizontal elements and from 200 to 400 vertical elements.26 The gates of all the thin-film transistors (TFTs) in a given horizontal row are connected to a common bus (Fig. 21.4.26).


FIGURE 21.4.26 LCD television picture display. (From Benson & Whitaker, Ref. 2)

Likewise, the drains of all transistors in a vertical column are connected to a common bus. Vertical scan (row addressing) is produced by sequentially driving the gate buses from the shift register. Horizontal scan, which carries the video information (column addressing), is somewhat more difficult because of the stray capacitance and cross-under resistance associated with the drain bus. A given line of video is broken into the same number of pieces as there are pixels in a horizontal row and stored in the sample-and-hold (S/H) stages, which all drive their respective drain bus lines simultaneously, thus creating a line-sequential display. The information on a drain is therefore changed only once per horizontal period (63.5 μs).

A color LCD contains a repeating sequence of red, green, and blue filters covering adjacent pixels of a horizontal row.27 The sequence is offset by one pixel in adjacent rows. The video and chroma signals are decoded and matrixed in the conventional manner. The R-G-B signals are then clocked into the line S/H stages in the appropriate sequence.

LARGE-SCREEN PROJECTION SYSTEMS

The display for picture sizes of up to 36-in. diagonal usually consists of a direct-view CRT. For pictures above 36 in., a number of newer display and projection technologies have become popular for domestic use.

Plasma display panel (PDP) systems are a relatively new type of direct-view display. Typical sizes have been 40 in. diagonal with a 4:3 aspect ratio, and 42 and 60 in. diagonal with a 16:9 aspect ratio. The major advantage of a PDP is that it has a depth of only a few inches; the product has been touted as the "picture on the wall." The structure consists of two pieces of glass separated by an insulating structure in the form of small pockets, cells, or stripes. Each cell is filled with an ionizing gas such as neon and xenon. On the facing sides of the glass plates are metal electrodes, vertical (column) on one and horizontal (row) on the other (Fig. 21.4.27).


FIGURE 21.4.27 Cross-section of one pixel of an AC plasma display panel. (From Ref. 28)

When a voltage of several hundred volts is applied to a given pair of row and column electrodes, the gas in the corresponding cell ionizes, giving off ultraviolet light and thus exciting the color phosphor deposited on the glass plate. Since a cell is either on or off, pulse modulation is used to obtain a shade of gray or a desaturated color. This is accomplished in the driving circuitry by slicing each video field into 8 to 10 subfields and then driving all cells at the subfield rate. The 60-in. unit possesses a resolution of 1280 x 720 (HDTV quality) and has a 500-to-1 contrast ratio in a dark room. The 42-in. units have 852 x 480 or 1024 x 1024 pixel counts. Light output of over 500 cd/m2 has been measured. Although this technology has been shown to produce bright, outstanding pictures, the consumer price is still somewhat higher than that of any of the other display systems at this time.

CRT projection systems consist of three cathode-ray tubes, with a typical raster diagonal of 3 to 5 in., that produce three rasters in red, green, and blue, driven by the respective R, G, and B drive signals. These images are projected through three wide-aperture lenses either to a highly reflective screen having a diagonal dimension of 50 in. to 8 or 10 ft (front projection) or to the back of a diffused translucent screen having a diagonal dimension of 40 to 70 in. (rear projection) (Fig. 21.4.28). By careful adjustment of the deflection and orientation of the CRTs and lenses, the rasters are brought into precise registry and geometric congruence.29,30 Various surfaces (typically two or four) of the rear-projection screen are impressed with patterns of fine-pitch grooves that form lens elements to "focus" or control the direction of the light as it leaves the screen. Medium screen gains (three to six) are preferred and can be designed for more uniform image brightness over a wider viewing angle. Currently, brightness levels of 600 cd/m2 can be achieved with rear-projection systems having a picture size of 35 to 50 in. measured diagonally. Since the projection system has no shadow mask in its electro-optical path, it can achieve better resolution than a conventional large-screen direct-view color CRT. At this time, CRT projection is the preferred display system for large-screen HDTV.
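Returning to the PDP gray-scale method described above, the subfield pulse modulation lends itself to a short illustration. Binary weighting of the subfields is an assumption here; the text says only that each field is sliced into 8 to 10 subfields.

def subfield_states(gray_level, n_subfields=8):
    """Decompose an n-bit gray level into per-subfield on/off cell states,
    least-significant (shortest) subfield first, assuming binary weighting."""
    return [(gray_level >> k) & 1 for k in range(n_subfields)]

print(subfield_states(173))  # [1, 0, 1, 1, 0, 1, 0, 1] since 173 = 0b10101101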

FIGURE 21.4.28 Mechanical arrangement of large-screen rear-projection receiver.


FIGURE 21.4.29 Typical LCD projection system.

FIGURE 21.4.30 Optical system using a digital micromirror device. (From Ref. 33)


LCD projection systems use three LCD panels, each measuring 1 to 2 in. diagonal. A single high-intensity lamp illuminates a series of color-selective dichroic mirrors that split the light into three paths. Each light path passes through its respective LCD panel (red, green, and blue). The three light paths are then combined into one by another series of dichroic mirrors and passed through a lens that projects the image onto either a front or a rear screen (Fig. 21.4.29). Light output of the unit can be improved by adding a collimating element consisting of several microlenses just in front of the illuminating lamp.31 As in the case of CRT projection, precise alignment of the LCD panels and mirrors is essential to register the three color images.

Another novel design, which has not yet been universally adopted, uses only two LC panels, one black-and-white (B/W) and one three-color unit. This simplifies the mechanical design, uses fewer components, and has simpler convergence alignment. The B/W panel supplies the brightness to make up for the low transparency of the color panel. Video signals to the two panels are matrixed in a unique manner to yield correct color performance.32 LCD projectors currently exist as home-theater (front projection) units, as rear-projection self-contained large-screen TV sets, and as small portable units weighing 5 to 10 lb with light output of 500 lm to greater than 1000 lm for classroom use.

Digital micromirror device (DMD) display systems, also called digital light processing (DLP), use a semiconductor IC that contains an array of small mirror elements mounted on its surface, one for each pixel. When an electrical charge is applied to the substrate under a mirror, the mirror tilts by +10° (ON) or -10° (OFF). In the +10° position, the light from a high-intensity lamp is reflected through a lens and projected onto the viewing surface. To obtain shades of gray, the electrical charge to the mirrors is pulse modulated. Two configurations of this basic architecture have been developed. In the first, three DMD ICs, one for each color, are used; the light splitting, combining, and convergence of the three images are similar to those described for the LCD projector. The second configuration, now becoming more popular, uses a small six-segment color wheel in the light path (Fig. 21.4.30).33 The video signal sent to the DMD is in a field-sequential format in which the DMD is activated with the red picture image while the light is passing through the red portion of the color filter wheel, and likewise for blue and green. Currently, DMDs having a picture definition of 1280 x 720 pixels are being used in HDTV applications.34,35 Because of the light weight of the mechanism, DLP projectors are also becoming popular as portable units for classroom and traveling use. (DMD and DLP are trademarks of Texas Instruments.)

REFERENCES

1. Farmer, J. The Joint EIA/NCTA Band Plan for Cable Television, IEEE Trans. Consum. Electr., August 1994, pp. 503–513.
2. Benson, K. B., and J. Whitaker (eds.) "Television Engineering Handbook," McGraw-Hill, 1992.
3. Hopkins, R. Chapter 13, HDTV Broadcasting and Reception, "Digital Consumer Electronics Handbook," McGraw-Hill, 1997, p. 13.7; also Digital Terrestrial HDTV for North America: The Grand Alliance HDTV System, IEEE Trans. Consum. Electr., August 1994, pp. 185–198.
4. Ghosh, M. Error Correction Schemes for Digital Television Broadcasting, IEEE Trans. Consum. Electr., August 1995, pp. 400–404.
5. Shi, Q. Chapter 5, Digital Modulation Techniques, "Digital Consumer Electronics Handbook," McGraw-Hill, 1997, pp. 5.1–5.79.
6. Bryan, D. QAM for Terrestrial and Cable Transmission, IEEE Trans. Consum. Electr., August 1995, pp. 383–391.
7. Lane, F., et al. A Single Chip Demodulator for 64/256 QAM, IEEE Trans. Consum. Electr., November 1996, pp. 1003–1010.
8. Haas, M., et al. Flexible Two IC Chipset for DVB on Cable Reception, IEEE Trans. Consum. Electr., August 1996, pp. 335–339.
9. Menkhoff, A., et al. Performance of an Advanced Receiver Chip for DVB-S and DSS, IEEE Trans. Consum. Electr., August 1999, pp. 965–969.
10. Haas, M., et al. Advanced Two IC Chipset for DVB on Satellite Reception, IEEE Trans. Consum. Electr., August 1996, pp. 341–345.
11. van der Wal, R., and L. Montreuil QPSK and BPSK Demodulator Chip-set for Satellite Applications, IEEE Trans. Consum. Electr., February 1995, p. 34.


12. Wu, Y., and W. Zou Orthogonal Frequency Division Multiplexing: A Multi-Carrier Modulation Scheme, IEEE Trans. Consum. Electr., August 1995, pp. 392–399.
13. Fetchel, S., et al. Advanced Receiver Chip for Terrestrial Digital Video Broadcasting: Architecture and Performance, IEEE Trans. Consum. Electr., August 1998, pp. 1012–1018.
14. Sgrignoli, G., et al. VSB Modulation Used for Terrestrial and Cable Broadcasts, IEEE Trans. Consum. Electr., August 1995, pp. 367–382.
15. Bretl, W., et al. VSB Modem Subsystem Design for Grand Alliance Digital Television Receivers, IEEE Trans. Consum. Electr., August 1995, pp. 773–786.
16. Advanced Television Systems Committee, Doc. A/54, Guide to the Use of the ATSC Digital Television Standard, October 1995.
17. Tsunashima, K., et al. An Integrated DTV Receiver for ATSC Digital Television Standard, IEEE Trans. Consum. Electr., August 1998, pp. 667–671.
18. Bryan, D., et al. A Digital Vestigial-Sideband (VSB) Channel Decoder IC for Digital TV, IEEE Trans. Consum. Electr., August 1998, pp. 811–816.
19. Lin, W., et al. A Trellis Decoder for HDTV, IEEE Trans. Consum. Electr., August 1999, pp. 571–576.
20. Shi, Q. Chapter 8, MPEG Standards, "Digital Consumer Electronics Handbook," McGraw-Hill, 1997, pp. 8.55–8.68.
21. Bhatt, B., et al. Grand Alliance HDTV Multi-format Scan Converter, IEEE Trans. Consum. Electr., November 1995, pp. 1020–1031.
22. Zhu, W., et al. A Fast and Memory Efficient Algorithm for Down-Conversion of an HDTV Bitstream to an SDTV Signal, IEEE Trans. Consum. Electr., February 1999, pp. 57–61.
23. Peng, S., and K. Challapali Low-Cost HD to SD Decoding, IEEE Trans. Consum. Electr., August 1999, pp. 874–878.
24. Vernon, S. Design and Implementation of AC-3 Coders, IEEE Trans. Consum. Electr., August 1995, pp. 754–759.
25. Advanced Television Systems Committee, Doc. A/52A, Digital Audio Compression Standard (AC-3), August 2001.
26. Kokado, N., et al. A Pocketable Liquid-Crystal Television Receiver, IEEE Trans. Consum. Electr., August 1981, Vol. CE-27, No. 3, p. 462.
27. Yomamo, M., et al. The 5-Inch Size Full Color Liquid Crystal Television Addressed by Amorphous Silicon Thin Film Transistors, IEEE Trans. Consum. Electr., February 1985, Vol. CE-31, No. 1, pp. 39–46.
28. Mercier, B., and E. Benoit A New Video Storage Architecture for Plasma Display Panels, IEEE Trans. Consum. Electr., February 1996, pp. 121–127.
29. Howe, R., and B. Welham Development in Plastic Optics for Projection Television Systems, IEEE Trans. Consum. Electr., February 1980, Vol. CE-26, No. 1, pp. 44–53.
30. Yamazaki, E., and K. Ando CRT Projection, Proceedings—Projection Display Technology, Systems and Applications, SPIE, Vol. 1081, January 19–20, 1989, pp. 30–37.
31. Ohuchi, S., et al. Ultra Portable LC Projector with High-Brightness Optical System, IEEE Trans. Consum. Electr., February 2000, pp. 221–226.
32. Lee, M.-H., et al. Hybrid LCD Panel System and Its Color Coding Algorithm, IEEE Trans. Consum. Electr., February 1997, pp. 9–16.
33. Ohara, K., and A. Kunzman Video Processing Technique for Multimedia HDTV with Digital Micro-Mirror Array, IEEE Trans. Consum. Electr., August 1999, p. 604.
34. Hutchison, D., et al. Application of Second Generation Advanced Multi-Media Display Processor (AMDP2) in a Digital Micro-Mirror Array Based HDTV, IEEE Trans. Consum. Electr., August 2001, pp. 585–592.
35. Suzuki, Y., et al. Signal Processing for Rear Projection TV Using Digital Micro-Mirror Array, IEEE Trans. Consum. Electr., August 2001, pp. 579–584.


CHAPTER 21.5

FACSIMILE SYSTEMS

Stephen J. Urban

INTRODUCTION

Facsimile is one of the original electrical arts, having been invented by Alexander Bain in 1842. In its long history it has prompted or helped accelerate the development of a variety of devices and methods, including the photocell, linear-phase filters, adaptive equalizers, image compression, television, and the application of transform theory to signals and images.

Facsimile systems have been used for a variety of services.1 The Wall Street Journal, USA Today, and other newspapers have used facsimile to distribute their newspapers electronically, so that they can be printed locally. Weather maps and satellite images are transmitted by other facsimile systems. The Associated Press uses a form of facsimile to send photographs to newspapers. The best-known facsimile system is the familiar one used in businesses and homes, called simply "fax," short for the Group 3 facsimile machine. The following discussion concentrates on Group 3, since it embodies techniques found in most other systems.

GROUP 3 FACSIMILE STANDARDS

Facsimile standards have been developed with international cooperation by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T, formerly the CCITT). The ITU-T is made up of a number of study groups, each working on a particular aspect of telecommunications standardization. Study Group 16 (SG16) develops facsimile standards (called "Recommendations" by the ITU-T). National bodies may have their own slightly different versions of the international standard. In the United States, the Telecommunications Industry Association TR-29 Committee was responsible for the development of standards for facsimile terminals and facsimile systems. TR-29, organized in the early 1960s, was a major contributor to the 1980 Group 3 Recommendation. Facsimile and modem standardization efforts have since been combined in the TIA TR-30 committee.

The ITU-T produced the first international facsimile standard in 1968, Recommendation T.2 for Group 1 facsimile. In North America a facsimile standard was used that was similar to Rec. T.2, but with enough differences that North American machines were not interoperable with those in the rest of the world. The Group 1 standard provided for a 6-min transmission of a nominal 210-mm by 297-mm page at a scanning density of 3.85 lines per mm. In 1976 the first truly international facsimile standard, T.3 for Group 2 facsimile, was published. A Group 2 facsimile machine transmitted a page in half the time of a Group 1 machine with about the same quality. Both of these machines were analog, and neither used image compression.

In 1980, the Group 3 standards were published. Group 3 provided significantly better quality and shorter transmission time than Group 2, accomplished primarily by digital image compression. The Group 4 standard followed in 1984, with the intent of providing higher quality (twice the resolution of Group 3), higher speed (via digital networks), and more functionality. Since 1984 work has continued on both the Group 3 and Group 4 standards, with most of the advances being made in Group 3.


Although Group 4 was intended to be the next-generation facsimile terminal, it is clear that Group 3 has equaled or surpassed Group 4 in terms of performance and functionality. Group 3 can now use the same compression algorithm as Group 4, can provide the same image quality, and can operate on digital networks, thereby matching the short transmission times of Group 4. Group 3 may actually be a little faster, because its protocol overhead is less. Both Group 1 and Group 2 facsimile are now obsolete; neither is included in most new facsimile terminal implementations. The following discussion concentrates on Group 3 facsimile, which represents the vast majority of facsimile machines in current use.

The Group 3 facsimile recommendations, ITU-T Recommendations T.4 and T.30,2,3 were first published in 1980. Recommendation T.4 covered the compression algorithm and modulation specifications, and Recommendation T.30 described the protocol. Group 3 was originally designed to operate on the general switched telephone network (GSTN) in half-duplex mode. The protocol specifies a standard basic operation that all Group 3 terminals must provide, plus a number of standard options. The various parameters and options that may be used are negotiated at 300 b/s, before the compressed image is transmitted at higher speed.

All Group 3 machines must provide the following minimum set of capabilities: a pel density of 204 pels/in. horizontally and 98 pels/in. vertically; one-dimensional compression using the modified Huffman code; a transmission speed of 4800 b/s with a fallback to 2400 b/s; and a minimum time per coded scan line of 20 ms (to allow real-time printer operation without extensive buffering). This requirement ensures that even the newest Group 3 facsimile machines can communicate with those designed to the 1980 Recommendations. The standard options defined in the 1980 Recommendations are the following: use of the modified READ code to achieve two-dimensional compression; a vertical pel density of 196 pels/in. to provide higher image quality; a higher transmission speed of 9600 b/s with a fallback to 7200 b/s; and a minimum coded scan-line time of zero to 40 ms. Many new options have been added over time; these are described in the following pages.

RESOLUTION AND PICTURE ELEMENT (PEL) DENSITY

The resolution of the scanners and printers used in facsimile apparatus (and the associated transmitted pel density) has a direct effect on the resulting output image quality. The highest pel density specified for the original Group 3 terminal was approximately 204 by 196 pels per 25.4 mm. The actual specification is 1728 pels per 215 mm by 7.7 lines/mm. This is referred to as "metric based" pel density. The Group 4 recommendations support the following standard and optional pel densities: 200 x 200, 240 x 240, 300 x 300, and 400 x 400 pels per 25.4 mm (referred to as "inch based" pel density).

Note that the pel densities specified for Group 3 are "unsquare," that is, not equal horizontally and vertically, while Group 4 pel densities are "square." This difference causes a compatibility problem. If the Group 3 pel densities were extended in multiples of their current values, for example to 408 x 392, then a distortion of approximately 2 percent horizontally and vertically would occur when communicating with a 400 x 400 "square" machine. The ITU-T has decided to accept this distortion and encourage a gradual migration to the square pel densities. Accordingly, Group 3 has been enhanced to include higher pel densities, comprising both multiples of the original Group 3 pel densities and "square" pel densities.

TABLE 21.5.1 Metric Based Pel Densities

                                             Number of picture elements along a scan line
Pel density (approximate)
(pels/25.4 mm)                  Tolerance    ISO A4          ISO B4          ISO A3
204 horizontal x 98 vertical    ±1%          1728/215 mm     2048/225 mm     2432/303 mm
204 horizontal x 196 vertical   ±1%          1728/215 mm     2048/225 mm     2432/303 mm
408 horizontal x 392 vertical   ±1%          3456/215 mm     4096/225 mm     4864/303 mm


TABLE 21.5.2 Inch Based Pel Densities

                                             Number of picture elements along a scan line
Pel density
(pels/25.4 mm)                  Tolerance    ISO A4             ISO B4             ISO A3
200 horizontal x 100 vertical   ±1%          1728/219.45 mm     2048/260.10 mm     2432/308.86 mm
200 horizontal x 200 vertical   ±1%          1728/219.45 mm     2048/260.10 mm     2432/308.86 mm
300 horizontal x 300 vertical   ±1%          2592/219.45 mm     3072/260.10 mm     3648/308.86 mm
400 horizontal x 400 vertical   ±1%          3456/219.45 mm     4096/260.10 mm     4864/308.86 mm

Specifically, optional Group 3 pel densities of 408 x 196, 408 x 392, 200 x 200, 300 x 300, and 400 x 400 pels per 25.4 mm have been added. These are summarized in Tables 21.5.1 and 21.5.2, which give the metric-based and inch-based pel densities, respectively, along with the number of pels per scan line for ISO A4, ISO B4, and ISO A3 page widths.

NOTE: An alternative standard pel density of 200 pels/25.4 mm horizontally x 100 lines/25.4 mm vertically may be implemented provided that one or more of 200 x 200 pels/25.4 mm, 300 x 300 pels/25.4 mm, and 400 x 400 pels/25.4 mm are included.

PROTOCOL

A facsimile call is made up of five phases, as shown in Fig. 21.5.1. In phase A (call establishment) the telephone call is placed, either manually or automatically by the facsimile terminal(s). In phase B (premessage procedure) the called station responds with signals identifying its capabilities. The calling station then sends signals to indicate the mode of operation to be used during the call (e.g., transmission speed, resolution, page size, image coding method).

FIGURE 21.5.1 The five phases of a facsimile call.


The option selection must be consistent with the capability set received from the called station. Phase C (in-message procedure and message transmission) includes both procedural signaling and the transmission of the facsimile image. Procedural signaling includes, for example, a modem training sequence from the sending station that allows the receiving station to adapt its modem to the telephone line characteristics. If the receiving station trains successfully and is ready, it responds with a confirmation-to-receive (CFR) signal. Upon receipt of the CFR, the sending station sends the facsimile image at the negotiated higher speed. As in phases B and D, all signaling is communicated at 300 b/s using high-level data link control (HDLC), before the compressed image is transmitted at higher speed. Phase D (postmessage procedure) includes end-of-message signaling, confirmation of the reception of the message, and transmission of further message information. In phase E the call is disconnected.
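Returning to phase B, the rule that the selected mode must be consistent with the received capability set amounts to an intersection-with-preference choice. The sketch below is illustrative only; the capability tuples are invented for the example, not the actual negotiation frames of Rec. T.30.

def select_mode(caller_preferences, called_capabilities):
    """Phase B sketch: pick the first mode in the caller's preference order
    that the called station also supports; None means no common mode."""
    for mode in caller_preferences:
        if mode in called_capabilities:
            return mode
    return None

# Hypothetical capability tuples: (speed in b/s, pel density)
mode = select_mode([(14400, "204x196"), (9600, "204x196"), (4800, "204x98")],
                   {(9600, "204x196"), (4800, "204x98")})
print(mode)  # (9600, '204x196')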

DIGITAL IMAGE COMPRESSION

One-Dimensional Coding Scheme––Modified Huffman Code (MHC)

A digital image to be transmitted by facsimile is formed by scanning a page from left to right and top to bottom, producing a bit map of picture elements. A scan line is made up of runs of black and white pels. Instead of sending bits corresponding to black and white pels, coding efficiency can be gained by sending codes corresponding to the lengths of the black and white runs. A Huffman procedure4 uses variable-length codes to represent the run lengths; the shortest codes are assigned to those run lengths that occur most frequently. Run-length frequencies are tabulated from a number of "typical" documents and are then used to construct the code tables.

True Huffman coding would require 2 x 1729 code words to cover all run lengths of 0 through 1728 in both colors on a scan line of 1728 pels. To shorten the table, the Huffman technique was modified for Group 3 facsimile to include two sets of code words, one for lengths of 0 to 63 (the terminating code table) and one for multiples of 64 (the make-up code table). Run lengths in the range of 0 to 63 pels use terminating codes. Run lengths of 64 pels or greater are coded first by the make-up code word specifying the largest multiple of 64 less than or equal to the run length, followed by a terminating code representing the difference. For example, a 1728-pel white line would be encoded with a make-up code of length 9 representing a run of length 1728, plus a terminating code of length 8 representing a run of length zero, resulting in a total code length of 17 bits (without the synchronizing code). When the code tables for Group 3 were constructed, images containing halftones were deliberately excluded, so as not to skew the code tables and degrade the compression performance for character-based documents.

The modified Huffman code is mandatory for all Group 3 machines, providing a basis for interoperability. It is relatively simple to implement and produces acceptable results on noisy telephone lines. In order to ensure that the receiver maintains color synchronization, all coded lines begin with a white run-length code word. If the actual scan line begins with a black run, a white run length of zero is sent. Black or white run lengths, up to a maximum length of one scan line (1728 picture elements, or pels), are defined by the code words in Tables 21.5.3 and 21.5.4. Note that there is a different list of code words for black and white run lengths.

Each coded line begins with an end-of-line (EOL) code. This is a unique code word that can never be found within a valid coded scan line; therefore, resynchronization after an error burst is possible. A pause may be placed in the message flow by transmitting FILL. FILL is a variable-length string of 0s that can be placed only after a coded line and just before an EOL, never within a coded line. Fill must be added to ensure that the transmission time of the total coded scan line is not less than the minimum transmission time established in the premessage control procedure. The end of a document transmission is indicated by sending six consecutive EOLs.
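The encoding loop itself is compact. The sketch below uses only a handful of entries from Tables 21.5.3 and 21.5.4; a real encoder loads the full tables, and run lengths outside this subset would simply fail here.

# Subset of the MH code tables (from Tables 21.5.3 and 21.5.4).
WHITE_TERM   = {0: "00110101", 1: "000111", 2: "0111", 3: "1000"}
BLACK_TERM   = {0: "0000110111", 1: "010", 2: "11", 3: "10"}
WHITE_MAKEUP = {64: "11011", 1728: "010011011"}
BLACK_MAKEUP = {64: "0000001111"}
EOL = "000000000001"

def mh_encode_line(pels):
    """pels: 0 = white, 1 = black. Coded lines always start with a white run,
    so a line beginning with black produces a zero-length white run code."""
    term, makeup = (WHITE_TERM, BLACK_TERM), (WHITE_MAKEUP, BLACK_MAKEUP)
    out, color, i = [EOL], 0, 0
    while i < len(pels):
        run = 0
        while i < len(pels) and pels[i] == color:
            run, i = run + 1, i + 1
        if run >= 64:
            out.append(makeup[color][(run // 64) * 64])  # largest multiple of 64
            run %= 64
        out.append(term[color][run])                     # terminating code
        color ^= 1
    return "".join(out)

bits = mh_encode_line([0] * 1728)  # an all-white line
print(len(bits) - len(EOL))        # 17 bits, matching the example in the text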

Two-Dimensional Coding Scheme—Modified READ (MR)

The MR coding method makes use of the vertical correlation between black (or white) runs from one scan line to the next (called vertical mode). In vertical mode, transitions from white to black (or black to white) are coded relative to the line above. If the transition is directly under a transition in the line above (zero offset), the code is only one bit. Only fixed offsets of zero and ±1, ±2, and ±3 are allowed.


TABLE 21.5.3 Terminating Codes

White run length    Code word    Black run length    Code word
0                   00110101     0                   0000110111
1                   000111       1                   010
2                   0111         2                   11
3                   1000         3                   10
4                   1011         4                   011
5                   1100         5                   0011
6                   1110         6                   0010
7                   1111         7                   00011
8                   10011        8                   000101
9                   10100        9                   000100
10                  00111        10                  0000100
11                  01000        11                  0000101
12                  001000       12                  0000111
13                  000011       13                  00000100
14                  110100       14                  00000111
15                  110101       15                  000011000
16                  101010       16                  0000010111
17                  101011       17                  0000011000
18                  0100111      18                  0000001000
19                  0001100      19                  00001100111
20                  0001000      20                  00001101000
21                  0010111      21                  00001101100
22                  0000011      22                  00000110111
23                  0000100      23                  00000101000
24                  0101000      24                  00000010111
25                  0101011      25                  00000011000
26                  0010011      26                  000011001010
27                  0100100      27                  000011001011
28                  0011000      28                  000011001100
29                  00000010     29                  000011001101
30                  00000011     30                  000001101000
31                  00011010     31                  000001101001
32                  00011011     32                  000001101010
33                  00010010     33                  000001101011
34                  00010011     34                  000011010010
35                  00010100     35                  000011010011
36                  00010101     36                  000011010100
37                  00010110     37                  000011010101
38                  00010111     38                  000011010110
39                  00101000     39                  000011010111
40                  00101001     40                  000001101100
41                  00101010     41                  000001101101
42                  00101011     42                  000011011010
43                  00101100     43                  000011011011
44                  00101101     44                  000001010100
45                  00000100     45                  000001010101
46                  00000101     46                  000001010110
47                  00001010     47                  000001010111
48                  00001011     48                  000001100100
49                  01010010     49                  000001100101
50                  01010011     50                  000001010010
51                  01010100     51                  000001010011
52                  01010101     52                  000000100100
53                  00100100     53                  000000110111
54                  00100101     54                  000000111000
55                  01011000     55                  000000100111
56                  01011001     56                  000000101000
57                  01011010     57                  000001011000
58                  01011011     58                  000001011001
59                  01001010     59                  000000101011
60                  01001011     60                  000000101100
61                  00110010     61                  000001011010
62                  00110011     62                  000001100110
63                  00110100     63                  000001100111

If vertical mode is not possible (for example, when a nonwhite line follows an all-white line), then horizontal mode is used. Horizontal mode is simply an extension of MHC; that is, two consecutive runs are coded by MHC and preceded by a code indicating horizontal mode. To avoid the vertical propagation of transmission errors to the end of the page, a one-dimensionally (MHC) coded line is sent every Kth line. The factor K is typically set to 2 or 4, depending on whether the vertical scanning density is 100 or 200 lines per inch. The K factor is resettable, which means that a one-dimensional line may be sent more frequently when considered necessary by the transmitter. The synchronization code (EOL) consists of eleven 0s followed by a 1, followed by a tag bit indicating whether the following line is coded one-dimensionally or two-dimensionally.

The two-dimensional coding scheme is an optional extension of the one-dimensional coding scheme. It is defined in terms of changing picture elements (see Fig. 21.5.2). A changing element is defined as an element whose color (i.e., black or white) is different from that of the previous element along the same scan line.

a0: The reference or starting changing element on the coding line. At the start of the coding line, a0 is set on an imaginary white changing element situated just before the first element on the line. During the coding of the coding line, the position of a0 is defined by the previous coding mode.
a1: The next changing element to the right of a0 on the coding line.
a2: The next changing element to the right of a1 on the coding line.
b1: The first changing element on the reference line to the right of a0 and of opposite color to a0.
b2: The next changing element to the right of b1 on the reference line.

Code words for the two-dimensional coding scheme are given in Table 21.5.5.
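From these definitions, the mode decision (though not the code word emission of Table 21.5.5) can be sketched directly; positions are pel indices along the lines.

def mr_mode(a1, b1, b2):
    """Pick the 2-D coding mode for the next transition.
    Pass mode: the transition pair (b1, b2) on the reference line lies wholly
    to the left of a1. Vertical mode: a1 is within 3 pels of b1. Otherwise,
    horizontal mode (two MH-coded runs)."""
    if b2 < a1:
        return ("pass",)
    if abs(a1 - b1) <= 3:
        return ("vertical", a1 - b1)  # offset in the range -3..+3
    return ("horizontal",)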

Extended Two-Dimensional Coding Scheme—Modified Modified READ (MMR)

The basic facsimile coding scheme specified for Group 4 facsimile (MMR) may be used as an option in Group 3 facsimile. This coding scheme must be used with the error correction mode (ECM) option, described below. The coding scheme, described in ITU-T Recommendation T.6,5 is very similar to that of Group 3 (Rec. T.4). The same modified READ code is used, but only two-dimensional lines are transmitted, with no EOL codes for synchronization; an error-free communication link makes this possible. A white line is assumed before the first actual line in the image. No fill is used; adequate buffering is assumed to provide a memory-to-memory transfer.


TABLE 21.5.4 Make Up Codes

White run length    Code word       Black run length    Code word
64                  11011           64                  0000001111
128                 10010           128                 000011001000
192                 010111          192                 000011001001
256                 0110111         256                 000001011011
320                 00110110        320                 000000110011
384                 00110111        384                 000000110100
448                 01100100        448                 000000110101
512                 01100101        512                 0000001101100
576                 01101000        576                 0000001101101
640                 01100111        640                 0000001001010
704                 011001100       704                 0000001001011
768                 011001101       768                 0000001001100
832                 011010010       832                 0000001001101
896                 011010011       896                 0000001110010
960                 011010100       960                 0000001110011
1024                011010101       1024                0000001110100
1088                011010110       1088                0000001110101
1152                011010111       1152                0000001110110
1216                011011000       1216                0000001110111
1280                011011001       1280                0000001010010
1344                011011010       1344                0000001010011
1408                011011011       1408                0000001010100
1472                010011000       1472                0000001010101
1536                010011001       1536                0000001011010
1600                010011010       1600                0000001011011
1664                011000          1664                0000001100100
1728                010011011       1728                0000001100101
EOL                 000000000001    EOL                 000000000001

Note: For those machines that choose to accommodate larger paper widths or higher pel densities, the following Make Up Code Set may be used:

Run length (black and white)    Make-up code
1792                            00000001000
1856                            00000001100
1920                            00000001101
1984                            000000010010
2048                            000000010011
2112                            000000010100
2176                            000000010101
2240                            000000010110
2304                            000000010111
2368                            000000011100
2432                            000000011101
2496                            000000011110
2560                            000000011111

Note: Run lengths of 2624 pels or greater are coded first by the make-up code of 2560. If the remaining part of the run (after the first make-up code of 2560) is 2560 pels or greater, additional make-up code(s) of 2560 are issued until the remaining part of the run is less than 2560 pels. The remaining part of the run is then encoded by a terminating code, or by a make-up code plus a terminating code, according to its range as described above.
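The note above amounts to a simple greedy procedure. The sketch below is illustrative only; the table objects TERM and MAKEUP are hypothetical containers assumed to hold the code words of Tables 21.5.3 and 21.5.4.

# Sketch of modified Huffman run-length encoding using the tables above.
# TERM[color][n] (0 <= n <= 63) and MAKEUP[color][m] (m = 64, 128, ...,
# 2560) are assumed to hold the code-word strings of Tables 21.5.3/21.5.4.

def encode_run(length, color, TERM, MAKEUP):
    bits = ""
    while length >= 2624:                 # very long runs: repeat the 2560 code
        bits += MAKEUP[color][2560]
        length -= 2560
    if length >= 64:                      # make-up code for the largest multiple of 64
        bits += MAKEUP[color][(length // 64) * 64]
        length %= 64
    return bits + TERM[color][length]     # terminating code for the remainder

For example, a white run of 2600 pels is coded as the 2560 make-up code followed by the terminating code for 40; a run of 2624 pels takes the 2560 make-up code, the 64 make-up code, and the terminating code for 0.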


FIGURE 21.5.2 Changing picture elements.

The objective of each of the image compression schemes is to reduce the number of bits transmitted and thus the transmission time (and cost). The more aggressive schemes provide the most compression, at a cost of increased implementation complexity. In general, T.6 coding outperforms T.4 two-dimensional coding, which in turn outperforms T.4 one-dimensional coding. The differences tend to be smaller on “busy” images and greater on images containing more white space. On dithered images and halftones the compression is very poor (or even negative), and one-dimensional coding outperforms the other methods in some cases.

TABLE 21.5.5 Two-Dimensional Code Table

Mode          Elements to be coded               Notation    Code word
Pass          b1, b2                             P           0001
Horizontal    a0a1, a1a2                         H           001 + M(a0a1) + M(a1a2) (see Note)
Vertical      a1 just under b1      (a1b1 = 0)   V(0)        1
              a1 to the right of b1 (a1b1 = 1)   VR(1)       011
                                    (a1b1 = 2)   VR(2)       000011
                                    (a1b1 = 3)   VR(3)       0000011
              a1 to the left of b1  (a1b1 = 1)   VL(1)       010
                                    (a1b1 = 2)   VL(2)       000010
                                    (a1b1 = 3)   VL(3)       0000010

Note: Code M() of horizontal mode represents the code words in Tables 21.5.3 and 21.5.4.

MODULATION AND DEMODULATION METHODS (MODEMS)

Every Group 3 facsimile machine must be able to operate at the standard modem speeds of 4.8 and 2.4 kb/s according to ITU-T Recommendation V.27ter. The receive modem has an automatic equalizer that compensates for telephone-line amplitude distortion and envelope-delay distortion, improving the accuracy of the delivered digital facsimile signal. Optional modems are V.29 (9.6 and 7.2 kb/s), V.17 with trellis coding for improved error immunity (14.4, 12, 9.6, and 7.2 kb/s), and V.34 for speeds up to 33.6 kb/s.

Group 3 facsimile modems adapt to the transmission characteristics of the telephone connection. Before sending fax data, the modem sends a standard training signal. The receiver uses this signal to “adapt” to the electrical characteristics of the connection. The highest rate available in both facsimile machines is tried first. If this speed would give too many errors, the transmitter tries the next lower speed. If this fails, the modem rate again steps down to the next lower speed. This system assures transmission at the highest rate consistent with the quality of the telephone line connection.

Standard Operation—V.27ter (4.8 and 2.4 kb/s)

At 4.8 kb/s, the modulation rate (or baud rate) is 1600 baud or 1.6 kB. The data stream to be transmitted is divided into groups of three consecutive bits (tribits). Each tribit is encoded as a phase change relative to the phase of the preceding signal element (see Table 21.5.6).


TABLE 21.5.6 Tribit Values and Phase Changes

Tribit values    Phase change
0 0 1            0°
0 0 0            45°
0 1 0            90°
0 1 1            135°
1 1 1            180°
1 1 0            225°
1 0 0            270°
1 0 1            315°

TABLE 21.5.7 Dibit Values and Phase Changes

Dibit values    Phase change
00              0°
01              90°
11              180°
10              270°

At the receiver, the tribits are decoded and the bits are reassembled in the correct order. At 2.4 kb/s, the modulation rate is 1200 baud or 1.2 kB. The data stream to be transmitted is divided into groups of two bits (dibits). Each dibit is encoded as a phase change relative to the phase of the immediately preceding signal element (see Table 21.5.7). At the receiver, the dibits are decoded and reassembled in the correct order.
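The differential encoding of Tables 21.5.6 and 21.5.7 can be illustrated with a short sketch. It shows only the phase bookkeeping; scrambling, pulse shaping, and the modulator itself are omitted, and the function names are hypothetical.

# Differential phase encoding per Tables 21.5.6 and 21.5.7: each bit group
# advances the carrier phase relative to the previous symbol.

TRIBIT_PHASE = {"001": 0, "000": 45, "010": 90, "011": 135,
                "111": 180, "110": 225, "100": 270, "101": 315}

def phases(bits, table=TRIBIT_PHASE, group=3):
    phase, out = 0, []
    for i in range(0, len(bits) - group + 1, group):
        phase = (phase + table[bits[i:i + group]]) % 360
        out.append(phase)          # absolute carrier phase for this symbol
    return out

# Example: at 4.8 kb/s the modem sends 1600 such tribit symbols per second.
print(phases("001010111"))         # -> [0, 90, 270]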

Optional Operation—V.29 (9.6 and 7.2 kb/s) and V.17 (14.4, 12, 9.6, and 7.2 kb/s)

For optional higher-speed operation, such as may be possible on high-quality circuits, the optional modes may be used. At all speeds, the modulation rate is 2.4 kB. At 14.4 kb/s the scrambled data stream to be transmitted is divided into groups of six consecutive data bits and mapped onto a signal space of 128 elements. At 12 kb/s the scrambled data stream is divided into groups of five consecutive data bits and mapped onto a signal space of 64 elements. At 9.6 kb/s the data are divided into groups of four consecutive data bits and mapped onto a signal space of 32 elements. At 7.2 kb/s the data are divided into groups of three consecutive data bits and mapped onto a signal space of 16 elements. The V.17 modem is more robust than V.29 because of its trellis coding; e.g., V.17 can operate at 9.6 kb/s on telephone lines where V.29 cannot. Higher data rates are theoretically possible and are highly desirable for transmission of continuous-tone color. The V.34 modem standard, approved in 1994, provides speeds up to 33.6 kb/s. The use of this modem is described in Annex F of Recommendation T.30. This modem also uses trellis coding and a number of symbol rates from 2400 to 3429 symbols per second. It requires the use of ECM and runs in half-duplex for facsimile. An unusual feature of this modem is the use of Recommendation V.8 for the modem start-up procedures. V.8 provides the means to determine the best mode of operation before the initiation of the handshake.
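The relationship among these figures is simply bit rate = modulation (symbol) rate × data bits per symbol, as the short sketch below illustrates. The 2**7 = 128 comment reflects the one redundant trellis-coded bit added per symbol, which is why six data bits address a 128-point signal space.

# Bit rate = modulation (symbol) rate x data bits per symbol, at 2400 baud:
for bits_per_symbol in (6, 5, 4, 3):
    print(2400 * bits_per_symbol, "b/s")   # 14400, 12000, 9600, 7200
# With trellis coding, one redundant bit is added per symbol, so six data
# bits select one of 2**7 = 128 constellation points.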

INTERNET FACSIMILE

The availability of the Internet for international communication provides the potential for using this transmission medium to transfer Group 3 facsimile messages between terminals. Since the characteristics of IP networks differ from those of the PSTN, some additional provisions must be standardized to maintain successful facsimile operation. Service requirements for Internet facsimile are defined in Recommendation F.185. Two methods have been defined to meet these requirements: a store-and-forward method akin to email, and a real-time method that is largely transparent to the endpoints.


Store-and-Forward Facsimile

Store-and-forward Internet facsimile occurs when the sending and receiving terminals are not in direct communication with each other. Transmission and reception take place via the store-and-forward mode on the Internet using Internet email. In store-and-forward mode the facsimile protocol “stops” at the gateway to the Internet and is reestablished at the gateway leaving the Internet. Two modes of store-and-forward facsimile are defined. In “Simple” mode, only the coded image is transmitted. In “Full” mode, three requirements must be satisfied:

1. The capabilities of the terminals are exchanged.
2. An acknowledgment of receipt is exchanged between gateways and may be transferred from the receiving terminal to the sending terminal.
3. The contents of standard messages used by the transmitting terminal are preserved.

ITU-T Recommendation T.37, procedures for the transfer of facsimile data via store-and-forward on the Internet, approved in 1998, standardized the Simple mode. It references a set of IETF documents called Requests for Comments (RFCs) that define the procedures for facsimile communication over the Internet using email. Full mode was added in 1999 with Addendum 1. The intention is that Full mode should support, to the greatest extent possible, the standard and optional features of Group 3 facsimile, including, among others, delivery notification, capabilities exchange, color and other optional coding mechanisms, and file transfer.

Real-Time Internet Facsimile

Recommendation T.38, procedures for real-time Group 3 facsimile communication between terminals over IP networks, was approved by the ITU-T in 2002. This standard describes how facsimile transmission takes place between Internet gateways. Communication between the facsimile machine and the gateway is by means of T.30. The T.38 procedures work in conjunction with Rec. H.323 and Rec. H.225.

The transmission method in T.38 may be described as a demodulation/remodulation (demod/remod) procedure. The facsimile transmission arrives at the gateway as a data stream modulated by the facsimile modem into an analog signal. At the gateway the signal is demodulated and packaged into data packets to be conveyed over the Internet by the emitting gateway. The receiving gateway reassembles the packets and remodulates them into a modem signal that continues on to the receiving facsimile terminal. Neither facsimile terminal is aware that part of the transmission was over the PSTN and part over the Internet. Once the PSTN calls are established on both ends, the two Group 3 terminals are virtually linked, and all T.30 session establishment and capabilities negotiation is carried out between the terminals. An alternate scenario would be a connection to a facsimile-enabled device (for example, a PC) that is connected directly to an IP network. In this case, there is a virtual receiving gateway as part of the device’s facsimile-enabling software and/or hardware.

Transmission Protocols. There are two types of transmission protocols over the Internet: TCP and UDP. TCP is error free (via retransmission), so errors translate into delay. UDP has no error control, so errors manifest themselves as lost packets. T.30 is very sensitive to delay, since a number of timers are active in the protocol which, if they expire, cause the transmission to fail. Lost packets can also have serious consequences: a packet lost in the image transmission phase can cause lost scan lines in the image unless ECM error control is in effect.

Modem Rate Alignment. When the TCP protocol method is selected, each gateway independently provides rate negotiation with the facsimile modems, and training is carried out independently. It would therefore be possible for each end of the connection to negotiate a different data rate, which would be a problem. To prevent this from occurring, a set of messages is provided to align the modem speeds. Because of the lower delays experienced with UDP, the modems can negotiate data rates without relying on the gateways. The V.34 modem is not used with Internet facsimile because of the added complexity.


Error Correction Mode

The optional error correction mode (ECM) applies to one-dimensional and two-dimensional coding and provides true error correction. The primary objective of ECM is to perform well against burst errors. Additional objectives included backward compatibility with existing facsimile machines and minimal transmission overhead on channels with low error rates.

The error-correction scheme is known as page selective repeat ARQ (automatic repeat request). The compressed image data are embedded in high-level data link control (HDLC) frames of length 256 octets or 64 octets and transmitted in blocks of 256 frames. The communication link operates in a half-duplex mode; that is, the transmission of image data and the acknowledgment of the data are not sent at the same time. The technique can be thought of as an extension to the Group 3 protocol. The protocol information is also embedded in HDLC frames, but does not use selective repeat for error control. Every Group 3 facsimile machine must have the mechanism to transmit and receive the basic HDLC frame structure, including flags, address, control, and frame check sequence; thus, the use of an extended HDLC scheme helped to minimize changes to existing facsimile designs.

The transmitting terminal divides the compressed image data into 256-octet or 64-octet frames and sends a block of 256 frames to the receiving terminal. (The receiving terminal must be able to receive both frame sizes.) Each transmitted frame has a unique frame number. The receiver requests retransmission of bad frames by frame number, and the transmitter retransmits the requested frames. After four requests for retransmission of the same block, the transmitter may stop or continue, with optional modem speed fallback.

The page selective repeat ARQ is a good compromise6 that balances complexity and throughput. A continuous selective repeat ARQ provides slightly higher throughput but requires a modem back channel. Forward error correction (FEC) schemes typically have higher overhead on good connections, can be more complex, and may break down in the presence of burst errors. In addition to providing the capability of higher throughput on noisy lines, the error correction mode option supports an error-free environment that has enabled many new features, such as T.6 encoding, color fax, and secure operation.
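A toy model of the selective repeat bookkeeping may make the mechanism concrete. The sketch below uses hypothetical names; real ECM embeds the frames in HDLC with flags, address, control, and frame check sequence, and exchanges the requests over the half-duplex T.30 protocol, all omitted here.

# Toy model of ECM page selective repeat ARQ: frames are numbered within a
# block, and the receiver requests only the frame numbers received in error.
# channel(frame) stands for one delivery attempt and returns True if the
# frame arrived intact.

def send_block(frames, channel, max_requests=4):
    outstanding = set(range(len(frames)))     # frame numbers not yet received good
    for _ in range(1 + max_requests):         # first pass plus up to four retries
        outstanding = {n for n in outstanding if not channel(frames[n])}
        if not outstanding:
            return True                       # block received error-free
    return False     # after four retransmission requests: stop or fall back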

COLOR FACSIMILE

Group 3 facsimile is capable of transmitting color, including color or gray-scale photographs and colored text and line art. Much research has been devoted to the development of a “universal” compression technique that would apply to both photographs and text, but the best approach for achieving high compression ratios while retaining quality is to compress the different image types according to their individual attributes. Photographs are compressed using an approach that puts high emphasis on maintaining the smoothness and accuracy of the colors. Text and line art are compressed with an approach that puts high emphasis on maintaining the detail and structure of the input.

The method to transmit continuous-tone color was added to Group 3 facsimile in 1994, with the approval of Recommendation T.42. This method is based on the popular Joint Photographic Experts Group (JPEG) standard. Although this was a good first step toward a color facsimile capability, it was not an efficient solution for text or line art. To correct this problem, Recommendation T.43, based on JBIG, was developed to accommodate text and line art with some limited color capability. The Mixed Raster Content (MRC) option was added to correct a weakness of the color facsimile options that use JPEG and JBIG; that is, before MRC there was no standard way of efficiently coding a page that contained both text and color photographic content. The MRC option, as implemented by ITU-T Recommendation T.44, is a way of describing documents with both bi-level and multilevel data within a page. These methods are outlined in the following paragraphs.

Coding Continuous Tone Color and Gray Scale Images

The facsimile standards include an option for the transmission of continuous-tone color and gray-scale images, based on an ISO/ITU-T standard7 commonly known as JPEG. As a result of diverse requirements, the JPEG compression algorithm is not a single algorithm but a collection of techniques often referred to as a toolkit. The intent is that applications, such as facsimile, will use a “customized” subset of the JPEG components. A detailed description of JPEG with a comprehensive bibliography is given in Ref. 8.


JPEG specifies two classes of coding processes: lossy and lossless. The lossy processes are all based on the discrete cosine transform (DCT), and the lossless on a predictive technique. There are four modes of operation under which the various processes are defined: the sequential DCT-based mode, the progressive DCT-based mode, the sequential lossless mode, and the hierarchical mode. In the sequential DCT-based mode, 8 × 8 blocks of pixels are transformed, and the resulting coefficients are quantized and then entropy coded (losslessly) by Huffman or arithmetic coding. The pixel blocks are typically formed by scanning the image (or image component) from left to right, and then block row by block row from top to bottom. The allowed sample precisions are 8 and 12 bits per component sample. All decoders that include any DCT-based mode of operation must provide a default decoding capability, referred to as the baseline sequential DCT process. This is a restricted form of the sequential DCT-based mode, using Huffman coding and 8 bits per sample precision for the source image. The application of JPEG to facsimile is based on the baseline sequential DCT process.

Continuous-tone color was added to Group 3 facsimile with the approval of Recommendation T.42. In order to represent continuous-tone color data accurately and uniquely, a device-independent interchange color space is required. This color image space must be able to encode the range of hard-copy image data when viewed under specified conditions. In addition to the basic color space, the reference white point, illuminant type, and gamut range must also be specified. The image pixel values are represented in the CIE 1976 (L* a* b*) color space, often referred to as CIELAB. This color space, defined by the CIE (Commission Internationale de l’Eclairage), has approximately equal visually perceptible differences between equispaced points throughout the space. The three components are L*, the luminance, and a* and b*, the chrominance components. Luminance-chrominance spaces offer gray-scale compatibility and higher DCT-based compression than other spaces. The human eye is much more sensitive to luminance than to chrominance; thus it is easier to optimize the quantization matrix when luminance and chrominance are separate components. Subsampling the chrominance components provides further compression.

The basic resolution is 200 pels/25.4 mm. Allowed values include 200, 300, and 400 pels/25.4 mm, with square (or equivalent) pels. At 200 × 200 pels/25.4 mm, a color photograph (A4 paper size) compressed to 1 bit per pixel and transmitted at 64 kb/s would require about 1 min to send. The selection of 8-bit or 12-bit data precision can also affect the data compression. Subsampling and data precision, as well as the ability to send color, are negotiated in Phase B.
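The heart of the baseline sequential DCT process can be sketched for a single 8 × 8 block as follows. The quantization table Q used here is flat and purely illustrative (JPEG supplies example luminance and chrominance tables), and Huffman entropy coding of the quantized coefficients is omitted.

import numpy as np

# One 8x8 block of the baseline sequential DCT process: level shift,
# forward 2-D DCT, then uniform quantization by an 8x8 table Q. The
# quantization is the lossy step; entropy coding is omitted.

def dct_matrix(n=8):
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)        # DC basis row has a different scale
    return c

def quantize_block(block, Q):
    C = dct_matrix()
    coeffs = C @ (block - 128.0) @ C.T     # level shift, then 2-D DCT
    return np.round(coeffs / Q).astype(int)

Q = np.full((8, 8), 16)                    # illustrative flat table
block = np.random.randint(0, 256, (8, 8))  # one block of 8-bit samples
print(quantize_block(block, Q))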

Lossless Coding of Color and Gray Scale Images

Recommendation T.43 was developed to accommodate text and line art with some limited color capability. It defines a lossless color data representation method using Recommendation T.82, which was prepared by the Joint Bi-level Image Experts Group (JBIG). Three types of images are treated: the first is a one-bit-per-color CMY(K) or RGB image; the second is a palettized color image in which palette tables are specified in the CIELAB color space defined in Recommendation T.42; and the last is a continuous-tone color or gray-scale image, also specified in the CIELAB color space. Images can be created in a variety of ways, including conventional scanning, computer generation, or image processing techniques such as one of the dither methods. Recommendation T.43 was approved in July 1997. The following section presents an overview of JBIG compression, followed by a description of the three image types treated by Recommendation T.43.

JBIG overview (ITU-T T.82/ISO 11544). The progressive bi-level coding technique consists of repeatedly reducing the resolution of a bi-level image, creating subimages each having one-half the number of pels per line and one-half the number of lines of the previous image. The lowest-resolution image, called the base layer, is transmitted losslessly (free of distortion) using binary arithmetic coding. The next higher-resolution image is then transmitted losslessly, using its own pels and previously transmitted (causal) pels as predictors for the next pel to be transmitted. If prediction is possible (both transmitter and receiver are equipped with rules to tell whether this is the case), the predicted pel value is not transmitted. This progressive buildup is repeated until the final image has been losslessly transmitted (the process stops at the receiver’s request). A sequential mode of transmission also exists. It consists of performing the entire progressive transmission on


successive horizontal stripes of the original image. The algorithm performs image reduction, typical prediction, deterministic prediction, and binary arithmetic encoding and decoding.

Recommendation T.43. The one-bit-per-color mode is intended to represent images with primary colors using the CMY(K) or RGB color space. Each bit plane indicates the existence of one of the primary colors. The image is encoded with JBIG and transmitted. The receiver represents the image on a CRT (soft copy) or on paper (hard copy) using its own primary colors. The colors of the document may not be represented accurately at the receiver in the one-bit-per-color mode, since neither RGB nor CMY(K) is a device-independent color space. A typical application of this mode is transmitting business correspondence containing a colored logo.

The palettized color mode expands the possible number of colors that may be used to characterize an image and, in addition, provides the capability for accurate color reproduction. Both of these features are achieved by using color palette table data specified in the device-independent interchange color space (CIELAB) defined in Recommendation T.42. The price for this added capability is coding (and transmission) efficiency; that is, the compression is lower, so the facsimile transmission takes longer.

The continuous-tone color mode provides the highest color accuracy of the three modes in Recommendation T.43 and the lowest compression efficiency. In this mode, an original continuous-tone color or gray-scale image is represented in the color space defined in Recommendation T.42 (CIELAB). This mode provides lossless encoding using JBIG bit-plane compression. Gray-code conversion of the bit planes is used to improve the compression efficiency.
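The Gray-code step is easily illustrated. After conversion, consecutive sample values differ in exactly one bit, so smooth gradients toggle only one bit plane at a time and present quieter planes to the JBIG coder. A minimal sketch follows; the function names are hypothetical.

# Gray-code conversion before bit-plane splitting, as used in the
# continuous-tone mode: v ^ (v >> 1) maps binary to Gray code, in which
# adjacent values differ in exactly one bit.

def to_gray(v):
    return v ^ (v >> 1)

def bit_planes(samples, bits=8):
    gray = [to_gray(v) for v in samples]
    # Most significant plane first; each plane is one 0/1 sequence.
    return [[(g >> p) & 1 for g in gray] for p in range(bits - 1, -1, -1)]

# A smooth ramp of sample values changes one plane at a time:
print(bit_planes([126, 127, 128, 129], bits=8))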

Coding Images with Mixed Raster Content

The MRC option, as implemented by ITU-T Recommendation T.44, is a way of describing raster-oriented (scanned or synthetic) documents that mix content types within a page: continuous-tone or palettized color (contone) regions usually associated with naturally occurring images, bi-level detail associated with text and line art, and multilevel colors associated with that text and line art. The goal of MRC is to make the exchange of raster-oriented mixed-content color documents possible with higher speed, higher image quality, and modest computing resources. This efficiency is realized through segmentation of the image into multiple layers (planes) as determined by image type, and through image-specific encoding and spatial and color-resolution processing. The MRC method defines no new image compression methods, but it does require that all previously defined compression methods used in Group 3 facsimile be supported.

SECURE FACSIMILE

Group 3 secure facsimile, including new annexes to T.30 and a new Recommendation T.36, was approved by the ITU-T in July 1997. The new and amended recommendations accommodate two methods, one based on a public-key cryptosystem and one based on a secret-key cryptosystem. The public-key management system is based on the method devised by Ron Rivest, Adi Shamir, and Leonard Adleman, called RSA after the initials of its inventors. The secret-key method is based on the Hawthorne Key Management (HKM) system, the Hawthorne Facsimile Cipher (HFX40), and the HFX40-I message integrity system, hereafter referred to as the HKM/HFX cryptosystem. Both systems are incorporated into the Group 3 facsimile protocol, and either may be used independently.

Procedures Using the RSA Security System

The procedures for the RSA security system are defined in Annex H of ITU-T Recommendation T.30. The RSA security system uses one pair of keys (encipherment public key and encipherment secret key) for document encryption. The registration mode permits the sender and receiver to register and store the public keys of the other party prior to secure facsimile transmission. Two parties wishing to communicate can register their public keys with


each other in two steps. First, the sender and receiver each hash their identities and public key(s), and the hash results are exchanged out of band (directly, by mail, by phone, and so forth) and stored in the terminals. Then the identities and public keys of the two parties are exchanged and stored in the registration mode of the T.30 protocol. The validity of the identity and public key(s) of the other party is assessed by hashing these values and comparing them with the hash result that was exchanged out of band.

An optional security page follows the last page of the transmitted document. The security page contains the following parameters: security page indicator, identity of the sender, public key of the sender, identity of the recipient, random number created by the recipient, time stamp, length of the document, digital signature of the preceding items, certificate of the public key of the sender, and security-page-type identification.

One other significant issue is the key length. Because of the varying regulations of different governments, it was agreed to limit the session key length for the RSA secure facsimile system to 40 bits. The amendment also adds a redundancy mechanism that repeats the 40-bit session key to fill out the length required by encipherment algorithms whose keys must be longer than 40 bits.

Procedures Using the HKM/HFX Cryptosystem

The secret-key method consists of the Hawthorne Key Management (HKM) system, the HFX40 carrier cipher (encryption algorithm), and the HFX40-I message integrity system (hashing algorithm). The HKM key management algorithm includes a registration procedure and a secure transmission of a secret key that enables subsequent transmissions to be carried out securely. These procedures are defined in ITU-T Recommendation T.36.

In the registration mode, the two terminals exchange information that enables them to identify each other uniquely. This is based on the agreement between the users of a secret one-time key that must be exchanged securely (the means of exchange is not defined by the recommendations). Each terminal stores a 16-digit number that is uniquely associated with the terminal with which it has carried out registration. This number is used, together with a challenge procedure, to provide mutual authentication and document confirmation. The procedure is also used to transmit the session key to be used for document encryption and hashing.

An override mode is also provided that bypasses the exchange of security signals between two terminals. Just as in the classic secret-key cryptosystem approach, this mode depends on the secure exchange of a secret key outside the system. This key is used by the transmitting terminal to encrypt the document and by the receiving terminal to decrypt the document.

REFERENCES

1. K. R. McConnell, D. Bodson, and S. Urban, FAX, 3rd ed., Artech House.
2. ITU-T Recommendation T.4, Standardization of Group 3 Facsimile Apparatus for Document Transmission.
3. ITU-T Recommendation T.30, Procedures for Document Facsimile Transmission in the General Switched Telephone Network.
4. D. A. Huffman, “A Method for the Construction of Minimum Redundancy Codes,” Proc. IRE, Vol. 40, pp. 1098–1101, September 1952.
5. ITU-T Recommendation T.6, Facsimile Coding Schemes and Coding Control Functions for Group 4 Facsimile Apparatus, Vol. VII, Fascicle VII.3, pp. 48–57.
6. “Error Control Option for Group 3 Facsimile Equipment,” National Communications System Technical Information Bulletin 87-4, January 1987.
7. ISO DIS 10918-1/ITU-T T.81, “Digital Compression of Continuous-Tone Still Images, Part I: Requirements and Guidelines.”
8. J. L. Mitchell and W. B. Pennebaker, JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, 1993.


SECTION 22

BROADCAST AND CABLE SYSTEMS

Coded orthogonal frequency division multiplex (COFDM) is used in new audio transmission systems so that numerous low-power broadcast transmitters spaced over a wide area can broadcast the identical signal on the same frequency. As an automobile travels around the region, CD-quality reception is obtained continuously, with no interference and no receiver retuning required. Radio broadcast data systems have digital subcarriers superimposed on, or added to, the existing channel spectrum. These subcarriers transmit to the receivers such information as station call letters, road conditions, program or music names, and other data believed to be useful in a moving automobile.

One of digital technology’s main advantages is the applicability of computer processing power and features, so that the challenges of scrambling and signal security can be approached with new techniques. Unfortunately, with cable television systems this advanced digital technology also provides new means of attack for the would-be cable television pirate. To counter this problem, it must be possible to respond to a security attack by replacing the breached security element. If the cable operator owns the set-top boxes, they can be replaced. If the set-top boxes were sold to the subscriber, a plug-in card or module, called the point-of-deployment (POD) module, is used. It remains the property of the cable operator and can be disabled electronically. The subscriber would then be given a new POD device based on a system not yet breached. R.J.

In This Section:

CHAPTER 22.1 BROADCAST TRANSMISSION PRACTICE  22.3
INTRODUCTION  22.3
ALLOCATIONS  22.3
BROADCASTING EQUIPMENT  22.23
BIBLIOGRAPHY  22.35
ON THE CD-ROM  22.35

CHAPTER 22.2 AM AND FM BROADCAST RECEIVERS  22.37
AM RECEIVERS: GENERAL CONSIDERATIONS  22.37
FM BROADCAST RECEIVERS: GENERAL CONSIDERATIONS  22.44
DIGITAL RADIO RECEIVERS  22.50
REFERENCES  22.57

CHAPTER 22.3 CABLE TELEVISION SYSTEMS  22.58
INTRODUCTION  22.58
HISTORICAL PERSPECTIVE  22.58
SPECTRUM REUSE  22.59
CABLE NETWORK DESIGN  22.59
SIGNAL QUALITY  22.61


DIGITAL VIDEO  22.61
CABLE SYSTEM TRADE-OFFS  22.63
TECHNICAL DETAIL  22.64
PROJECTIONS FOR DIGITAL CABLE  22.71
BIBLIOGRAPHY  22.72
ON THE CD-ROM  22.72

On the CD-ROM:
Frequency Channel Assignments for FM Broadcasting
Numeral Designation of Television Channels
Zone Designations for Television Broadcasting
Minimum Distance Separation Requirements for FM Broadcast Transmitters in km (mi)
Medium Wave AM Standard Broadcasting Definitions
Frequency Modulation Broadcasting Definitions
Analog Television (NTSC) Definitions
ATSC Digital Transmission System Definitions
Short Wave Broadcasting Definitions
Projections for Digital Cable


CHAPTER 22.1

BROADCAST TRANSMISSION PRACTICE

Earl F. Arbuckle, III

INTRODUCTION

Broadcasting refers to those wireless services, such as standard broadcast (AM), FM, television, and short-wave, that are intended for free reception by the general public. Point-to-point communications links (i.e., two-way radio) and point-to-multipoint private or subscription-based transmission systems (e.g., MMDS, ITFS, and DBS satellite) are not included in this definition and are therefore not discussed in this section. In the United States, broadcasting activities are regulated by the Federal Communications Commission (FCC). The FCC is empowered by the Communications Act of 1934 (as amended) to administratively control the use of radio-frequency (RF) spectrum by licensing users within an allocations framework developed through international consensus at various World Administrative Radio Conferences (WARC). Potential licensees must demonstrate by formal application and engineering studies that a license grant would conform to the spectrum allocation, technical performance, and public service criteria spelled out in the FCC Rules. In the following paragraphs, reference is made, when appropriate, to the specific section of the FCC Rules relevant to the paragraph. Because technology is developing rapidly and the FCC Rules are dynamic, the reader is advised to consult them directly whenever specific compliance must be determined. For the same reason, this handbook states only the fundamental tenets of those rules that are likely to remain immutable.

ALLOCATIONS

Standard Broadcast (AM)

An amplitude-modulated (AM) standard broadcast channel is that band of frequencies occupied by the carrier and the upper and lower sidebands, with the carrier frequency at the center. Channels are designated by their assigned carrier frequencies. The AM broadcast band consists of 117 carrier frequencies, which begin at 540 kHz and progress in 10-kHz steps to 1700 kHz. (FCC Rules 73.14)

Classes of AM broadcast channels and stations are:

Clear channel. One on which stations are assigned to serve wide areas. These stations are protected from objectionable interference within their primary service areas and, depending on the class of station, their secondary service areas. Stations operating on these channels are classified as follows:

Class A station. An unlimited-time station that operates on a clear channel and is designed to render primary and secondary service over an extended area and at relatively long distances from its transmitter. Its primary service area is protected from objectionable interference from other stations on the same and


adjacent channels, and its secondary service area is protected from interference from other stations on the same channel. The operating power shall not be less than 10 kW nor more than 50 kW.

Class B station. An unlimited-time station designed to render service only over a primary service area. Class B stations are authorized to operate with a minimum power of 0.25 kW (or, if less than 0.25 kW, an equivalent RMS antenna field of at least 141 mV/m at 1 km) and a maximum power of 50 kW, or 10 kW for stations authorized to operate in the 1605–1705 kHz band.

Class D station. Operates either daytime, limited time, or unlimited time with nighttime power less than 0.25 kW and an equivalent RMS antenna field of less than 141 mV/m at 1 km. Class D stations shall operate with daytime powers not less than 0.25 kW nor more than 50 kW. Nighttime operations of Class D stations are not afforded protection and must protect all Class A and Class B operations during nighttime hours. New Class D stations that had not been previously licensed as Class B will not be authorized.

Regional channel. One on which Class B and Class D stations may operate and serve primarily a principal center of population and the rural area contiguous thereto.

Local channel. One on which stations operate unlimited time and serve primarily a community and the suburban and rural areas immediately contiguous thereto.

Class C station. Operates on a local channel and is designed to render service only over a primary service area that may be reduced as a consequence of interference. The power shall not be less than 0.25 kW nor more than 1 kW. Class C stations that are licensed to operate with 0.1 kW may continue to do so. (FCC Rules 73.21)

Frequency Modulation Broadcast (FM)

The FM broadcast band consists of that portion of the radio frequency spectrum between 88 and 108 MHz. It is divided into 100 channels of 200 kHz each. For convenience, the frequencies available for FM broadcasting (including those assigned to noncommercial educational broadcasting) are given numerical designations, which are shown on the accompanying CD-ROM.

Different classes of FM station are provided so as to best serve the intended coverage area. Class A stations, for example, are authorized to serve a single community of limited geographic size. Class B stations have increased capability, suitable for metropolitan-area coverage. Class C stations, on the other hand, are considered regional services and are granted increased power and antenna height accordingly. The maximum effective radiated power (ERP) in any direction, reference height above average terrain (HAAT), and distance to the class contour for each FM station class are listed in Table 22.1.1.

TABLE 22.1.1 Maximum ERP, HAAT, and Distance to the Class Contour for Each FM Station Class

Station class    Maximum ERP            Reference HAAT in meters (ft)    Class contour distance in kilometers (mi)
A                6 kW (7.8 dBk)         100 (328)                        28 (17)
B1               25 kW (14.0 dBk)       100 (328)                        39 (24)
B                50 kW (17.0 dBk)       150 (492)                        52 (32)
C3               25 kW (14.0 dBk)       100 (328)                        39 (24)
C2               50 kW (17.0 dBk)       150 (492)                        52 (32)
C1               100 kW (20.0 dBk)      299 (981)                        72 (45)
C                100 kW (20.0 dBk)      600 (1968)                       92 (57)
D*               0.01 kW (–20.0 dBk)    N/A                              N/A

*Secondary


Analog NTSC Television Broadcast (TV)

A television channel is a band of frequencies 6 MHz wide in one of the television broadcast bands (VHF or UHF), designated either by number or by the extreme lower and upper frequencies. Numerical designation of channels is given on the CD-ROM. Channels 2 through 13 are considered very high frequency (VHF), while channels 14 through 69 are considered ultrahigh frequency (UHF). The FCC Rules provide specific channel allocations to cities and towns so as to maximize spectrum reuse and efficiency.

Unlike radio stations, television stations are not actually described by a class designation but rather by channel number and service level; namely, a station is considered low-band VHF if assigned to a channel from 2 through 6, high-band VHF if assigned to a channel from 7 through 13, or UHF if assigned to a channel from 14 through 69. A station is considered either full power or low power. The following shows the allowable effective radiated power and antenna height for full-power stations, which also vary by geographic zone (see Zone Designations on the CD-ROM).

Minimum power requirements are at least –10 dBk (100 W) horizontally polarized visual effective radiated power in any horizontal direction. No minimum antenna height above average terrain is specified. Maximum power may not exceed the boundaries specified in the following formulas, where ERPmax is the maximum effective radiated power in decibels above 1 kW (dBk) and HAAT is the height above average terrain in meters:

1. Channels 2 to 6 in Zone I: ERPmax = 102.57 – 33.24 log10(HAAT), and –10 dBk ≤ ERPmax ≤ 20 dBk
2. Channels 2 to 6 in Zones II and III: ERPmax = 67.57 – 17.08 log10(HAAT), and 10 dBk ≤ ERPmax ≤ 20 dBk
3. Channels 7 to 13 in Zone I: ERPmax = 107.57 – 33.24 log10(HAAT), and –4.0 dBk ≤ ERPmax ≤ 25 dBk
4. Channels 7 to 13 in Zones II and III: ERPmax = 72.57 – 17.08 log10(HAAT), and 15 dBk ≤ ERPmax ≤ 25 dBk
5. Channels 14 to 69 in Zones I, II, and III: ERPmax = 84.57 – 17.08 log10(HAAT), and 27 dBk ≤ ERPmax ≤ 37 dBk

These boundaries determine the maximum possible combination of antenna height and ERP in dBk. When specifying an ERP less than that permitted by the lower boundary, any antenna HAAT may be used. For values of antenna HAAT greater than 2300 m, the maximum ERP is the lower limit specified for each equation.
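A small sketch shows how the boundary formulas are applied. The coefficients and limits are transcribed from items 1 to 5 above, and the treatment of HAAT above 2300 m follows the preceding paragraph; the dictionary keys and function name are purely illustrative.

import math

# Maximum ERP (dBk) versus HAAT (m), per the formulas above.
# Each entry is (a, b, lower limit, upper limit) for
# ERPmax = a - b*log10(HAAT), clamped to the stated dBk range.

LIMITS = {
    ("2-6",   "I"):        (102.57, 33.24, -10.0, 20.0),
    ("2-6",   "II/III"):   (67.57,  17.08,  10.0, 20.0),
    ("7-13",  "I"):        (107.57, 33.24,  -4.0, 25.0),
    ("7-13",  "II/III"):   (72.57,  17.08,  15.0, 25.0),
    ("14-69", "I/II/III"): (84.57,  17.08,  27.0, 37.0),
}

def erp_max_dbk(channels, zone, haat_m):
    a, b, low, high = LIMITS[(channels, zone)]
    if haat_m > 2300:
        return low                      # lower limit applies above 2300 m
    return min(max(a - b * math.log10(haat_m), low), high)

# Example: channels 7-13, Zone I, HAAT 300 m:
print(erp_max_dbk("7-13", "I", 300))    # 107.57 - 33.24*log10(300) ~ 25.2, clamped to 25 dBk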

Service Levels and Coverage Prediction

In general, AM broadcast service is determined by field strength and interference protection levels. Interference protection is necessary since AM medium-wave frequencies have the potential to travel great distances. FM and TV broadcast service, on the other hand, is determined mainly by field strength, since interference protection is typically not provided in the FCC Rules; FM and TV operate on frequencies that can be characterized as having “line-of-sight” propagation.

Standard broadcast (AM) service levels are as follows. Mainland U.S. Class A stations operate with a power of 50 kW. These stations are afforded protection as follows (stations in Alaska are subject to slightly different standards). In the daytime, protection is provided to the 0.1-mV/m groundwave contour from stations on the same channel, and to the 0.5-mV/m groundwave contour from stations on adjacent channels. At night, such protection extends to the 0.5-mV/m, 50 percent skywave contour from stations on the same channel.


Class B stations operate on clear and regional channels with powers not less than 0.25 kW nor more than 50 kW. Class B stations should be located so that the interference received from other stations will not limit the service area to a groundwave contour value greater than 2.0 mV/m at night and the 0.5-mV/m groundwave contour in the daytime, which are the values for the mutual protection between this class of stations and other stations of the same class.

Class C stations operate on local channels with powers not less than 0.25 kW nor more than 1 kW. Such stations are normally protected to the daytime 0.5-mV/m contour. On local channels the separation required for daytime protection also determines the nighttime separation.

Class D stations operate on clear and regional channels with daytime powers of not less than 0.25 kW (or an equivalent RMS field of 141 mV/m at 1 km if less than 0.25 kW) and not more than 50 kW. Class D stations that have previously received nighttime authority to operate with powers of less than 0.25 kW (or equivalent RMS fields of less than 141 mV/m at 1 km) are not required to provide nighttime coverage and are not protected from interference during nighttime hours.

When a station is already limited by interference from other stations to a contour value greater than that normally protected for its class, the individual received limits shall be the established standard for such station with respect to interference from each other station.

The four classes of AM broadcast stations have, in general, three types of service area: primary, secondary, and intermittent. Class A stations render service to all three areas. Class B stations render service to a primary area, but the secondary and intermittent service areas may be materially limited or destroyed by interference from other stations, depending on the station assignments involved. Class C and Class D stations usually have only primary service areas. Interference from other stations may limit intermittent service areas and generally prevents any secondary service to those stations that operate at night. Consistent intermittent service may still be obtained in many cases, depending on the station assignments involved.

The groundwave signal strength required to render primary service is 2 mV/m for communities with populations of 2500 or more and 0.5 mV/m for communities with populations of less than 2500. (See FCC Rules Part 73.184 for curves showing distance to various groundwave field strength contours for different frequencies and ground conductivities.)

A Class C station may be authorized to operate with a directional antenna during daytime hours provided the power is at least 0.25 kW. In computing the degree of protection that such an antenna will afford, the radiation produced by the directional antenna system must be assumed to be no less, in any direction, than that which would result from nondirectional operation using a single element of the directional array with 0.25 kW.

All classes of AM broadcast stations have primary service areas subject to limitation by fading and noise, and to interference from other stations, to the contours set out for each class of station. Secondary service is provided during nighttime hours in areas where the skywave field strength, 50 percent or more of the time, is 0.5 mV/m or greater. Satisfactory secondary service to cities is not considered possible unless the field strength of the skywave signal approaches or exceeds the value of the groundwave field strength required for primary service. Secondary service is subject to some interference and extensive fading, whereas the primary service area of a station is subject to no objectionable interference or fading. Only Class A stations are assigned on the basis of rendering secondary service. (Standards have not been established for objectionable fading because of the relationship to receiver characteristics. Selective fading causes audio distortion and signal strength reduction below the noise level, characteristics objectionable in many modern receivers. The automatic volume control circuits in better-designed receivers generally maintain the audio output at a sufficiently constant level to permit satisfactory reception during most fading conditions.)

Intermittent service is rendered by the groundwave; it begins at the outer boundary of the primary service area and extends to the distance at which the signal strength decreases to a value too low to provide any service. This may be as low as a few microvolts per meter in certain areas and as high as several millivolts per meter in areas of high noise level, interference from other stations, or objectionable fading at night. The intermittent service area may vary widely from day to night and generally varies over shorter intervals of time. Only Class A stations are protected from interference from other stations in the intermittent service area.

Broadcast stations are licensed to operate unlimited time, limited time, daytime, share time, and specified hours. New stations may be licensed only for unlimited-time operation. Unlimited-time stations may operate 24 h per day. Limited-time stations (Class B only) may operate only during daytime hours until local sunset or


sunset at the nearest Class A station on the same frequency, whichever is easternmost. Daytime stations may operate only between local sunrise and local sunset. Share-time stations, of which there are very few, operate on a frequency shared with another station in the same service area in accordance with a mutually agreed schedule, which the FCC incorporates into the stations’ licenses. Stations authorized to operate during specified hours have those hours enumerated in their FCC licenses.

FM Broadcast Service Levels

FM broadcast service levels are determined by the class of FCC authorization. Within any class, coverage is a function of effective radiated power (ERP) and antenna height above average terrain (HAAT); in general, higher ERP and higher HAAT result in larger coverage areas. The FCC table of allotments takes these factors into consideration to minimize the potential for interference or coverage overlap. Thus, FM broadcast stations are not generally protected from interference caused by the operation of other FM stations, so long as those stations operate in compliance with the FCC Rules. Class D (secondary) stations may not cause interference to primary stations. The primary protection against interference is based on a minimum allowable spacing between FM broadcast transmitter sites (see the accompanying CD-ROM).

Determination of predicted field strengths for FM broadcast stations is made using the FCC F(50/50) graphs. (Refer to FCC Rules 73.699 for the current graphs.) The 50 percent field strength is defined as that value exceeded for 50 percent of the time. The procedure is as follows.

The F(50/50) chart gives the estimated 50 percent field strengths exceeded at 50 percent of the locations, in dB above 1 μV/m (dBu). The chart is based on an effective power radiated from a half-wave dipole antenna in free space that produces an unattenuated field strength at 1 km of about 107 dB above 1 μV/m (221.4 mV/m). To use the chart for other ERP values, convert the ordinate scale by the appropriate adjustment in dB. For example, the ordinate scale for an ERP of 50 kW (17 dBk) should be adjusted by 17 dB; a field strength of 40 dBu would therefore be converted to 57 dBu. When predicting the distance to field strength contours, use the maximum ERP of the main radiated lobe in the pertinent azimuthal direction. When predicting field strengths over areas not in the plane of the maximum main lobe, use the ERP in the direction of such areas, determined by considering the appropriate vertical radiation pattern.

The antenna height to be used with this chart is the height of the radiation center of the antenna above the average terrain along the radial in question. In determining the average elevation of the terrain, the elevations between 3 and 16 km from the antenna site are used. Profile graphs are drawn for eight radials beginning at the antenna site and extending out 16 km. The radials should be drawn for each 45° of azimuth, starting with true north. At least one radial must include the principal community to be served, even though it may be more than 16 km from the antenna site. However, in the event that none of the evenly spaced radials includes the principal community to be served, and one or more such radials are drawn in addition, those radials must not be used in computing the antenna height above average terrain.

The profile graph for each radial is plotted by contour intervals from 12 to 30 m and, where the data permit, at least 50 points of elevation (generally uniformly spaced) should be used for each radial. In instances of very rugged terrain, where the use of 30-m contour intervals would result in several points in a short distance, 60- or 120-m contour intervals may be used for such distances. On the other hand, where the terrain is uniform or gently sloping, the smallest contour interval indicated on the topographic map should be used, although only relatively few points may be available. The profile graph should indicate the topography accurately for each radial, and the graphs should be plotted with the distance in kilometers as the abscissa and the elevation in meters above mean sea level as the ordinate. The profile graphs should indicate the source of the topographical data used. The graph should also show the elevation of the center of the radiating system. The graph may be plotted either on rectangular coordinate paper or on special paper that shows the curvature of the earth (commonly called 4/3-radius paper). It is not necessary to take the curvature of the earth into consideration in this procedure, as this factor is taken care of in the charts showing signal strengths.

The average elevation of the 13-km distance between 3 and 16 km from the antenna site should then be determined from the profile graph for each radial. This may be obtained by averaging a large number of equally spaced points, by using a planimeter, or by obtaining the median elevation (that exceeded for 50 percent of the distance) in sectors and averaging those values. In the event that any of the radials encompass large amounts of water or non-U.S. territory, slightly different rules are applied.
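The chart adjustment is straightforward decibel arithmetic: ERP in dBk is 10 log10(P/1 kW), and each contour reading shifts by that amount, as in the 50-kW example above. A minimal sketch:

import math

# Chart adjustment for the F(50/50) curves: ERP in dBk is 10*log10(P/1 kW),
# and a chart ordinate value is shifted by that amount.

def dbk(erp_kw):
    return 10 * math.log10(erp_kw)

print(round(dbk(50)))        # 50 kW -> 17 dBk
print(40 + round(dbk(50)))   # a 40 dBu curve value reads as 57 dBu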


Here is an example calculation of HAAT where all of the radials are over land within the United States and the heights above average terrain on the eight radials are as follows:

Radial    Meters
0°        120
45°       255
90°       185
135°      90
180°      –10
225°      –85
270°      40
315°      85

The antenna height above average terrain is computed as follows:

(120 + 255 + 185 + 90 – 10 – 85 + 40 + 85)/8 = 85 m

In cases where the terrain in one or more directions from the antenna site departs widely from the average elevation of the 3- to 16-km sector, the prediction method may indicate contour distances that are different from what may be expected in practice. For example, a mountain ridge may indicate the practical limit of service although the prediction method may indicate otherwise. In such cases the prediction method should be followed, but a supplemental showing may be made concerning the contour distances as determined by other means. Such supplemental showings should describe the procedure used and should include sample calculations. Maps of predicted coverage should include both the coverage as predicted by the regular method and as predicted by a supplemental method. When measurements of an area are required, these should include the area obtained by the regular prediction method and the area obtained by the supplemental method.

In directions where the terrain is such that antenna heights of less than 30 m for the 3- to 16-km sector are obtained, an assumed height of 30 m should be used for the prediction of coverage. However, where the actual contour distances are critical factors, a supplemental showing of expected coverage must be included, together with a description of the method used in predicting such coverage. In special cases, the FCC may require additional information as to terrain and coverage.

The effect of terrain roughness on the predicted field strength of a signal at points distant from an FM transmitting antenna is assumed to depend on the magnitude of a terrain roughness factor, a measure of the difference between the elevations exceeded by all points on the profile for 10 and 90 percent, respectively, of the length of the profile segment. This factor is simply added to the profile average height. No terrain roughness correction need be applied when all field strength values of interest are predicted to occur 10 km or less from the transmitting antenna. The FCC F(50/50) field strength charts were developed assuming a terrain roughness factor of 50 m, which is considered representative of average terrain in the United States. Where the roughness factor for a particular propagation path is found to depart appreciably from this value, a terrain roughness correction should be applied.

Television Broadcast Service Levels

In the authorization of TV stations, two field strength contours are represented in the FCC F(50/50) coverage chart. These are specified as Grade A and Grade B and indicate the approximate extent of coverage over average terrain in the absence of interference from other television stations. Under actual conditions, the true coverage may vary greatly from these estimates because the terrain over any specific path is likely to be different from the average terrain on which the field strength charts were based. The required field strength in decibels above 1 µV/m (dBu) for the Grade A and Grade B contours is as follows:

Channel            Grade A (dBu)    Grade B (dBu)
Channels 2–6             68               47
Channels 7–13            71               56
Channels 14–69           74               64


It should be realized that the F(50/50) curves when used for Channels 14 to 69 are not based on measured data at distances beyond about 48.3 km (30 mi). Theory would indicate that the field strengths for Channels 14 to 69 should decrease more rapidly with distance beyond the horizon than for Channels 2 to 6. For this reason, the curves should be used with appreciation of their limitations in estimating levels of field strength. Further, the actual extent of service will usually be less than indicated by these estimates due to interference from other stations. Because of these factors, the predicted field strength contours give no assurance of service to any specific percentage of receiver locations within the distances indicated. As in the case of FM broadcast stations, television stations are not guaranteed any protection from interference except for that arising out of minimum spacing requirements. In the special case of channel 6 television stations, additional protection is afforded by restrictions on the operation of noncommercial educational stations on the adjacent lower end of the FM broadcast band (FCC Rules 73.525).

Technical Standards

In the United States, the Federal Communications Commission (FCC) establishes the technical standards with which all broadcast facilities must comply. Such standards have the goal of providing maximum service to the public and therefore address issues such as transmitter emissions purity, frequency, power, and modulation. These standards help ensure that each broadcaster radiates a signal that is compatible with the intended receivers without causing undue interference to other stations. Medium wave AM standard broadcasting definitions are included on the accompanying CD-ROM.

Amplitude Modulation Broadcasting

The emissions of AM broadcast stations must not cause harmful interference. Such emissions may be measured using a properly operated and suitable swept-frequency RF spectrum analyzer with a peak hold duration of 10 min, no video filtering, and a 300-Hz resolution bandwidth, except that a wider resolution bandwidth may be employed above 11.5 kHz to detect transient emissions. Alternatively, other specialized receivers or monitors with appropriate characteristics may be used to determine compliance with the FCC Rules. Measurements of the emissions of an operating station are to be made at ground level approximately 1 km from the center of the antenna system.

FCC Rules require that emissions 10.2 to 20 kHz removed from the carrier be attenuated at least 25 dB below the unmodulated carrier level, emissions 20 to 30 kHz removed from the carrier be attenuated at least 35 dB, emissions 30 to 60 kHz removed from the carrier be attenuated at least [5 + 1 dB/kHz of offset] (i.e., 35 dB at 30 kHz increasing to 65 dB at 60 kHz), and emissions 60 to 75 kHz removed from the carrier be attenuated at least 65 dB. Emissions removed by more than 75 kHz must be attenuated at least 43 + 10 log10(power in watts) dB or 80 dB below the unmodulated carrier level, whichever is the lesser attenuation, except for transmitters having power less than 158 W, where the attenuation must be at least 65 dB below carrier level.

The percentage of modulation is to be maintained at as high a level as is consistent with good quality of transmission and good broadcast service. Generally, the modulation should not be less than 85 percent on peaks of frequent recurrence, but where lower modulation levels may be required to avoid objectionable loudness or to maintain the dynamic range of the program material, the degree of modulation may be reduced to whatever level is necessary for this purpose, even though under such circumstances the level may be substantially less than the level that produces peaks of frequent recurrence at 85 percent. Maximum modulation levels for AM stations must not exceed 100 percent on negative peaks of frequent recurrence, or 125 percent on positive peaks at any time. AM stations transmitting stereophonic programs must not exceed the maximum stereophonic transmission signal modulation specifications of the stereophonic system in use. For AM stations transmitting telemetry signals for remote control or automatic transmission system operations, the amplitude of modulation of the carrier by the use of subaudible tones must not be higher than necessary to effect reliable and accurate data transmission and may not, in any case, exceed 6 percent. If a limiting or compression amplifier is employed to maintain modulation levels, precaution must be taken so as not to substantially alter the dynamic characteristics of programs.
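The stepwise emission limits just described reduce to a simple lookup. The sketch below transcribes them into Python for illustration; it follows the numbers quoted above (after FCC Rules 73.44) and is not an authoritative rule implementation.

```python
# Sketch of the AM emission limits summarized above; the numbers are
# transcribed from the text, not an authoritative FCC rule lookup.
import math

def am_required_attenuation_db(offset_khz, carrier_power_w):
    """Minimum attenuation below the unmodulated carrier at a given offset."""
    if offset_khz < 10.2:
        return 0.0                      # inside the occupied bandwidth
    if offset_khz <= 20:
        return 25.0
    if offset_khz <= 30:
        return 35.0
    if offset_khz <= 60:
        return 5.0 + 1.0 * offset_khz   # [5 + 1 dB/kHz of offset]
    if offset_khz <= 75:
        return 65.0
    if carrier_power_w < 158:
        return 65.0                     # low-power exception
    return min(43.0 + 10 * math.log10(carrier_power_w), 80.0)

print(am_required_attenuation_db(45, 50_000))    # 50.0 dB at 45 kHz
print(am_required_attenuation_db(100, 50_000))   # 80.0 dB (lesser of the two)
```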


AM Stereophonic Transmission

The FCC has never issued a technical standard for AM stereo transmission, but rather chose to adopt a market-driven approach wherein several competing systems were allowed to coexist. Only two systems survived this competition, Motorola’s C-Quam and Kahn’s ISB (independent sideband). The systems are incompatible with each other, but both are mono compatible.

C-Quam uses a quadrature phase modulation scheme wherein the main amplitude-modulated carrier contains the left-plus-right channel information, thus ensuring monaural receiver compatibility. The left-minus-right stereo information is transmitted on two subcarriers, phase modulated in quadrature, which are recovered in the C-Quam receiver and combined with the main carrier modulation in a matrix, resulting in separate left and right channels that are then amplified and fed to speakers.

The ISB system essentially uses the redundant characteristics of normal full-carrier, double-sideband amplitude modulation to provide left and right channels. All of the information being transmitted by such a system can be recovered, using a suitable receiver, from either the upper or the lower sideband, with or without the carrier. The ISB system therefore simply transmits the left channel on the lower sideband and the right channel on the upper sideband. The carrier is not suppressed, so as to ensure compatibility with monaural receivers, which recover both channels additively to produce monaural audio. ISB receivers recover the sidebands independently, routing left and right channels to the corresponding amplifiers. However, AM stereo failed in the marketplace.

Frequency Modulation Broadcasting

Definitions are given on the accompanying CD-ROM. FM broadcast stations must maintain the bandwidth occupied by their emissions in accordance with the FCC specification detailed below. If harmful interference to other authorized stations occurs, the problem must be corrected promptly or the station may be forced to cease operation.

1. Any emission appearing on a frequency removed from the carrier by between 120 kHz and 240 kHz inclusive must be attenuated at least 25 dB below the level of the unmodulated carrier. Compliance with this requirement will be deemed to show the occupied bandwidth to be 240 kHz or less.
2. Any emission appearing on a frequency removed from the carrier by more than 240 kHz and up to and including 600 kHz must be attenuated at least 35 dB below the level of the unmodulated carrier.
3. Any emission appearing on a frequency removed from the carrier by more than 600 kHz must be attenuated at least 43 + 10 log10(power, in watts) dB below the level of the unmodulated carrier, or 80 dB, whichever is the lesser attenuation.

Preemphasis shall not be greater than the impedance-frequency characteristic of a series inductance-resistance network having a time constant of 75 µs. The percentage of modulation is to be maintained at as high a level as is consistent with good quality of transmission and good broadcast service. Generally, the modulation should not be less than 85 percent on peaks of frequent recurrence, but where lower modulation levels may be required to avoid objectionable loudness or to maintain the dynamic range of the program material, the degree of modulation may be reduced to whatever level is necessary for this purpose, even though under such circumstances the level may be substantially less than the level that produces peaks of frequent recurrence at 85 percent.
Maximum modulation levels must not exceed 100 percent on peaks of frequent recurrence, referenced to 75-kHz deviation. However, stations providing subsidiary communications services using subcarriers concurrently with the broadcasting of stereophonic or monophonic programs may increase the peak modulation deviation as follows:

1. The total peak modulation may be increased 0.5 percent for each 1.0 percent of subcarrier injection modulation.
2. In no event may the modulation of the carrier exceed 110 percent (82.5-kHz peak deviation).

If a limiting or compression amplifier is employed to maintain modulation levels, precaution must be taken so as not to substantially alter the dynamic characteristics of programs. Modern audio processing, often using digital circuitry, is able to significantly increase the density of modulation without adverse impairment to the audio.
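A minimal sketch of the deviation bookkeeping above: 100 percent modulation corresponds to 75-kHz deviation, each 1 percent of subcarrier injection allows 0.5 percent more total modulation, and the total is capped at 110 percent (82.5 kHz).

```python
# Sketch of the FM peak-deviation rule described above; illustrative only.
def fm_peak_deviation_khz(subcarrier_injection_pct):
    total_pct = min(100.0 + 0.5 * subcarrier_injection_pct, 110.0)
    return 75.0 * total_pct / 100.0

print(fm_peak_deviation_khz(0))    # 75.0 kHz with no subcarriers
print(fm_peak_deviation_khz(10))   # 78.75 kHz at 10% injection
print(fm_peak_deviation_khz(30))   # capped at 82.5 kHz
```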


Multiplex Stereo

FM broadcast stations may transmit stereophonic programming via a compatible multiplex system. This system uses a matrix to derive sum (left + right) and difference (left – right) signals from the applied audio. The monaural and stereophonic frequency response is 50 to 15,000 Hz. The sum signal is used to frequency modulate the main carrier; a monaural receiver will demodulate this main carrier into an audio signal containing both left and right signals, so no information is lost. The difference signal is used to modulate a double-sideband stereo subcarrier operating at 38 kHz. This subcarrier, which must be suppressed to a level of less than 1 percent of the main carrier, is developed as a phase-locked second harmonic of the 19,000 ± 2 Hz stereo “pilot” signal, which modulates the main carrier between the limits of 8 and 10 percent. The pilot signal is used by stereophonic receivers to enable the stereo demodulating circuits and, almost always, some kind of stereo mode indicator. The stereo demodulator provides a difference signal that can then be combined with the sum signal to regenerate the original left and right audio signals at the receiver output.

Because of the “triangular” shape of the FM noise spectrum, the effective noise floor of the 38-kHz difference subcarrier is higher than that of the baseband audio frequencies. The result is that signal-to-noise ratios are somewhat poorer for a stereo transmission system than for a monaural system. (This is the reason for the use of a companded difference channel in the BTSC television stereo system.)
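For illustration, here is a minimal numpy sketch of the composite baseband just described: the sum channel at baseband, a 19-kHz pilot, and the difference channel on a suppressed 38-kHz subcarrier. The injection percentages are typical assumptions, not values given in the text, and a real encoder would add preemphasis and filtering.

```python
# Minimal sketch of an FM stereo composite baseband; illustrative only.
import numpy as np

fs = 192_000                        # sample rate high enough for 53-kHz content
t = np.arange(fs) / fs              # one second of time samples
left = np.sin(2 * np.pi * 440 * t)  # placeholder program audio
right = np.sin(2 * np.pi * 1000 * t)

pilot = np.cos(2 * np.pi * 19_000 * t)
subcarrier = np.cos(2 * np.pi * 38_000 * t)   # phase-locked 2nd harmonic of pilot

sum_ch = 0.5 * (left + right)       # L + R, heard by monaural receivers
diff_ch = 0.5 * (left - right)      # L - R, rides the 38-kHz DSB-SC subcarrier

# Assumed injection levels: ~45% each for sum and difference, ~9% pilot.
composite = 0.45 * sum_ch + 0.45 * (diff_ch * subcarrier) + 0.09 * pilot
```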

Subsidiary Communications Authorization (SCA)

The FCC may issue a subsidiary communications authorization to an FM broadcast station so that the station may provide limited types of subsidiary services on a multiplex basis. Permissible uses fall within the following categories:

1. Transmission of programs or data that are of a broadcast nature but are of interest primarily to limited segments of the public wishing to subscribe thereto. Examples include background music, storecasting, detailed weather forecasting, real-time stock quotations, special time signals, and other material of a broadcast nature expressly designed and intended for business, professional, educational, religious, trade, labor, agricultural, or other groups.
2. Transmission of signals that are directly related to the operation of FM broadcast stations, i.e., relaying broadcast material to other FM and standard broadcast stations, remote cueing and order circuits, remote control telemetry, and similar uses.

SCA operations may be conducted without time restriction so long as the main channel is programmed simultaneously. SCA subcarriers must be frequency modulated and are restricted to the range of 20 to 75 kHz, unless the station is also broadcasting stereophonically, in which case the restriction is 53 to 75 kHz.

Digital Audio Radio Service (DARS)

In 1997, after a long rule-making process, the FCC awarded DARS spectrum to the two winners of an auction. XM Satellite Radio and Sirius Radio each inaugurated pay-to-listen, subscription, satellite-delivered services in 2001 with no free-to-air component, so they do not fit the definition of broadcasting and will not be discussed here. Currently there is no free digital radio broadcasting available in the United States. The FCC continues to consider various approaches to an analog-to-digital transition for the incumbent terrestrial radio broadcasters, but no system has yet been deployed as of 2002.

Analog NTSC Television Broadcasting

The accompanying CD-ROM includes definitions.


FIGURE 22.1.1 Visual amplitude characteristic.

Visual Transmission Systems

Transmission standards are:

1. The width of the television broadcast channel is 6 MHz.
2. The visual carrier frequency is nominally 1.25 MHz above the lower boundary of the channel.
3. The aural center frequency is 4.5 MHz higher than the visual carrier frequency.
4. The visual transmission amplitude characteristic is generally required to conform to that shown in Fig. 22.1.1.
5. The chrominance subcarrier frequency is 63/88 times precisely 5 MHz (3.57954545 . . . MHz). The tolerance is ±10 Hz, and the rate of frequency drift must not exceed 0.1 Hz per second (cycles per second squared).
6. For monochrome and color transmissions the number of scanning lines per frame is 525, interlaced two to one in successive fields. The horizontal scanning frequency is 2/455 times the chrominance subcarrier frequency; this corresponds nominally to 15,750 Hz (with an actual value of 15,734.264 ± 0.044 Hz). The vertical scanning frequency is 2/525 times the horizontal scanning frequency; this corresponds nominally to 60 Hz (the actual value is 59.94 Hz). For monochrome transmissions only, the nominal values of line and field frequencies may be used.
7. The aspect ratio of the transmitted television picture is four units horizontally to three units vertically.
8. During active scanning intervals, the televised scene is scanned from left to right horizontally and from top to bottom vertically, at uniform velocities.
9. A carrier is modulated within a single television channel for both picture and synchronizing signals.
10. A decrease in initial light intensity causes an increase in radiated power (negative transmission).
11. The reference black level is always represented by a definite carrier level.


12. The blanking level is transmitted at 75 ± 2.5 percent of the peak carrier level.
13. The reference white level of the luminance signal is 12.5 ± 2.5 percent of the peak carrier level.
14. It is customary to employ horizontal antenna polarization. However, circular or elliptical polarization may be employed if desired, in which case clockwise (right-hand) rotation, as defined in IEEE Standard Definition 42A65-3E2, and transmission of the horizontal and vertical components in time and space quadrature shall be used. For either omnidirectional or directional antennas the licensed effective radiated power of the vertically polarized component may not exceed the licensed effective radiated power of the horizontally polarized component. For directional antennas, the maximum effective radiated power of the vertically polarized component must not exceed the maximum effective radiated power of the horizontally polarized component in any specified horizontal or vertical direction.
15. The effective radiated power of the aural transmitter must not exceed 22 percent of the peak radiated power of the visual transmitter, but may be less. Typically, 10 percent aural power is used.
16. The peak-to-peak variation of transmitter output within one frame of video signal due to all causes, including hum, noise, and low-frequency response, measured at both scanning synchronizing peak and blanking level, must not exceed 5 percent of the average scanning synchronizing peak signal amplitude.
17. The reference black level must be separated from the blanking level by the setup interval, which shall be 7.5 ± 2.5 percent of the video range from blanking level to the reference white level.
18. For monochrome transmission, the transmitter output should vary in substantially inverse logarithmic relation to the brightness of the subject.
19. The color picture signal consists of a luminance component transmitted as amplitude modulation of the picture carrier and a simultaneous pair of chrominance components transmitted as the amplitude-modulation sidebands of a pair of suppressed subcarriers in quadrature.
20. The radiated chrominance subcarrier must vanish on the reference white of a transmitted scene.

Visual transmitter performance:

1. The field strength or voltage of the lower sideband must not be greater than –20 dB for a modulating frequency of 1.25 MHz or greater and, in addition, for color, must not be greater than –42 dB for a modulating frequency of 3.579545 MHz (the color subcarrier frequency). For both monochrome and color, the field strength or voltage of the upper sideband must not be greater than –20 dB for a modulating frequency of 4.75 MHz or greater. For stations operating on Channels 15 to 69 and employing a transmitter delivering maximum peak visual power output of 1 kW or less, the field strength or voltage of the upper and lower sidebands may depart from the visual amplitude characteristic by no more than the following amounts:

• 2 dB at 0.5 MHz below visual carrier frequency
• 2 dB at 0.5 MHz above visual carrier frequency
• 2 dB at 1.25 MHz above visual carrier frequency
• 3 dB at 2.0 MHz above visual carrier frequency
• 6 dB at 3.0 MHz above visual carrier frequency
• 12 dB at 3.5 MHz above visual carrier frequency
• 8 dB at 3.58 MHz above visual carrier frequency (for color transmission only)

The field strength or voltage of the upper and lower sidebands must not exceed a level of –20 dB for a modulating frequency of 4.75 MHz or greater. If interference to the reception of other stations is caused by out-of-channel lower sideband emission, the technical requirements applicable to stations operating on Channels 2 to 13 must be met.

2. The attenuation characteristics of the visual transmitter must be measured by application of a modulating signal to the transmitter input terminals in place of the normal composite television video signal. The signal applied will ordinarily be a composite signal composed of a synchronizing signal to establish peak output voltage plus a variable-frequency sine-wave voltage occupying the interval between synchronizing pulses. (The “synchronizing signal” referred to in this section means either a standard synchronizing waveform or


any pulse that will properly set the peak.) The axis of the sine wave in the composite signal observed in the output monitor should be maintained at an amplitude 0.5 of the voltage at synchronizing peaks. The amplitude of the sine-wave input must be held at a constant value. This constant value should be such that at no modulating frequency does the maximum excursion of the sine wave, observed in the composite output signal monitor, exceed the value 0.75 of peak output voltage. The amplitude of the 200-kHz sideband should be measured and designated 0 dB as a basis for comparison. The modulating signal frequency is then varied over the desired range and the field strength or signal voltage of the corresponding sidebands measured. As an alternate method of measuring, in those cases in which the automatic dc insertion can be replaced by manual control, the above characteristic may be taken by the use of a video sweep generator and without the use of pedestal synchronizing pulses. The dc level should be set for midcharacteristic operation.

3. A sine wave, introduced at those terminals of the transmitter which are normally fed the composite color picture signal, should produce a radiated signal having an envelope delay, relative to the average envelope delay between 0.05 and 0.20 MHz, of 0 µs up to a frequency of 3.0 MHz, and then linearly decreasing to 4.18 MHz so as to be equal to –0.17 µs at 3.58 MHz. The tolerance on the envelope delay is ±0.05 µs at 3.58 MHz. The tolerance increases linearly to ±0.1 µs down to 2.1 MHz, and remains at ±0.1 µs down to 0.2 MHz. (Tolerances for the interval of 0.0 to 0.2 MHz are not specified at the present time.) The tolerance should also increase linearly to ±0.1 µs at 4.18 MHz.

4. The rate of change of the frequency of recurrence of the leading edges of the horizontal synchronizing signals should not be greater than 0.15 percent per second, the frequency to be determined by an averaging process carried out over a period of not less than 20 nor more than 100 lines, such lines not to include any portion of the blanking interval.

Requirements applicable to both visual and aural transmitters are:

1. Automatic means must be provided in the visual transmitter to maintain the carrier frequency within ±1 kHz of the authorized frequency; automatic means must also be provided in the aural transmitter to maintain the carrier frequency 4.5 MHz above the actual visual carrier frequency within ±1 kHz.
2. The transmitters should be equipped with suitable indicating instruments for the determination of operating power and with other instruments necessary for proper adjustment, operation, and maintenance of the equipment.
3. Adequate provision must be made for varying the output power of the transmitters to compensate for excessive variations in line voltage or for other factors affecting the output power.
4. Adequate provision should be made in all component parts to avoid overheating at the rated maximum output powers.
5. The construction, installation, and operation of broadcast equipment is expected to conform with all applicable local, state, and federally imposed safety regulations and standards, enforcement of which is the responsibility of the issuing regulatory agency.
6. Spurious emissions, including radio-frequency harmonics, must be maintained at as low a level as the state of the art permits.
As measured at the output terminals of the transmitter (including harmonic filters, if required), all emissions removed in frequency in excess of 3 MHz above or below the respective channel edge shall be attenuated no less than 60 dB below the visual transmitter power.

7. If a limiting or compression amplifier is used in conjunction with the aural transmitter, due care should be exercised because of the preemphasis in the transmitting system.
8. TV broadcast stations operating on Channel 14 and Channel 69 must take special precautions to avoid interference to adjacent-spectrum land mobile radio service facilities.

Vertical Interval Signals

The interval beginning with line 17 and continuing through line 20 of the vertical blanking interval of each field may be used for the transmission of test signals, cue and control signals, and identification signals. Test signals may include signals designed to check the performance of the overall transmission system or its individual components. Cue and control signals shall be related to the operation of the TV broadcast station. Identification signals may be transmitted to identify the broadcast material or its source, and the date and time of its origination.


Modulation of the television transmitter by such signals must be confined to the area between the reference white level and the blanking level, except where test signals include chrominance subcarrier frequencies, in which case positive excursions of chrominance components may exceed reference white, and negative excursions may extend into the synchronizing area. In no case may the modulation excursions produced by test signals extend beyond peak-of-sync, or to zero carrier level. The use of such signals must not result in significant degradation of the program transmission of the television broadcast station, nor produce emission outside of the frequency band occupied for normal program transmissions. Vertical interval signals may not be transmitted during the portion of each line devoted to horizontal blanking.

Closed Captioning

Line 21, in each field, may be used for the transmission of a program-related data signal which, when decoded, provides a visual depiction of information simultaneously being presented on the aural channel (closed captions). Such data signal shall conform to the format described in Fig. 22.1.2 and may be transmitted during all periods of regular operation. On a space-available basis, Line 21 in field 2 may also be used for text-mode data and extended data service information. The signals on fields 1 and 2 would ordinarily be distinct data streams, for example, to supply captions in different languages or at different reading levels.

The data signal shall be coded using a non-return-to-zero (NRZ) format and shall employ standard ASCII 7-bit-plus-parity character codes. (Note: For more information on data formats and specific data packets, see EIA-608, “Line 21 Data Services for NTSC,” available from the Electronic Industries Association.) At times when Line 21 is not being used to transmit a program-related data signal, data signals that are not program related may be transmitted, provided the same data format is used and the information to be displayed is of a broadcast nature.
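Each Line 21 byte is a 7-bit ASCII code plus a parity bit; EIA-608 specifies odd parity (a detail taken from that standard rather than from the text above). A minimal sketch of how a caption encoder might form each byte:

```python
# Sketch of Line 21 byte formation: 7-bit ASCII plus an odd-parity MSB,
# per EIA-608 (assumption noted in the lead-in); illustrative only.
def line21_byte(ch):
    code = ord(ch) & 0x7F              # keep the 7-bit ASCII code
    ones = bin(code).count("1")
    parity = 0x80 if ones % 2 == 0 else 0   # set MSB to make the total odd
    return code | parity

for c in "CC1":
    print(f"{c!r} -> 0x{line21_byte(c):02X}")
```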

FIGURE 22.1.2 Line 21 (closed captioning) data format.


Test Signals

Vertical interval test signals include the following:

Composite: Includes a white bar to detect line-time distortions (such as over- or underdamped response) and a 12.5T pulse, which enables measurement of group (envelope) delay.

Multiburst: Contains packets of various frequencies, which give an indication of amplitude-versus-frequency response.

Modulated stairstep: Used to determine the differential gain and phase response of a transmission system.

Line 19, in each field, may be used only for the transmission of the Ghost-Canceling Reference (GCR) signal described in OET Bulletin No. 68. The vertical interval reference (VIR) signal formerly permitted on Line 19, and described in Fig. 22.1.3, may be transmitted on any of Lines 10 to 16. The GCR signal, shown in Fig. 22.1.4, serves as a reference for adaptive filter delay networks in receivers and demodulators (see also Fig. 22.1.5). By correcting the network response to reconstitute a proper GCR in the receiver, the active picture area is also corrected, reducing the effects of multipath reception and “ghosts.”

Aural Transmission Systems

Transmission standards are as follows:

1. The modulating signal for the main channel shall consist of the sum of the stereophonic (biphonic, quadraphonic, and the like) input signals.
2. The instantaneous frequency of the baseband stereophonic subcarrier must at all times be within the range of 15 to 120 kHz. Either amplitude or frequency modulation of the stereophonic subcarrier may be used.

FIGURE 22.1.3 Vertical interval reference signal.


FIGURE 22.1.4 The 525-line system ghost canceling reference signal (A). The spectrum of the 525-line system GCR (B).

3. One or more pilot subcarriers between 16 and 120 kHz may be used to switch a TV receiver between the stereophonic and monophonic reception modes or to activate a stereophonic audio indicator light, and one or more subcarriers between 15 and 120 kHz may be used for any other authorized purpose; except that stations employing the BTSC system of stereophonic sound transmission and audio processing may transmit a pilot subcarrier at 15,734 Hz ±2 Hz. Other methods of multiplex subcarrier or stereophonic aural transmission systems must limit energy at 15,734 Hz ±20 Hz, to no more than ±0.125 kHz aural carrier deviation.

FIGURE 22.1.5 Ghost canceling reference block diagram.


4. Aural baseband information above 120 kHz must be attenuated 40 dB referenced to 25-kHz main channel deviation of the aural carrier.
5. Multiplex subcarrier or stereophonic aural transmission systems must be capable of producing, and must not exceed, ±25 kHz main channel deviation of the aural carrier.
6. The arithmetic sum of nonmultiphonic baseband signals between 15 and 120 kHz must not exceed ±50 kHz deviation of the aural carrier.
7. Total modulation of the aural carrier must not exceed ±75 kHz.

Aural transmitter performance:

1. Preemphasis should be employed as closely as practicable in accordance with the impedance-frequency characteristic of a series inductance-resistance network having a time constant of 75 µs.
2. If a limiting or compression amplifier is employed, care should be exercised in its connection in the circuit because of the use of preemphasis in the transmitting system.
3. For the aural transmitter of TV broadcast stations, a frequency deviation of ±25 kHz is defined as 100 percent modulation (except in the case of BTSC MTS).

BTSC MTS (Stereo)

The Broadcast Television Systems Committee (BTSC) Multichannel Television Sound (MTS) system was developed by the broadcast industry to provide a standardized approach to the addition of aural subcarriers to the television broadcast sound carrier for the purpose of transmitting multiple sound channels. The system provides for stereo, a second audio program (SAP), and a professional cueing (PRO) channel. While the FCC has not enacted MTS as an absolute standard, it has promulgated regulations that protect the technical parameters so that manufacturers, broadcasters, and the public can expect a consistent multichannel television sound service. The objectives of the BTSC system, which was adopted in 1984, are as follows:

1. Compatibility with existing monophonic television receivers
2. Simultaneous stereo and SAP capability
3. Provision for other professional or station-use subcarriers
4. Noise reduction in both the stereo and SAP channels
5. Increased aural carrier deviation to accommodate the new subcarriers

The BTSC MTS baseband spectrum is shown in Fig. 22.1.6. The BTSC monophonic (L + R sum) channel is identical to the normal non-BTSC baseband aural signal, ensuring compatibility with existing nonstereo television receivers. The stereo subcarrier is very similar to that

FIGURE 22.1.6 BTSC baseband spectrum.


FIGURE 22.1.7 BTSC transmission block diagram.

used in FM broadcasting, except that it operates at a frequency of 31,468 Hz (twice the NTSC horizontal scanning rate of 15,734 Hz, or “H”). This subcarrier is a double-sideband, suppressed-carrier, amplitude-modulated signal. An unmodulated pilot signal at 15,734 Hz, derived from the associated video horizontal synchronizing frequency (H), is also transmitted; it is used by the receiver to reinsert, at its second harmonic (2H), the reference carrier for the stereo subcarrier. Special provisions are made in transmitting and receiving equipment to ensure that the H components from the visual signal do not interfere with the H sound pilot. In addition, the BTSC stereo subcarrier is modulated with a difference signal (L – R) that is companded, unlike FM broadcast practice, to provide improved signal-to-noise performance amidst the noise, multipath distortion, and intercarrier “buzz” often generated by television transmitting and receiving systems. The compander circuits employ a combination of fixed preemphasis, “spectral” compression, and amplitude compression. Expander circuits found in BTSC-compatible receivers restore the proper dynamic range and stereo image while reducing undesirable artifacts of the transmission process.

The Second Audio Program (SAP) channel is located at 5H (78,670 Hz) in the aural baseband. It is frequency modulated with audio, which may or may not be related to the main program. SAP audio is companded using the same system as the stereo subcarrier, which results in fairly good fidelity at the receiver. Typical uses include the simulcasting of a second-language version of the main program, aural “captioning” of the main program to benefit the visually impaired, or transmission of other services, such as local weather information.

The remaining BTSC subcarrier is called the Professional (PRO) channel and is centered at 6.5H (102,271 Hz). It is frequency modulated with a very narrow deviation of only ±3 kHz. It may be used to transmit voice or data. A typical BTSC transmission block diagram is shown in Fig. 22.1.7 and a typical BTSC receiver-decoder is shown in Fig. 22.1.8.
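All of the BTSC carriers are integer or half-integer multiples of the horizontal rate H, which itself derives from the color subcarrier defined earlier in this section. A quick sketch of the arithmetic; the results differ from the quoted values by a hertz or two because the text rounds H to 15,734 Hz.

```python
# Derivation of the BTSC carrier frequencies from the NTSC color subcarrier;
# illustrative arithmetic only.
f_sc = 5_000_000 * 63 / 88        # color subcarrier, ~3.579545 MHz
H = f_sc * 2 / 455                # horizontal rate, ~15,734.27 Hz

print(round(H, 2))                # pilot: 15734.27 Hz
print(round(2 * H))               # stereo subcarrier (2H): ~31469 Hz
print(round(5 * H))               # SAP channel (5H): ~78671 Hz
print(round(6.5 * H))             # PRO channel (6.5H): ~102273 Hz
```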

FIGURE 22.1.8 BTSC receiver-decoder block diagram.


Because BTSC specifications are protected, but not mandated, by the FCC, other subcarriers are permitted to be transmitted in the aural baseband between 16 and 120 kHz. The only restriction is that they must not interfere with BTSC or monophonic receiver operation. Amplitude or frequency modulation carrying analog or digital signals may be used.

BTSC stereo has several significant advantages over the simpler FM broadcast stereo system in that there is no “stereo noise penalty,” owing to the use of companding, and there is no required reduction in the allowable deviation of the monophonic signal. The FCC does not require stations transmitting BTSC MTS to meet any particular audio performance standards; rather, a market-driven approach has been taken. Nevertheless, BTSC MTS is capable of excellent performance.

The most significant impairments to BTSC performance result from crosstalk and incidental carrier phase modulation (ICPM) in the transmission system. Crosstalk can occur due to nonlinearities in the TV aural exciter, high-power RF amplifier stages, combining networks, or the antenna system. ICPM results from phase modulation of the visual carrier that is independent of the aural carrier. Since most receivers employ intercarrier detection of the sound carrier, ICPM will cause a “buzz” in the sound unless kept to levels below 3° to 5°. Excessive ICPM will show up in the demodulated video as differential phase shift.

ATSC Digital Television Systems

ATSC digital transmission systems definitions are included on the accompanying CD-ROM. The ATSC Digital Television Standard is intended to provide a system capable of transmitting high-quality video, audio, and ancillary data over a single 6-MHz channel. The system is specified to deliver 19.39 Mbps of throughput in a 6-MHz terrestrial broadcasting channel. This means that encoding a high-resolution video source containing about four times as much detail as conventional television (NTSC) requires a bit-rate reduction by a factor of up to 50. To achieve this bit-rate reduction, the system is designed to exploit complex video and audio compression technology so that the resultant payload may be transmitted in a standard 6-MHz-wide television channel. The video encoding relies on MPEG-2 compression, and audio is encoded using multichannel Dolby AC-3. Figure 22.1.9 depicts a typical ATSC encoder. The resultant transport stream is then randomized and transmitted using 8VSB (8-level vestigial sideband) modulation. The transmission specifications are shown in Table 22.1.2. Figure 22.1.10 shows the basic arrangement of an ATSC decoder. Figures 22.1.11 and 22.1.12 depict block diagrams of an 8VSB transmitter and receiver, respectively.

The objective is to minimize the amount of data required to represent the video image sequence and its associated audio: to represent the video, audio, and data sources with as few bits as possible while preserving sufficient quality to recreate those sources as accurately as required for the particular application. While the transmission subsystems described in the ATSC Standard are designed specifically for terrestrial transmission, the objective is that the video, audio, and service multiplex/transport subsystems be useful in

FIGURE 22.1.9 Grand Alliance encoder.


TABLE 22.1.2 ATSC Transmission Specifications

Transmission parameter          Terrestrial mode
Channel bandwidth               6 MHz
Excess bandwidth                11.5 percent
Symbol rate                     10.76 Msymbols/s
Bits per symbol                 3
Trellis FEC                     2/3 rate
Reed-Solomon FEC                (208,188) T = 10
Segment length                  832 symbols
Segment sync                    4 symbols per segment
Frame sync                      1 per 313 segments
Payload data rate               19.3 Mbit/s
NTSC co-channel rejection       NTSC rejection filter in receiver
Pilot power contribution        0.3 dB
C/N threshold                   14.9 dB

FIGURE 22.1.10 ATSC decoder.

FIGURE 22.1.11 ATSC VSB transmitter.


FIGURE 22.1.12 ATSC VSB receiver.

other applications, including those not presently identified. Conditional access provisions are included in the standard to allow future subscription and data services.

The entire process is so efficient that once the conversion of broadcast television from NTSC analog to ATSC digital is complete, part of the present television spectrum will be reallocated away from broadcast service: UHF television channels 52 to 69 will ultimately be assigned to other, nonbroadcast services. While the FCC anticipates that this will occur in 2006, the rollout of ATSC digital television in the United States currently underway is not progressing as fast as originally forecast, and some pundits predict the transition will continue at least through 2015. (See also Table 22.1.3.)
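As a back-of-envelope check on the 19.39-Mbps figure, the sketch below multiplies the 8VSB symbol rate through the segment structure. The segment constants (832 symbols per segment, one field sync segment per 313, 188-byte packets) are from ATSC A/53.

```python
# Back-of-envelope derivation of the ATSC payload data rate; illustrative only.
symbol_rate = 10.762e6                       # 8VSB symbols per second
segments_per_s = symbol_rate / 832           # 832 symbols per data segment
data_segments = segments_per_s * 312 / 313   # exclude field sync segments
payload_bps = data_segments * 188 * 8        # one 188-byte packet per segment

print(round(payload_bps / 1e6, 2))           # ~19.39 Mbps
```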

Short Wave Broadcasting

Definitions are given on the accompanying CD-ROM. Transmission system performance: the construction, installation, and performance of an international broadcasting transmitter system shall be in accordance with good engineering practice, and spurious emissions must be effectively limited.

1. Any emission appearing on a frequency removed from the carrier frequency by between 6.4 kHz and 10 kHz inclusive must be attenuated at least 25 dB below the level of the unmodulated carrier.

TABLE 22.1.3 ATSC Transport Specifications

Transport parameter        Specification
Multiplex technique        MPEG-2 systems layer
Packet size                188 bytes
Packet header              4 bytes including sync
Number of services
Conditional access         Payload scrambled on service basis
Error handling             4-bit continuity counter
Prioritization             1 bit/packet
System multiplex           Multiple program capability described in PSI stream


2. Any emission appearing on a frequency removed from the carrier frequency by more than 10 kHz and up to and including 25 kHz must be attenuated at least 35 dB below the level of the unmodulated carrier.
3. Any emission appearing on a frequency removed from the carrier frequency by more than 25 kHz must be attenuated at least 80 dB below the level of the unmodulated carrier.
4. In the event spurious emissions from an international shortwave broadcasting transmitter cause harmful interference to other stations or services, additional steps may be required to eliminate the interference.

The transmitter must be equipped with automatic frequency control apparatus so designed and constructed that it is capable of maintaining the operating frequency within 0.0015 percent of the assigned frequency. No international broadcast station will be authorized to install, or be licensed for operation of, transmitter equipment with a rated carrier power of less than 50 kW.

The percentage of modulation should be maintained as high as possible consistent with good quality of transmission and good broadcast practice. In no case should it exceed 100 percent on positive or negative peaks of frequent recurrence, and it should not be less than 85 percent on peaks of frequent recurrence. The highest allowable modulating frequency is 5 kHz.

BROADCASTING EQUIPMENT

In the following sections, general characteristics and typical examples of some current broadcasting equipment are presented. A brief theory of operation and a block diagram are provided.

Medium Wave AM Transmission Systems

Medium wave transmitters are now generally solid state up to about 10 kW. Higher-power transmitters may still employ one or more vacuum-tube stages to achieve power levels of 100 kW or more. The maximum power level allowed in the United States is 50 kW, but other jurisdictions may authorize power levels exceeding 1000 kW. Such transmitters are usually fixed-tuned to operate on only one frequency.

Figure 22.1.13 shows a block diagram of a Continental Electronics 419F 250-kW power-tube transmitter. This transmitter employs low-level solid-state audio and RF stages, which drive an unmodulated intermediate-level tube RF power amplifier (IPA). The output of this IPA then drives a pair of large power-amplifier tubes arranged in the efficient “Doherty” configuration, where modulation is applied with a 90° phase shift to the screens. One tube is considered to be the “carrier” tube and contributes most of the unmodulated power output. The other tube is called the “peak” tube and produces no power in the absence of modulation. On 100 percent negative modulation peaks, the carrier tube cuts off, reducing the transmitter output to zero. On 100 percent (or greater) positive peaks, the carrier tube and the peak tube both contribute approximately equal amounts of power. Efficiency of this design exceeds 60 percent.

Figure 22.1.14 shows a block diagram for the Nautel AMPFET 5-kW and AMPFET 10-kW solid-state transmitters. They employ a high-efficiency MOSFET pulse-width-modulation system that modulates MOSFET RF amplifiers operating in class D to achieve an overall transmitter efficiency exceeding 72 percent. The transmitters differ mainly in the number of RF amplifier modules necessary to achieve rated power.

Modern medium wave broadcast stations typically employ one or more vertical radiators operating over a ground system (counterpoise) composed of multiple copper radials to provide either nondirectional or directional coverage patterns over the service area of interest. Directional arrays, consisting of two or more individual towers, are normally specified to protect the service areas of other stations by producing nulls in the directions to be protected. Antenna towers are normally at least one quarter wavelength in height, though shorter towers may be electrically lengthened by means of capacitive or inductive loading. Towers may be self-supporting or guyed. Each tower normally has an antenna tuning unit at its base, which serves to match the tower’s feed-point impedance to that of the transmission line. Directional antenna systems will also have a phasing network that delivers power with the appropriate amplitude and phase shift to each tower so as to achieve the desired coverage pattern (Fig. 22.1.15).
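The Doherty behavior described above follows directly from AM envelope arithmetic: with modulation index m, the instantaneous envelope power swings between P0(1 − m)^2 and P0(1 + m)^2. A small illustrative sketch, with the sample power level assumed rather than taken from the text:

```python
# AM envelope power arithmetic; illustrative only, sample values assumed.
def envelope_power_kw(carrier_kw, m):
    """(trough, crest) envelope power for modulation index m."""
    return carrier_kw * (1 - m) ** 2, carrier_kw * (1 + m) ** 2

# A 50-kW carrier at 100% modulation: the trough falls to zero (the carrier
# tube cuts off) and the crest reaches four times carrier power.
print(envelope_power_kw(50, 1.0))    # (0.0, 200.0)
print(50 * (1 + 1.25) ** 2)          # 253.125 kW on a 125% positive peak
```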


FIGURE 22.1.13 Continental Electronics Model 419F 250-kW shortwave transmitter.


FIGURE 22.1.14 Nautel AMPFET solid-state AM broadcast transmitter.

FIGURE 22.1.15 Directional antenna system block diagram.

FIGURE 22.1.16 FM exciter.



FIGURE 22.1.17 FM amplifier and control circuits.

Wire antennas, such as the flat tops and dipoles used in the early days of medium wave broadcasting, are not favored today, principally because of their high angles of radiation and poor ground wave efficiency. Such characteristics result in inferior local coverage and increased interference to distant stations, especially at night.

Frequency Modulation Transmission Systems

FM transmitters consist of several main components, namely:

1. An exciter (Fig. 22.1.16), which converts the audio baseband to frequency-modulated RF. The exciter determines the key qualities of the transmitted signal. All modern exciters employ direct modulation.
2. Power amplifier stages (Fig. 22.1.17), which boost the output of the exciter to the full rated output of the transmitter. Such stages usually operate class C, a nonlinear though highly efficient mode.
3. Power supplies, which provide the necessary ac and dc voltages to operate the transmitter.
4. Control circuits (Fig. 22.1.17), which allow the transmitter to be turned on and off, allow the power to be adjusted, and protect the transmitter against damage from overload.
5. Low-pass filtering, to suppress undesired harmonic frequencies.

FM broadcast stations may use antennas with either horizontal or circular polarization. The advantage of circular polarization is that twice as much power may be used as compared with the horizontal-polarization case, since the FCC allows the vertical component of radiated power to be equal to or less than the horizontal component. Most FM antennas are based on dipole or loop designs. Two or more elements are usually stacked to obtain increased gain at the expense of vertical beamwidth. Since it is possible to make antennas that are very wideband relative to the width of a single FM channel, it is common in some areas to combine (or multiplex) several FM stations onto one physical antenna. This is done using large networks of three- or four-port combiners.
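Station planning around these components usually starts from the standard ERP relation, ERP equals transmitter power output less line loss plus antenna gain; this is a textbook formula assumed here rather than stated above, and the values below are hypothetical.

```python
# Standard ERP back-of-envelope for an FM plant; assumed formula and
# hypothetical values, illustrative only.
import math

def erp_kw(tpo_kw, line_loss_db, antenna_gain_db):
    return tpo_kw * 10 ** ((antenna_gain_db - line_loss_db) / 10)

# e.g., a 10-kW transmitter, 1.5 dB of line loss, and an 8-dB gain antenna:
print(round(erp_kw(10, 1.5, 8.0), 1))   # ~44.7 kW ERP
```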

Analog NTSC Television Transmission Systems

Modern VHF television transmitters are solid state at all power levels to 60 kW and more. Reliability and performance have been enhanced by the elimination of heat-producing tubes and high voltage in the amplifier stages of the transmitter. Instead, large amounts of RF power are produced by combining many


FIGURE 22.1.18 Larcan 30-kW solid-state VHF TV transmitter.


TABLE 22.1.4 Typical Solid-State VHF Transmitter Specifications

VISUAL
Type of emission: 5M45C3F
Frequency range (channels 2 to 6): 54 to 88 MHz
Frequency range (channels 7 to 13): 174 to 216 MHz
Rated power output: 250 W peak sync
Output impedance: 50 ohms
Input impedance (video): 75 ohms
Input level: 0.5 to 2 V p-p
Regulation of output (black to white picture): 1%
Variation of output (over one frame): 2%
Modulation capability: 1.0%
LF linearity: 4%
Variation in frequency response with brightness(1): ±0.3 dB
Carrier frequency stability(2): ±200 Hz
Differential phase, reference burst (12.5 to 75 percent modulation, subcarrier modulation 10%)(3): ±1°
Differential gain (12.5 to 75 percent modulation, subcarrier modulation 10%): 3%
K factor (2T pulse): 1%
Incidental carrier phase modulation: ±1°
Signal-to-noise ratio (RMS below sync level): –55 dB
Harmonic radiation: –80 dB
Spurious radiation: –60 dB
Envelope delay:
  0.2 to 2.0 MHz: ±40 ns
  at 3.58 MHz: ±30 ns
  at 4.18 MHz: ±60 ns
Sideband response:
  Carrier +200 kHz: 0 dB ref
  Carrier –0.5 MHz to +4.18 MHz: +0.5, –1 dB
  Carrier +4.75 MHz and higher: –35 dB or better
  Carrier –1.25 MHz and lower: –20 dB or better
  Carrier –3.58 MHz: –42 dB or better
  Carrier –4.5 MHz: –30 dB or better
Blanking level variation (black to white picture): 1.5%
Intermodulation distortion: –52 dB

Visual footnotes:
(1) With respect to response at midcharacteristic; measured at 85 percent and 20 percent modulation.
(2) Maximum variation over 30 days, at an ambient temperature of 0 to 45°C.
(3) Maximum variation with respect to burst.

smaller stages, each of which operates on low voltage and comparatively high current. The Larcan 30-kW transmitter shown in Fig. 22.1.18 is typical. The transmitter has been designed to operate conservatively at 30 kW peak sync visual power and 3 kW average aural power. Typical solid-state VHF transmitter specifications are given in Table 22.1.4. Ruggedly constructed, this transmitter is modular in format. Many of the modules are standard, thereby providing a very high degree of commonality in systems using transmitters of various power ratings and frequencies. The simplicity of design and use of standard, readily available components enhances serviceability. All important parameters are monitored and can be displayed on the meters built into the exciter and amplifier. This equipment is suitable for automatic or remote-control operation.



The exciter-modulator provides a fully processed, precorrected, on-channel signal. Low-level IF modulation and surface acoustic wave (SAW) filter technology are employed to ensure superior color performance and simplicity of operation. The exciter (model TEC-3V) is modular in construction and BTSC stereo compatible. In addition to the normal audio input, the exciter has two wideband inputs to accept the signals from stereo, separate audio program (SAP), and pro-channel generators. A built-in incidental carrier phase modulation (ICPM) corrector ensures optimum performance when transmitting multichannel sound or teletext.

All routine on-air adjustments are screwdriver-adjustable from the front panel, with an extender module provided for servicing and more extensive adjustments. The precorrection circuits, aural preemphasis, and SAW filters can be switched out of circuit by front-panel locking-type toggle switches. These circuits are all unity-gain, so no level adjustments are required when switching them in and out of circuit. Each IF and RF module has a BNC test point located on the front panel. These test points are fed from a directional coupler at the module output and have no effect on the module output when connected to a 50-Ω load.

The visual output of the exciter is fed to four broadband solid-state 9-kW amplifiers that require no tuning or adjustment. All of the modules in each amplifier are operated well below their maximum ratings. It can be seen from the RF flow diagram that there are four stages of amplification. The preamplification stages are high-gain, broadband, thin-film integrated-circuit amplifiers operating class A. The IPA stages also operate class A and consist of two transistors whose outputs are combined by 3-dB quadrature couplers. The driver stage consists of two amplifiers operating class A, again with their outputs combined by a 3-dB quadrature coupler; each device is a pair of push-pull FETs in a single case. The four visual final stages each consist of six modules whose outputs are combined in a six-way stripline combiner. This method of combining provides excellent isolation between modules, and the failure of one or more modules will not affect the performance or reliability of the others. To combine the four final stages, 3-dB couplers are used, with the output fed to an external diplexer. Each output module has AGC control, VSWR protection, and a test jack for monitoring purposes. The aural chain of the transmitter operates in a similar manner.

The control circuitry in this transmitter is extremely simple because no interlocking or sequential starting of power supplies is required. The transmitter is built as two parallel halves; i.e., amplifiers 1-1A and 1-1B, complete with their own power supplies and protection circuits, can be operated independently or together with amplifiers 1-2A and 1-2B. The transmitter control panel, situated below the visual and aural IPA units in cabinet 2, enables single or joint control. All control, status, and telemetry signals are brought out on a 25-pin D connector.

For UHF broadcasters, transmitters have been the subject of ongoing development ever since the inception of UHF stations in the 1950s. When the UHF band was initially allocated for television service, most stations signed on the air with lower-power tetrode and traveling-wave-tube transmitters. Later, klystrons were developed that allowed generation of much higher power at UHF frequencies.
Klystrons have the advantage of no delicate grid or large anode structure while providing extremely high gain. The problem was that, as stations increased power with klystrons, very complex and expensive transmitters were required, and power efficiency, while better than that of a tetrode, was fairly low. In the late 1970s, as oil prices rose and the cost of energy became an important factor, stations began to employ klystron pulsers, which boosted efficiency somewhat, to cut their electrical usage. At the same time, many development projects were undertaken to improve the klystron's fundamental efficiency. The MSDC (multistage depressed collector) klystron was one result: by using multiple collectors operating at graduated voltages, more electrons could be collected and efficiency increased. Sometime later, tube manufacturer Eimac developed a new type of gridded klystron, which it called the Klystrode inductive output tube (IOT). The IOT achieved even better efficiency than the MSDC klystron without the complication of multiple high-voltage power supplies. Almost all modern high-power UHF transmitters employ IOT power amplifiers. Figure 22.1.19 shows the block diagram of a typical UHF transmitter, and Table 22.1.5 gives typical UHF TV transmitter characteristics.

Antenna designs vary depending on whether the intended use is VHF or UHF television. It is generally not possible to use one antenna to cover both the UHF and VHF bands, or even to cover the entirety of either band with a single antenna. Antennas for television broadcast usually provide an omnidirectional pattern, but directional antennas are sometimes employed at UHF stations. VHF antennas use designs similar to those shown in Fig. 22.1.20; antennas similar to those shown in Fig. 22.1.21 are often used in UHF applications.


FIGURE 22.1.19 Larcan/TTC redundant UHF transmitter.


TABLE 22.1.5 Typical UHF TV Transmitter Characteristics

Vision performance
Frequency range: 470–806 MHz
Output impedance: 50 or 75 Ω
Video output: 75 Ω, 36 dB return loss
Sideband response:
  –3.58 MHz: –42 dB or better
  –1.25 MHz and below: –20 dB or better
  Carrier to –0.5 MHz: +0.5, –1.0 dB
  Carrier to +4.0 MHz: ±0.5 dB
  +4.00 to +4.18 MHz: +0.5, –2.0 dB
  +4.75 MHz and above: –40 dB or better
Variation in response with brightness: ±0.75 dB
Frequency stability: ±250 Hz (maximum, 30 days)
Modulation capability: 100%
Differential gain: 0.5 dB
Differential phase: 3°
Low-frequency nonlinearity: 1.0 dB
Envelope delay: 0.2–2.0 MHz, ±40 ns; 3.58 MHz, ±30 ns; 4.18 MHz, ±60 ns
Regulation of output: 3%
Variation of output: 2%
AM noise (rms): –55 dB
K factors: 2T, 2%; 12.5T, 3%
Spurious and harmonics: –60 dB
Incidental carrier phase modulation: ±1.5°
In-band intermodulation (common-amplification applications): –60 dB
Stereo pilot carrier protection: full compliance with FCC specification 73.682(c)(3)

Sound performance
Intercarrier frequency: +4.5 MHz
Frequency stability: ±15 Hz, phase-locked to video line
Monaural input: 600 Ω, balanced
Broadband inputs (2): 75 Ω, unbalanced
Monaural performance (±25 kHz deviation):
  Frequency response: ±0.5 dB, 50 Hz to 15 kHz
  Preemphasis: 75 µs or flat
  Distortion (with deemphasis): 0.5%
  FM noise: –60 dB
  AM noise: –55 dB
  AM synchronous noise: –40 dB
Stereo performance:
  Frequency response: ±0.5 dB, 50 Hz to 120 kHz
  FM noise: –70 dB, ±75 kHz reference
  Distortion (THD): 0.5%
  Distortion (IMD): 0.5%
  Stereo separation (equivalent mode): 40 dB
  BTSC pilot protection: –45 dB or better
  (Note: A patent-pending noise-reduction circuit is required in common-amplification systems.)

General performance
Operating ambient temperature: 0 to 50°C
Altitude: sea level to 7,500 ft (consult factory for other altitudes)
Relative humidity: 0 to 95 percent, noncondensing


FIGURE 22.1.20 Omnidirectional antenna configurations.

FIGURE 22.1.21 Traveling wave antennas.


ATSC Digital Television Transmission Systems

Digital television transmitters are similar to combined-amplification analog transmitters, except that the modulator/exciter accepts a single 19.39-Mbps digital bit stream, usually via an SMPTE 310 interface. The following stages of amplification are tuned to pass the 6-MHz-wide, broadband, noise-like spectrum of the ATSC 8VSB-modulated signal. Following the final stage(s) of amplification, a constant-impedance mask filter is normally employed to provide the stringent band-edge rejection levels required by the FCC Rules.

Digital television transmitters are typically rated by average power, instead of the peak power ratings normally ascribed to analog transmitters. Solid-state ATSC transmitters on either VHF or UHF channels may achieve power levels of up to 30 kW. UHF digital stations may require higher transmitter output power levels, up to 100 kW or more, which can currently be produced economically only by IOT transmitters, in the same manner as for high-power NTSC UHF transmitters. Such transmitters are typically assembled from multiple IOT amplifiers, each producing up to 25 kW of average digital power, electrically connected by "magic tee" combiners. Figure 22.1.22 shows a block diagram of the Harris CD3200P2 two-IOT ATSC digital transmitter, capable of 42 kW average power.

The broadband nature of the ATSC signal means that the venerable klystron, still used in some analog UHF television transmitters, is not suitable because of its relatively narrow bandwidth capability. However, that same broadband characteristic has enabled tube manufacturers to apply the multistage depressed-collector (MSDC) technology, originally developed around the klystron, to the IOT. This technology, which consists of multiple electron collectors operating at progressively lower beam voltages, results in a substantial increase in the efficiency of the conventional IOT. Figure 22.1.23 shows the internal structure of a typical MSDC IOT manufactured by L-3 Communications.

FIGURE 22.1.22 Harris Sigma ATSC transmitter block diagram.


FIGURE 22.1.23 MSDC IOT structure, typical.

BIBLIOGRAPHY

Advanced Television Systems Committee, Washington, D.C., ATSC Standard: Digital Television Standard, Revision B, August 7, 2001.
Advanced Television Systems Committee, Washington, D.C., Guide to the Use of the ATSC Digital Television Standard, October 4, 1995.
Federal Communications Commission, Fourth Report and Order Re: Advanced Television Systems and Their Impact upon the Existing Television Broadcast Service, adopted December 24, 1996.
Public Broadcasting Service, Washington, D.C., Advanced Television Transmission, 1995.
Silbergleid, M., and M. Pescatore (eds.), Guide to Digital Television, 2nd ed., 1999.
National Association of Broadcasters, Washington, D.C., Engineering Handbook, 9th ed., 1999.
Television Engineering Handbook, rev. ed., McGraw-Hill, 1992.

ON THE CD-ROM:
Frequency Channel Assignments for FM Broadcasting
Numeral Designation of Television Channels


Zone Designations for Television Broadcasting
Minimum Distance Separation Requirements for FM Broadcast Transmitters in km (mi)
Medium Wave AM Standard Broadcasting Definitions
Frequency Modulation Broadcasting Definitions
Analog Television (NTSC) Definitions
ATSC Digital Transmission System Definitions
Short Wave Broadcasting Definitions


CHAPTER 22.2

AM AND FM BROADCAST RECEIVERS
Lee H. Hoke, Jr.

AM RECEIVERS: GENERAL CONSIDERATIONS

AM broadcast receivers are designed to receive amplitude-modulated signals between 530 and 1700 kHz (566 to 177 m wavelength), with channel assignments spaced 10 kHz apart. To enhance ground-wave propagation, the radiated signals are transmitted with the electric field vertically polarized. AM broadcast transmitters are classified, according to the input power supplied to the power amplifier, from a few hundred watts up to 50 kW. The operating range of the ground-wave signal, in areas where the ground conductivity is high, is up to 200 mi for 50-kW transmitters. During the day the operating range is limited to the ground-wave coverage. At night, refraction of the radiated waves by the ionosphere causes the waves to be channeled between the ionosphere and the earth, resulting in sporadic coverage over many thousands of miles. The nighttime interference levels thus produced impose a restriction on the number of operating channels that can be used at night.

The signal-selection system is required to have a constant bandwidth (approximately 10 kHz), continuously adjustable over a 3:1 range of carrier frequencies. The difficulty of designing cascaded tuned rf amplifiers of this type has resulted in the universal use of the superheterodyne principle in broadcast receivers. A block diagram of a typical design is shown in Fig. 22.2.1. The signal is supplied by a vertical monopole (whip) antenna in automobile receivers or by a ferrite-rod loop antenna in portable and console receivers. An rf amplifier is used in most automobile designs but not in small portable models. In some receivers the local oscillator is combined with the mixer, which simplifies the rf portion of the receiver. An intermediate frequency of 455 kHz is used in portable and console receivers, while 262.5 kHz is common in automobile designs. Diode detectors have been used in discrete-component designs for detection of the i.f. signal. Push-pull class B audio power amplifiers are used to minimize current drain. A moving-coil permanent-magnet speaker is used as the output transducer.

A major improvement in AM radio sound quality occurred in 1990 with the adoption of the AMAX voluntary standard by the National Radio Systems Committee (NRSC)1 and its implementation by most radio stations. This standard established a receiver audio frequency response of not less than 50 to 7500 Hz, with limits of +1.5 dB, –3.0 dB referenced to 0 dB at 400 Hz. The receiver shall have less than 2 percent total harmonic distortion plus noise (THD+N) and shall have a deemphasis curve that complements the preemphasis now added to AM broadcast modulation. Attenuation at the 10-kHz adjacent frequencies shall be at least 20 dB. In addition, the receiver must be equipped with a noise-blanker circuit (impulse-noise suppressor) and have means for connecting an external antenna. To many listeners, AM and AM-stereo receivers designed to these specifications gave performance rivaling that of FM receivers of equivalent class in tests conducted at the 1992 NAB convention.



FIGURE 22.2.1 Block diagram typical of AM receivers.

Design Categories

AM receiver designs currently fall into three categories:

Portable Battery-Powered Receivers without External Power Supply. These units vary in size from small pocket radios operating on penlite cells to larger hand-carried units using D cells for power. The power output in the larger portable units is about 250 mW.

Console and Component-Type AM Receivers Powered by the Power Line. These units are usually part of an AM-FM receiver, with high audio power output capability. The power output ranges from several watts to more than 100 W. Most audio systems use push-pull class B operation, and most such systems are equipped with two amplifier channels for FM stereo operation.

Automotive Receivers Operated on the 12-V Battery-Generator System of the Automobile. The primary current used in transistorized receivers usually does not exceed 1 A. Because of operation in the high ambient noise of the vehicle, the power output is relatively high (2 to 20 W).

Sensitivity and Service Areas

The required sensitivity is governed by the expected operating field strengths. Typical field strengths for primary (ground-wave) and secondary (sky-wave) service are as follows:

Field strength, mV/m:
Primary service:
  Central urban areas (factory areas): 10–50
  Residential urban areas: 2–10
  Rural areas: 0.1–1.0
Secondary service: areas where sky-wave signals exceed 0.5 mV/m at least 50 percent of the time

Co-channel protection is provided for signals exceeding 0.5 mV/m. The receiver sensitivity and antenna system should be adjusted to provide usable outputs with signals of the order of 0.1 mV/m if the receiver is to be used over the maximum coverage area of the transmitter.


The required circuit sensitivity is controlled by the efficiency of the antenna system. A car-radio vertical antenna is adjustable to about 1 m in length. Since the shortest wavelengths are of the order of 200 m, the antenna can be treated as a nonresonant, short capacitive antenna. The open-circuit voltage of such a short monopole antenna is

Ea = 0.5 leff Ef   mV

where leff is the effective length of the antenna (m) and Ef is the field strength (mV/m). The radiation resistance of the short monopole is small compared with the circuit resistance of the receiver input circuit, but the antenna is not matched to the input impedance, since matching is not critical: adequate antenna voltage is available at the minimum field strength (0.1 mV/m) needed to override noise. The car-radio antenna is coupled to the receiver by shielded cable. This cable, with the receiver input capacitance, forms a capacitive divider, reducing the antenna voltage applied to the receiver. To ensure adequate operation, the receiver should offer a 20-dB signal-to-noise ratio when 10 to 15 µV is applied to the input terminals.

Portable and console receivers use much shorter built-in antennas, usually horizontally polarized coils wound on ferrite rods. The magnetic antenna can be shielded from electric-field interference. Although the effective length of a ferrite rod is shorter than that of a whip antenna, the higher Q of the ferrite rod and coil can provide approximately the same voltage to the receiver. The unloaded Q of a typical ferrite-rod antenna coil is of the order of 200; the voltage at the terminals of the antenna coil is QEa.

Selectivity

Channels are assigned in the broadcast band at intervals of 10 kHz, but adjacent channels are not assigned in the same service area. In superheterodyne receivers, selectivity is required not only against interference from adjacent channels but also to protect against image and direct i.f.-signal interference. The primary adjacent-channel selectivity is provided by the i.f. stages, whereas image and direct i.f. selectivity must be provided by the rf circuits. In receivers using a ferrite-rod antenna and no rf stage, the rf selectivity is provided by the antenna. High Q in the antenna coil thus not only provides adequate signal to override the mixer noise but also protects against image and i.f. interference. With a Q of 200, the image rejection at 1400 kHz is about 40 dB, while the direct i.f. rejection at 600 kHz is about 24 dB. With an rf stage added, the image rejection is about 50 dB and the i.f. rejection about 46 dB. Since car-radio receivers are subjected to an extreme dynamic range of signal levels, the selectivity must be slightly greater to accommodate strong-signal conditions. The image rejection at 1400 kHz is typically 58 dB in spite of the lower i.f. frequency; the i.f. rejection is typically 50 dB, and the adjacent-channel selectivity is about 20 dB. Figure 22.2.2 shows the overall response of a typical portable receiver using a ferrite rod without an rf amplifier.

FIGURE 22.2.2 Typical selectivity curve of an AM receiver using a ferrite-rod antenna without rf amplifier.
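As a quick numeric check of the antenna relations earlier in this subsection (Ea = 0.5 leff Ef for the whip and QEa for the ferrite-rod coil), the short Python sketch below, added here for illustration and not part of the handbook, compares the two at the 0.1-mV/m (100-µV/m) minimum field strength; the 5-mm ferrite-rod effective length is an assumed value:

```python
# Illustrative comparison of AM antenna voltages (added example, not handbook data).

def monopole_voltage_uV(l_eff_m, field_uV_per_m):
    # Open-circuit voltage of a short monopole: Ea = 0.5 * leff * Ef
    return 0.5 * l_eff_m * field_uV_per_m

def ferrite_rod_voltage_uV(l_eff_m, field_uV_per_m, q=200.0):
    # Voltage at the coil terminals is Q * Ea (unloaded Q of about 200 per the text)
    return q * monopole_voltage_uV(l_eff_m, field_uV_per_m)

# 1-m whip at the 100-uV/m (0.1-mV/m) minimum usable field strength
print(f"whip: {monopole_voltage_uV(1.0, 100.0):.0f} uV open-circuit")
# Hypothetical 5-mm effective length for the ferrite rod
print(f"ferrite rod: {ferrite_rod_voltage_uV(0.005, 100.0):.0f} uV at the coil terminals")
```

With these assumed values both antennas deliver about 50 µV, illustrating the statement that the high-Q ferrite rod can provide approximately the same voltage as the much longer whip.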

High-Signal Interference

When strong signals are present, the distortion in the rf and i.f. amplification stages can generate additional interfering signals. The transfer characteristic of an amplifier system can be expressed as a power series:

Eout = G1 Ein + G2 Ein^2 + G3 Ein^3 + · · · + Gn Ein^n

where Eout = output voltage
Ein = input voltage (same units)
G1, G2, . . . , Gn = voltage gains of successive amplifier stages


When the input signal consists of two or more modulated rf signals, Ein becomes

Ein = e1 cos ω1t + e2 cos ω2t

where e1 = E1[1 + Ea1(t)/E1] and e2 = E2[1 + Ea2(t)/E2], and where

E1 = signal 1 carrier
E2 = signal 2 carrier
Ea1(t) = audio modulation of the first signal
Ea2(t) = audio modulation of the second signal

I.F. Beat. When two strong signals are applied to the amplifier with carrier frequencies separated by a difference equal to the intermediate frequency, a difference-frequency signal appears that is independent of the local-oscillator frequency. Because of the wide frequency spacing, interference of this kind can take place only in the mixer or rf amplifier and only with strong signals (signal strengths of several volts per meter). These signals are derived from the G2 term:

Eif beat = G2 e1 e2 cos (ω1 − ω2)t

where e1, e2 and ω1, ω2 are the respective amplitudes and angular frequencies.

Cross Modulation. When a strong interfering signal is present, the modulation on the interfering signal can be transferred to the desired signal by third-order distortion components. This type of interference does not occur at a critical frequency of the interfering signal, provided that it is close enough to the desired signal frequency not to be attenuated by the selectivity of the receiver. This type of distortion can take place in the rf, mixer, or i.f. stages of the receiver. These signals are derived from the G3 terms:

Ecross mod = (G3 e2^2 e1/4) cos ω1t

The cross modulation is proportional to the square of the strength e2 of the interfering signal.

Intermodulation. Another type of interference due to third-order distortion is caused by two interfering carriers. When these signals are so spaced from the desired signal that the first is offset by ∆f and the second by 2∆f, third-order distortion can create a signal on the desired carrier frequency. These signals are derived from the G3 terms:

Eintermod = (G3 e1^2 e2/4) cos (2ω1 − ω2)t

The interference is proportional to the square of the amplitude of the closer carrier times the amplitude of the farther carrier. Intermodulation is sometimes masked by cross modulation; it occurs only when the e2 signal is stronger than the desired carrier after attenuation by the rf selectivity of the receiver.

Harmonic Distortion. Harmonic-distortion interference usually arises from harmonics generated by the detector following the i.f. amplifier, which are radiated back to the tuner input.
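The relative magnitudes of these products are easy to tabulate. The following sketch (an added numerical illustration, not from the handbook; the gain coefficients and signal levels are invented for the example) evaluates the three expressions above:

```python
# Spurious-product amplitudes from the expressions above (time factors omitted):
#   i.f. beat         G2*e1*e2        appears at (w1 - w2)
#   cross modulation  G3*e2^2*e1/4    appears at w1
#   intermodulation   G3*e1^2*e2/4    appears at (2*w1 - w2)

def if_beat(g2, e1, e2):
    return g2 * e1 * e2

def cross_mod(g3, e1, e2):
    return g3 * e2**2 * e1 / 4

def intermod(g3, e1, e2):
    return g3 * e1**2 * e2 / 4

# Invented example values: 1-mV desired carrier, 100-mV interferer
e1, e2 = 1e-3, 100e-3
g2, g3 = 0.05, 0.02   # assumed second- and third-order coefficients
print(f"i.f. beat:        {if_beat(g2, e1, e2):.2e} V")
print(f"cross modulation: {cross_mod(g3, e1, e2):.2e} V")  # grows as e2 squared
print(f"intermodulation:  {intermod(g3, e1, e2):.2e} V")
```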

Choice of Intermediate Frequency

Two intermediate frequencies are used in broadcast receivers: 455 and 262.5 kHz. The 455-kHz i.f. has an image at 1450 kHz when the receiver is tuned to 540 kHz, thus allowing good image rejection with simple selective circuits in the rf stage. At 540 kHz the selectivity must be sufficient to prevent i.f. feedthrough, since the receiver is particularly sensitive at the i.f. frequency because converter circuits typically have higher gain at i.f. than at the converted carrier frequency. The choice of the higher i.f. also makes i.f.-beat interference less likely: the second harmonic falls at 910 kHz and the third at 1365 kHz.

The 262.5-kHz i.f. has a lower limit of image frequency at 1065 kHz, which requires somewhat more rf selectivity than is needed when the higher i.f. is used. The second harmonic of 262.5 kHz falls below the broadcast band (at 525 kHz) and hence does not interfere; on the other hand, there are more higher-order responses in the passband (787.5, 1050, 1312.5, and 1575 kHz). Sensitivity to i.f. feedthrough when the receiver is tuned to 540 kHz is greatly reduced by the use of the lower i.f.
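The image and harmonic frequencies quoted in the two paragraphs above follow from simple arithmetic. The sketch below (an added illustration; it assumes a high-side local oscillator, which is what the quoted image frequencies imply) reproduces them:

```python
# Image and i.f.-harmonic frequencies for the two common broadcast i.f. choices.

def image_freq_khz(tuned_khz, if_khz):
    # With a high-side local oscillator the image lies 2 * i.f. above the tuned frequency
    return tuned_khz + 2 * if_khz

def if_harmonics_in_band(if_khz, lo=530.0, hi=1700.0):
    # Harmonics of the i.f. that fall inside the AM broadcast band
    n, out = 2, []
    while n * if_khz <= hi:
        if n * if_khz >= lo:
            out.append(n * if_khz)
        n += 1
    return out

for if_khz in (455.0, 262.5):
    print(f"i.f. = {if_khz} kHz: image when tuned to 540 kHz -> "
          f"{image_freq_khz(540.0, if_khz):.1f} kHz, "
          f"in-band harmonics: {if_harmonics_in_band(if_khz)}")
```

Running this gives the 1450-kHz image and the 910/1365-kHz harmonics for the 455-kHz i.f., and the 1065-kHz image and 787.5/1050/1312.5/1575-kHz responses for the 262.5-kHz i.f., matching the text.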

Circuit Implementation

Although hobbyists still build AM broadcast receivers using multiple stages of discrete components, as described in previous editions of this handbook, all present-day commercially built receivers are designed with ICs that require only a handful of additional components to complete their function. Figure 22.2.3 shows one such implementation with its associated peripheral components. This circuit is intended for shirt-pocket use and drives audio earphones.2,3 In other applications a simple mono audio-amplifier IC can be added for table-top or clock-radio use. The design follows the classic superheterodyne block diagram of mixer, oscillator, intermediate-frequency amplifier, detector, and audio amplifier. In designs intended for automotive radio, an rf amplifier stage precedes the mixer and a much higher-power amplifier is included. Station selection is provided by an rf L-C tank circuit and an oscillator L-C tank circuit.

Differential amplifiers, in single or multiple dc-coupled stages, are generally used for rf and i.f. amplification. This configuration gives excellent high-frequency stability, extremely low distortion (low generation of both cross-modulation and intermodulation spurious responses), and wide AGC range capability (typically 30-dB gain reduction for a single stage, 55 dB for a dual stage).

Double-balanced mixers are typically used to achieve a high degree of isolation between the rf and local-oscillator signals and between the local oscillator and the i.f. In some applications the rf stage can be dc-coupled to the mixer with no need for an interstage frequency-selection network. Compared with the classical single-transistor mixer, the spurious-response level of a double-balanced mixer is considerably lower, thereby reducing the stop-band filter requirement.

The local oscillator typically employs internal control circuitry to stabilize the amplitude of the waveform and maintain low distortion. This is particularly important in varactor-diode tuning applications, where a change in amplitude tends to shift the bias voltage of the oscillator tank circuit, causing mistracking with the other varactor-tuned circuits.

Selectivity is achieved more often by the block filter-block gain configuration than by the conventional distributed-filter, cascaded-stage design approach. Mechanically vibrating elements can be designed with Q's much higher than those of electric circuits. Piezoelectric vibrator plates made of quartz or (for low and medium frequencies) barium titanate form the equivalent of a high-Q coil-and-capacitor combination, which can be used singly or in combination to form i.f. filters with steep band-edge selectivity characteristics. The piezoelectric vibrator converts energy from electric to mechanical and back to electric form with relatively little loss. Barium titanate resonators have been used in some broadcast receiver designs. While they have the advantage of small size (at 455 kHz), their disadvantage is the numerous spurious responses caused by multiple modes of vibration; hence these resonators must be used in combination with coils and capacitors to suppress the spurious responses. Three configurations of block filter networks are shown in Fig. 22.2.4. The main frequency-selection element is the piezoelectric ceramic filter.4

Both single-ended and full-wave balanced envelope detectors are used, depending on the intended application. The full-wave detector produces a lower level of distortion in the recovered audio over a very wide range of input carrier levels, including lower absolute levels. An AGC amplifier provides a control voltage, proportional to the carrier amplitude, to the i.f. and rf stages. Second-order filtering is often incorporated to reduce audio distortion, even at low audio frequencies, and to give the fast settling time that is advantageous in electronic tuning systems operated in the "station search" mode. Some IC designs contain an audio preamplifier that provides additional filtering to remove residual carrier from the audio signal. Preamplifier output levels are typically 0.25 to 0.3 V for a 30 percent modulated medium-level carrier input to the i.f.
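As a minimal illustration of the envelope detection described above (an added NumPy sketch, not handbook material; the sample rate, tone frequency, and filter length are arbitrary choices), the following generates an AM signal at the 455-kHz i.f., rectifies it, and low-pass filters to recover the audio:

```python
import numpy as np

fs = 4_000_000                           # sample rate, fast enough for a 455-kHz i.f.
t = np.arange(0, 0.01, 1 / fs)
audio = 0.3 * np.sin(2 * np.pi * 1_000 * t)   # 1-kHz tone, 30 percent modulation
am = (1 + audio) * np.cos(2 * np.pi * 455e3 * t)

def lowpass(x, n=400):
    # ~100-us moving average: passes audio, strongly attenuates the rectified carrier
    return np.convolve(x, np.ones(n) / n, mode="same")

# Full-wave envelope detection: rectify, low-pass, then strip the dc component
envelope = lowpass(np.abs(am))
recovered = envelope - envelope.mean()   # proportional to the original audio tone
```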


FIGURE 22.2.3 Typical AM receiver integrated circuit. (From Philips Semiconductor)


FIGURE 22.2.4 Three configurations of i.f. block filter networks.

IC audio power amplifiers fall into four major categories; both monaural and two-channel (stereo) versions are available in each:

1. Fifty to 500 mW, 1.5- to 9-V supply, intended for personal applications with a 1.5- to 2-in. loudspeaker or earphone.
2. One-half to 2 W for table-model, power-line-connected applications.
3. Two to 20 W, 12- to 16-V supply, for car radios.
4. Ten to 100 W for high-fidelity systems. The higher-powered designs in this category are typically hybrids of ICs mounted on a ceramic thick-film substrate.

AM Stereo

In March 1982, after many months of testing and evaluation by the National AM Stereo Radio Committee, the FCC ruled that it could not select a single system for AM stereo and therefore made a "free marketplace" ruling. Of the five systems under consideration at that time, three were put into operation throughout the United States:

1. Compatible quadrature amplitude modulation (C-QUAM)
2. Amplitude and phase modulation (AM/PM)
3. Independent sideband modulation (ISB)

By 1993 the C-QUAM system was recognized as the overwhelming marketplace leader and was given full endorsement by the FCC.

QUAM can be thought of as the addition of two carriers of the same frequency in phase quadrature (separated by 90°), one modulated by the left audio signal (L), the other by the right audio signal (R). In practice this is achieved by amplitude-modulating a single carrier with the sum of L and R and deviating the carrier phase according to a function made up of the sum of quadrature sidebands of L + R and L − R. At the transmitter a correction factor is added so that the envelope truly represents L + R, thereby maintaining compatibility (low distortion in the recovered audio) with existing monophonic AM receivers (C-QUAM).5 Stereo identification is provided by a 25-Hz tone phase-modulating the carrier.


FIGURE 22.2.5 Block diagram of a C-QUAM AM-stereo receiver. (From Parker et al., Ref. 6)

The block diagram of a stereo receiver designed to receive C-QUAM is shown in Fig. 22.2.5. The PLL is locked to the carrier frequency and supplies the cw reference for demodulation. The complementary correction that transforms the C-QUAM signal to QUAM is performed by multiplying the carrier in the receiver by a function of the angle between the transmitted sidebands. Recovery of the L and R audio signals is then accomplished by conventional quadrature demodulation. The receiver front end must be designed to a higher quality level than that of a conventional monophonic AM receiver: since phase components of the signal are now important, local-oscillator noise and tuning-system flutter must be kept to a minimum.6 Incidental carrier phase modulation (ICPM) caused by tuned-circuit misalignment must also be minimized.
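The quadrature demodulation step is easy to see numerically. The sketch below (an added illustration, not handbook material) generates an idealized QUAM signal, i.e., the signal after the receiver's C-QUAM-to-QUAM correction, and recovers L and R; the 50-kHz stand-in carrier, the assumed ideal PLL-recovered reference, and the crude moving-average filter are all invented for the example:

```python
import numpy as np

fs, fc = 1.0e6, 50e3                      # sample rate and stand-in carrier (assumed)
t = np.arange(0, 0.01, 1 / fs)
L = 0.3 * np.sin(2 * np.pi * 400 * t)     # left audio, 400 Hz
R = 0.3 * np.sin(2 * np.pi * 1000 * t)    # right audio, 1 kHz

# QUAM: (1 + L + R) on the in-phase carrier, (L - R) on the quadrature carrier
s = (1 + L + R) * np.cos(2 * np.pi * fc * t) - (L - R) * np.sin(2 * np.pi * fc * t)

def lowpass(x, n=200):
    # Crude moving-average filter; its nulls fall on the 2*fc demodulation products
    return np.convolve(x, np.ones(n) / n, mode="same")

sum_ch = 2 * lowpass(s * np.cos(2 * np.pi * fc * t)) - 1    # in-phase detector -> L + R
diff_ch = 2 * lowpass(s * -np.sin(2 * np.pi * fc * t))      # quadrature detector -> L - R
left, right = (sum_ch + diff_ch) / 2, (sum_ch - diff_ch) / 2  # matrix back to L and R
```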

FM BROADCAST RECEIVERS: GENERAL CONSIDERATIONS

Broadcast FM receivers are designed to receive signals between 88 and 108 MHz (3.4 to 2.8 m wavelength). The broadcast carrier is frequency-modulated with audio signals up to 15 kHz, and the channel assignments are spaced 200 kHz apart. The FM band is primarily intended to provide a relatively noise-free radio service with wide-range audio capability for the transmission of high-quality music and speech.

The service range in the FM band is generally less than that obtainable in the AM band, especially when sky-wave signals are relied on to extend the AM coverage area; VHF signals are limited to usable service ranges of less than 70 mi. Since sky-wave propagation does not materially affect the transmission of FM signals, there is no equivalent night effect, and licenses are not limited to daylight hours, as with many AM operations.

In the past, all FM signals in the United States were horizontally polarized. However, the rules have been changed to allow maximum power to be radiated in both the horizontal and vertical planes. Unlike AM broadcasting, where station power is measured by the power supplied to the highest-power rf stage, FM transmitters are rated in effective radiated power (ERP), i.e., the power radiated in the direction of maximum power gain. This method of power measurement is used because the transmitting antenna has significant power gain, resulting in an ERP many times the input power supplied to the rf output stage of the transmitter.

Although FM receivers have been used principally in high-fidelity audio installations, small pocket-sized FM receivers with limited audio-output capability have more recently been designed, and an increasing number of FM receivers are being included in automobile installations.


In 1960 the FCC amended the broadcast rules to allow the transmission of stereophonic signals on FM stations. This transmission is equivalent to transmitting two audio signals, each having 15-kHz bandwidth, on the same carrier as used for monophonic FM signals. Since the FM signal was initially designed to have sufficient additional bandwidth to achieve an improved signal-to-noise ratio at the receiver, there is room to multiplex the second component of the stereophonic signal with no increase in the radiated bandwidth. The signal-to-noise ratio is reduced, however, when the multiplexing technique is employed.

Sensitivity

The field strength for satisfactory FM reception in urban and factory areas is about 1 mV/m; for rural areas, 50 µV/m is adequate. These signal levels are considerably lower than the equivalent levels for AM reception in the standard broadcast band. Three effects make satisfactory reception with these lower signal levels possible: (1) the effects of lightning and atmospheric interference (static) are negligible at 100 MHz, compared with the interference levels typical of the standard broadcast band; (2) the antenna system at 100 MHz can be matched to the radio-receiver input impedance, providing more efficient coupling between the signal power incident on the antenna and the receiver; and (3) the wide-band FM method of modulation reduces the effects of noise and interference on the audio output of the receiver.

The open-circuit voltage of a dipole of any length up to one-half wavelength is given by

Eoc = 5.59 Ef √Rr / Fs   mV

where Eoc = open-circuit antenna voltage (mV)
Ef = field strength at the antenna (mV/m)
Fs = received signal frequency (MHz)
Rr = antenna radiation resistance (Ω)

For a half-wave dipole, Rr = 72 Ω; for a folded dipole, Rr = 300 Ω. For antennas substantially shorter than one-half wavelength, Rr = 8.75 l^2 Fs^2 × 10^−3, where l is the total length of the dipole in meters. For a folded dipole one-half wavelength long operating at 100 MHz, the open-circuit voltage is Eoc = 0.97Ef. The voltage delivered to a matched transmission line and receiver input is one-half of this value, 0.48Ef.

The noise in the receiver output is caused by the antenna noise plus the equivalent thermal noise at the receiver input. The input impedance generates an excess of noise compared with the noise generated by an equivalent resistor at room temperature. The noise generated is given by

Enr = En √(2NF − 1)   V

where Enr = equivalent noise generated in the receiver input
En = equivalent thermal noise (V) = √(4 Rin k T ∆f)
Rin = receiver input resistance
k = Boltzmann's constant = 1.38 × 10^−23 J/K
T = absolute temperature = 290 K
∆f = half-power bandwidth of the receiver response, taken at the discriminator
NF = receiver noise figure (if the noise figure is given in decibels, NFdB = 10 log NF)

The generator noise and receiver noise add as the square root of the sum of the squares; Fig. 22.2.6 shows that the equivalent receiver input noise is 0.707 En √NF. For a receiver with 300-Ω input resistance and 200-kHz noise bandwidth, En = 0.984 µV. A typical noise factor for a well-designed receiver is 3 dB, a factor of 2 in power (√2 in voltage), giving an equivalent noise input of 1.39 µV.

In an AM receiver the signal-to-noise (S/N) ratio at the receiver input is a direct measure of the S/N ratio to be expected in the audio output. In an FM receiver using frequency deviation greater than the audio frequencies transmitted, the S/N ratio in the output may greatly exceed that at the rf input. Figure 22.2.7 shows typical output S/N ratios obtained with receiver bandwidths adjusted to accommodate transmitted signals with modulation indexes of 1.6 and 5.0, compared with the audio S/N ratio when AM is used and the bandwidth of the receiver is adjusted to accommodate the AM sidebands only.
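The numerical example above can be checked directly from the formulas. The short sketch below (an added illustration, not from the handbook) reproduces the ≈0.98-µV thermal noise of a 300-Ω, 200-kHz input (the text quotes 0.984 µV), the ≈1.39-µV equivalent noise input for a 3-dB noise figure, and the Eoc = 0.97Ef folded-dipole result:

```python
import math

k = 1.38e-23   # Boltzmann's constant, J/K
T = 290.0      # absolute temperature, K

def thermal_noise_V(r_ohm, bw_hz):
    # En = sqrt(4 * Rin * k * T * delta_f)
    return math.sqrt(4 * r_ohm * k * T * bw_hz)

def dipole_open_circuit(field, f_mhz, r_rad_ohm):
    # Eoc = 5.59 * Ef * sqrt(Rr) / Fs  (same units as the field strength)
    return 5.59 * field * math.sqrt(r_rad_ohm) / f_mhz

en = thermal_noise_V(300.0, 200e3)     # ~0.98 uV
nf = 10 ** (3.0 / 10.0)                # 3-dB noise figure as a power ratio
noise_in = en * math.sqrt(nf)          # ~1.39 uV, as the text computes
print(f"En = {en * 1e6:.3f} uV, equivalent noise input = {noise_in * 1e6:.2f} uV")

# Folded dipole (Rr = 300 ohm) at 100 MHz: Eoc/Ef = 0.97
print(f"Eoc/Ef = {dipole_open_circuit(1.0, 100.0, 300.0):.2f}")
```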


FIGURE 22.2.6 Equivalent sources of receiver noise; (a) with resistive generator; (b) with galactic noise source included.

As shown in Fig. 22.2.7, the S/N ratio for a properly designed receiver operating with a modulation index of 5 is 18.8 dB higher than that of an AM receiver with the same rf S/N ratio. For rf S/N ratios in FM higher than 12 dB, the audio S/N ratio increases 1 dB for each 1-dB increase in rf S/N ratio. For FM S/N ratios lower than 12 dB, the S/N ratio in the audio drops rapidly and falls below the AM S/N ratio at about 9 dB. The point at which the ratio begins to fall rapidly is called the threshold signal level; it occurs where the carrier level at the discriminator is equal to the noise level. The threshold level increases directly as the square root of the receiver bandwidth, i.e., approximately as the square root of the modulation index. The equation for the S/N improvement using FM is

(S/N)FM / (S/N)AM = √3 ∆

where ∆ is the deviation ratio. Since broadcast standards in the United States call for a modulation index of 5 at the highest audio frequency, the direct S/N improvement factor is 18.8 dB for rf S/N ratios exceeding 12 dB.

FIGURE 22.2.7 Typical signal-to-noise ratios for FM and AM for deviation ratios of 1.6 and 5.

In the FM system a second source of noise improvement is provided by pre-emphasis of the high frequencies at the transmitter and corresponding de-emphasis at the receiver. The pre-emphasis network raises the audio level at a rate of 6 dB/octave above a critical frequency, and a complementary circuit at the receiver decreases the audio output at 6 dB/octave, producing a flat overall audio response. Figure 22.2.8 shows simple RC networks for pre-emphasis and de-emphasis. The additional S/N improvement from de-emphasis in an FM receiver is

(S/N)out / (S/N)in = fa^3 / {3[fa f0^2 − f0^3 tan^−1 (fa/f0)]}

where (S/N)out = signal-to-noise ratio at the de-emphasis output
(S/N)in = signal-to-noise ratio at the de-emphasis input
fa = maximum audio frequency
f0 = frequency at which the de-emphasis network response is down 3 dB

For fa = 15 kHz and a 75-µs time constant (f0 = 2.125 kHz), the S/N improvement is 13.2 dB. The total S/N improvement over AM is thus 32 dB when the carrier is high enough to override the noise by 12 dB for the 75-kHz deviation used in U.S. broadcast stations; the minimum coherent S/N ratio is therefore 44 dB.
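Both improvement factors quoted above follow directly from the two formulas; the sketch below (an added illustration, not from the handbook) evaluates them for the U.S. broadcast parameters:

```python
import math

def fm_improvement_db(deviation_ratio):
    # (S/N)FM / (S/N)AM = sqrt(3) * delta, a voltage ratio, hence 20*log10
    return 20 * math.log10(math.sqrt(3) * deviation_ratio)

def deemphasis_improvement_db(fa_hz, f0_hz):
    # (S/N)out/(S/N)in = fa^3 / (3*[fa*f0^2 - f0^3*atan(fa/f0)]), a power ratio
    num = fa_hz ** 3
    den = 3 * (fa_hz * f0_hz ** 2 - f0_hz ** 3 * math.atan(fa_hz / f0_hz))
    return 10 * math.log10(num / den)

print(f"FM over AM (delta = 5):  {fm_improvement_db(5):.1f} dB")                       # ~18.8 dB
print(f"75-us de-emphasis gain:  {deemphasis_improvement_db(15e3, 2.125e3):.1f} dB")   # ~13.2 dB
```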


FIGURE 22.2.8 Pre-emphasis and de-emphasis circuits and characteristics.

When a dipole receiving antenna is used, an additional noise component is produced by galactic noise, because the dipole pattern does not discriminate against sky signals. The ratio of signal to galactic noise can be improved by using an antenna array with gain in the horizontal direction, i.e., one that discriminates against sky-wave signals. The additional noise source is shown in Fig. 22.2.6. Using the calculated value of En and assuming a 3-dB noise figure in the receiver gives an equivalent noise input to the receiver of 1.39 µV. The field strength required to produce a 12-dB S/N ratio at the receiver input, and hence a 44-dB S/N ratio at the audio output, when using a half-wave dipole is 11.5 µV/m.

Selectivity

When the FM system uses a high modulation index, it is not only capable of improving the S/N ratio but will also reject an interfering co-channel signal. FM modulation involves very wide phase excursions, and since the phase excursion that an interfering signal can impart to the carrier is less than 1 rad, the effect of the interference is markedly reduced. This co-channel interference suppression requires that the interfering signal be smaller than the desired signal, since the larger signal acts as the desired carrier, suppressing the modulation of the smaller signal. The phenomenon is called the capture effect, since the larger signal takes over the audio output of the receiver. The capture effect produces well-defined service areas, since signal-level differences of less than 20 dB provide adequate signal separation. Although it is useful in suppressing undesired signals weaker than the desired signal, the capture effect can also produce an annoying tendency for the receiver to jump between co-channel signals when fading (caused, for example, by airplanes) makes the desired signal drop below the interfering signal by only a few decibels. This effect also occurs in FM radios used in automobiles when motion of the antenna causes the relative signal levels to change.

Tuners

The FM tuner is matched to the antenna input. An rf amplifier is used to override the mixer noise, and the mixer provides a 10.7-MHz i.f. signal. Most FM tuners contain a single stage of rf amplification; the mixer may be self-oscillating or employ a separate oscillator. The rf stage must have a low noise figure to reach the minimum threshold signal level, but its most important requirement is to provide the mixer and i.f. amplifier with signals free of distortion. When the rf amplifier is overloaded, the signal supplied to the i.f. amplifier may be distorted or suppressed. For single interfering signals there are three significant sources of difficulty: (1) image signals may


capture the receiver, suppressing the desired signal; (2) strong signals at one-half the i.f. frequency (5.35 MHz) above the desired signal may capture the receiver; or (3) a strong signal outside the range of the i.f. beat but strong enough to cause limiting in the rf stage may drastically reduce the output of the rf amplifier at the desired carrier frequency. When two strong signals are present, three conditions may produce unsatisfactory operation: (1) cross modulation of two adjacent upper- or lower-channel signals may produce an on-channel carrier; (2) two strong signals spaced 10.7 MHz apart in the rf passband may produce an i.f. beat; or (3) submultiple i.f. beats may be produced by strong signals at i.f.-submultiple spacings in the rf band. To minimize the effects of distortion and provide a low noise figure, many FM tuners employ an FET rf stage.

I.F. Amplifiers

To provide sufficient image separation, a higher intermediate frequency (10.7 MHz) is used in FM than in standard broadcast AM. The i.f. amplifier must provide sufficient gain for the noise generated by the rf amplifier to saturate the limiting stages fully if the benefits of wide-band FM are to be obtained at low signal levels. The high gain should be achieved with a low noise figure in the first i.f. stage, so that the noise introduced by the i.f. amplifier is small compared with the noise from the rf amplifier. One of the most important characteristics of the i.f. amplifier is phase linearity, since envelope-delay distortion in the passband is a principal cause of distortion in FM receivers. Care must also be taken to avoid regeneration, since this would cause phase distortion and hence audio distortion in the detected signal. Although AGC is theoretically unnecessary, it is sometimes applied to the rf stage to avoid overload; such overload, occurring before sufficient selectivity is present, could produce cross modulation, causing capture by an out-of-band signal. In the classical cascade amplifier design, the requirements of high gain and good phase linearity are generally met by using amplifiers with double-tuned circuits adjusted to operate at critical coupling.

Limiters

The design of the limiter is critical in determining the operating characteristics of an FM receiver. The limiter should provide complete limiting, with constant-amplitude signal output, at the lowest signal levels that override the noise. In addition, the limiting should be symmetrical, so that the carrier is never lost at low signal levels; this is essential if the receiver is to capture the stronger signal when there is little difference in strength between the weaker and stronger signals. Finally, the bandwidth of the output must be wide enough to pass all the significant sideband terms associated with the carrier, to prevent spurious amplitude modulation arising from insufficient bandwidth for the constant-amplitude FM signal. A differential amplifier with dc coupling can be made to provide highly symmetrical limiting.

FM Detectors There are five well-known types of FM detectors: (1) the balanced-diode discriminator (Foster-Seeley circuit); (2) the ratio detector using balanced diodes; (3) the slope-detector-pulse-counter circuit using a single diode with an RC network to convert FM to AM; (4) the locked-oscillator or PLL circuit, which uses a variation in current as the frequency is varied to convert the output of an oscillator (locked to the carrier frequency) to a voltage varying with the modulation; and (5) the quadrature detector circuit that produces the electrical product of the two voltages, the first derived from the limiter, the second from a tuned circuit that converts frequency variations into phase variations. The output of the product device is a voltage that varies directly with modulation.

Circuit Implementation Integrated circuits designed for use in audio broadcast receivers in many parts of the world contain functional combinations of AM/FM and AM/FM stereo circuitry.


FIGURE 22.2.9 Block diagram of AM/FM/TV sound receiver having a digital tuning system. (From Ref. 9)



Differing classes of these ICs find use in battery-operated shirt-pocket radios, portable boom boxes, line-connected table radios, component stereo audio systems, and automobile receivers. In recent years, designs have also been developed to provide broadcast audio reception in personal computers.7 Most ICs contain all the functions from antenna input through low-level audio for both AM and FM reception.8–10 A block diagram of one such unit is shown in Fig. 22.2.9.

FM Stereo and SCA Systems

Since FM broadcasting uses a bandwidth of 200 kHz and a modulation index of 5 for the highest audio frequency transmitted, it is possible, by using lower modulation indexes, to transmit information in the frequency range above the audio. This method of using a supersonic subcarrier to carry additional information is used in FM broadcasting for a number of purposes, most notably FM stereo broadcasting.

In broadcasting stereo, the main-channel signal must be compatible with monophonic broadcasting. This is accomplished by placing the sum of the left- and right-hand signals (L + R) on the main channel and their difference (L − R) on the subcarrier. The subcarrier is a suppressed-carrier AM signal carrying the L − R information. The suppressed-carrier method causes the subcarrier to disappear when L and R vary simultaneously in identical fashion; this occurs when the L − R signal goes to zero, and it allows the peak deviation in the monophonic channel to be unaffected by the presence of the stereo subcarrier.

In the receiver it is necessary to restore the subcarrier. In the U.S. standards, the technique used provides a pilot signal at 19 kHz, one-half the suppressed-carrier frequency. The subcarrier is restored by doubling the pilot-signal frequency and using the resulting 38-kHz signal to demodulate the suppressed-carrier (L − R) signal. The suppressed carrier has a peak deviation ratio of less than 2, and the subcarrier is amplitude-modulated; the composite stereo signal thus has an S/N ratio 23 dB below that of monophonic FM broadcasting. The main (L + R) channel is not affected by the stereo subcarrier (L − R) signal.

The stereo signal can be decoded in two ways. In the first, the subcarrier is separately demodulated to obtain the L − R signal; the L + R and L − R signals are then combined in a matrix circuit to obtain the individual L and R signals. In the second, more widely used method, the composite signal is gated to obtain the individual L and R signals directly. The circuit uses a doubler to obtain the 38-kHz reference signal, which is added to the composite signal, and the result is decoded in a pair of full-wave peak rectifiers to obtain the L and R signals directly. This type of demodulation is also performed in present-day ICs by a synchronous demodulator circuit in which the 38-kHz reference is derived from a 76-kHz oscillator synchronized to the incoming 19-kHz pilot via a PLL and the necessary dividers. This configuration yields a simple circuit consisting of an IC and relatively few simple resistor and capacitor networks, with no coils.11

In the SCA system an additional subcarrier is placed well above the stereo sidebands, at 67 kHz. This subcarrier is used for auxiliary communication services (SCA, subsidiary communications authorization).
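The multiplexing scheme just described is easy to demonstrate numerically. The following sketch (added for illustration, not from the handbook; the tone frequencies, amplitudes, and crude moving-average filter are arbitrary choices) builds a composite stereo baseband with a 19-kHz pilot and a 38-kHz suppressed-carrier (L − R) subchannel, then recovers L and R by synchronous demodulation and matrixing:

```python
import numpy as np

fs = 190_000                                 # sample rate chosen so filter nulls land on 19-kHz multiples
t = np.arange(0, 0.05, 1 / fs)
L = 0.5 * np.sin(2 * np.pi * 440 * t)        # left-channel test tone
R = 0.5 * np.sin(2 * np.pi * 880 * t)        # right-channel test tone

pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)      # 19-kHz pilot, one-half the subcarrier frequency
sub = (L - R) * np.sin(2 * np.pi * 38_000 * t)    # suppressed-carrier (L - R) subchannel
mpx = (L + R) + sub + pilot                       # composite baseband that frequency-modulates the carrier

# Receiver: regenerate the 38-kHz subcarrier (ideally from the pilot via a doubler or PLL)
carrier38 = np.sin(2 * np.pi * 38_000 * t)

def lowpass(x, n=10):
    # Crude moving-average filter; with fs = 190 kHz its nulls fall at 19, 38, 57, 76 kHz
    return np.convolve(x, np.ones(n) / n, mode="same")

sum_ch = lowpass(mpx)                     # L + R (pilot and subcarrier fall on filter nulls)
diff_ch = 2 * lowpass(mpx * carrier38)    # L - R by product (synchronous) detection
left, right = (sum_ch + diff_ch) / 2, (sum_ch - diff_ch) / 2   # matrix to L and R
```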

DIGITAL RADIO RECEIVERS

Since about 1985 the European Broadcast Commission has been evaluating proposals for a digital audio service to include both terrestrial and satellite delivery. In 1987 the Eureka 147 Consortium was established to develop a novel system based on the coded orthogonal frequency-division multiplex (COFDM) approach and the advanced audio compression scheme now known as MUSICAM. In late 1994, following extensive testing, this proposal became the accepted digital method for Europe. In 1995 it was accepted as the Canadian standard and was well on its way to being a global standard.

The advantage of COFDM modulation is that it allows numerous low-power broadcast transmitters spaced over a wide area to each broadcast the identical signal on the same frequency. As an automobile travels around the region, reception continues to have CD quality with no interference or receiver retuning required. A more detailed description of the Eureka 147 system is contained in Ref. 12. A typical DAB (Eureka 147) receiver block diagram is shown in Fig. 22.2.10. Several IC manufacturers make chip sets for this type of receiver. Typically two large-scale ICs are used in the design, one for channel decoding and the other for audio source decoding.
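The single-frequency-network property of COFDM follows from its guard interval: an echo shorter than the guard interval, including the signal of another transmitter on the same frequency, does not destroy subcarrier orthogonality. The toy example below shows the IFFT/cyclic-prefix structure; the parameters are illustrative, not the actual Eureka 147 mode parameters.

```python
import numpy as np

n_carriers = 1024                              # assumed number of subcarriers
guard = 256                                    # assumed cyclic-prefix length, samples

# QPSK data on each subcarrier.
bits = np.random.randint(0, 2, 2 * n_carriers)
qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

symbol = np.fft.ifft(qpsk)                     # orthogonal subcarriers -> time domain
tx = np.concatenate([symbol[-guard:], symbol]) # prepend cyclic prefix (guard interval)

# Receiver: discard the prefix and FFT back. Any echo delayed by less than
# `guard` samples would only rotate the subcarrier phases, not smear them.
rx = np.fft.fft(tx[guard:])
assert np.allclose(rx, qpsk)
```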

FIGURE 22.2.10 DAB receiver block diagram. (From Philips Semiconductor)

FIGURE 22.2.11 The digital signal is included in the spectrum of the analog channel. The digital radio signals fall within the frequencies allocated to stations, the so-called FCC FM or AM masks, indicated by the cross-hatched area. For FM [top], IBOC digital sidebands are placed on either side of the analog host signal. For AM [bottom], the setup is similar except that, in addition, some digital carriers are also co-located in the region of the AM host itself. (From Ref. 13)

U.S. IBOC Digital Broadcast Systems for AM and FM

In the United States the Eureka 147 system has not been accepted because of the wide bandwidth (1.5 MHz) needed for each transmission system and the large number of independent broadcasters, each of whom would need this bandwidth for each station. No unused spectrum is available in the United States for additional services.

A current proposal, which occupies the same channel space as present AM and FM stations, is being field tested by the Digital Audio Broadcast Subcommittee of the National Radio Systems Committee (NRSC). The system and its staff belong to iBiquity Digital Corporation, which was formed as a merger of several other proposals and proponents over the period from the early 1990s through 2001.13 The system provides a digital signal on the upper and lower edges of the main analog (AM or FM) channel, from which derives the name in-band on-channel (IBOC) (Fig. 22.2.11). The digital signal carries the same program material (simulcast) as the analog transmission. A time delay exists between the analog and the digital signals in order to provide diversity; in the receiver, this leads to more robust reception. It is anticipated that, as the transition from analog to digital occurs, the analog signal will disappear and the digital signal will occupy the entire channel.

This technology will enhance the audio fidelity of both the AM and FM bands. AM will sound like FM does today, and FM will have compact-disc-like audio. Multipath, noise, and interfering signals that cause the static, hiss, pops, and fades heard on today's analog radios will be virtually eliminated, ensuring crystal-clear reception. Additionally, the technology will allow new wireless data services to be received from AM and FM stations, such as station information, artist and song identification, traffic, weather, news, and sports scores.14

A receiver design for the IBOC service has a block diagram similar to that of present AM or FM receivers, with the addition of two blocks: (1) an OFDM decoder IC plus time-delay memory and control to handle the diversity provided by the two carriers and (2) a decoder IC to restore the perceptual audio coder (PAC) compression applied at the transmitter. Several major broadcast manufacturers and radio receiver manufacturers have already signed license agreements with the technology owner. At least 20 broadcast stations situated in major market areas of the United States were involved in the field-testing phase. The service was launched in 2003 and receivers are now available.
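The time-diversity blend can be sketched as follows. This is an assumed behavior for illustration only, not iBiquity's actual algorithm: the receiver buffers one feed so the two audio paths are time-aligned, then falls back to the analog path whenever the digital decoder flags errors. The delay value is hypothetical.

```python
import numpy as np

DELAY = 4410   # assumed analog/digital diversity delay, in audio samples

def blend(analog, digital, digital_valid):
    """Pick digital audio where the decoder marks it valid; otherwise bridge
    the dropout with the time-aligned analog audio."""
    analog_aligned = analog[DELAY:]           # undo the transmit-side offset
    n = min(len(analog_aligned), len(digital), len(digital_valid))
    return np.where(digital_valid[:n], digital[:n], analog_aligned[:n])
```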

Radio Broadcast Data Systems

Unlike the two systems described earlier (DAB and IBOC), which are totally digital, this section deals with existing analog FM systems that have digital subcarriers superimposed on, or added into, the existing channel spectrum. These subcarriers transmit to the receiver information such as station call letters, road conditions, program or music name, and other data believed to be useful in a moving automobile. In Europe the system is called the radio data system (RDS) and has been in operation for several years; in the United States it is known as the radio broadcast data system (RBDS). In Japan a similar system called the data radio channel (DARC) is being implemented. Currently, numerous FM radio stations in the United States broadcast RBDS signals, and receivers that make some use of RBDS number in the hundreds of thousands.15

In the U.S. system, the subcarrier is superimposed at 57 kHz (the third harmonic of the 19-kHz stereo pilot tone) with an injection level of 2 to 3 percent. ICs to decode this signal from the main FM stereo multiplex have been developed by several companies and have been used in receiver designs for several years for both RDS and RBDS service. The Japanese DARC system employs a 76-kHz level-controlled minimum-shift-keying (L-MSK) subcarrier, whose injection level is controlled by the level of the L − R signal to reduce interference in the L − R channel and therefore reduce noise and interference in the recovered stereo signals. Most of the circuitry added to an FM receiver to handle this service is contained in one LSI IC.16

Satellite Radio

Numerous satellite digital audio radio service (SDARS) proposals were on the table, both internationally and in the United States, during the 1990s. At present, only four appear to be viable. The WorldSpace system covers Africa (AfriStar, launched in October 1999) and Asia (AsiaStar, launched in March 2000); a third satellite will cover Central and South America.17 Development of the Japanese Communication Satellite System operated by Mobile Broadcasting Corporation dates from the early 1990s.18,19 The two U.S. services are the XM system (launched in 2001) and the Sirius system (launched in 2002). Both are aimed at the auto radio application and are subscription services (XM at $9.95 per month and Sirius at $12.95 per month).

Receivers for the WorldSpace system are available from several major radio receiver manufacturers and are on sale in the intended countries. Over 50 channels of low-noise, interference-free audio and multimedia programming are available in the 1467–1492-MHz (L-band) spectrum. The system is primarily intended for direct-to-home broadcast. RF modulation is digital QPSK combined with convolutional and Reed-Solomon forward error correction. Ninety-six channels of 16 kbit/s each are combined using time-division multiplexing (TDM). High-quality audio is ensured by encoding the audio using the MPEG 2.5 Layer 3 compression scheme. The basic diagram of a typical receiver using two ICs is shown in Fig. 22.2.12.20 Several other semiconductor manufacturers have either two-IC or three-IC solutions for WorldSpace radios. Functional block diagrams for all are very similar.

The Japanese system receivers are being developed to handle both satellite and terrestrial signals with automatic switching and antenna diversity. Reception of the signals in the 2.6-GHz band will be by an antenna assembly consisting of two microstrip antennas: a high-gain unit with a low-noise amplifier for satellite reception and a low-gain unit for terrestrial pickup. Multimedia signals containing up to 50 programs will be available from the satellite. These include MPEG-4 video, CD-quality audio, and digital information services to mobile receivers.18

The two U.S. SDARS differ from each other in several ways. XM employs two geostationary satellites, while Sirius has three satellites covering the western hemisphere in highly elliptical orbits. A technical comparison of the two U.S. satellite radio services is shown in Table 22.2.1.13 Both systems employ three degrees of diversity in order to guarantee uninterrupted service to the receiver:

1. Spatial. Use of two widely separated satellites and numerous terrestrial repeaters to fill in the holes created by building shadows, tunnels, and other obstructions to the satellite signal in metropolitan areas.

2. Frequency. The two satellites transmit the same signal but in different frequency bands.

3. Time. A time delay of several seconds exists between the transmissions of the same signal from each satellite.

At the receiver, these diversity paths are resolved by letting whichever signal is strongest and cleanest at the moment take command of the circuitry. The receiving antenna is a simple omnidirectional design mounted on the roof or side window of the car. The XM receiver uses two custom integrated circuits fabricated using CMOS and a proprietary high-speed bipolar technology.21 A block diagram of the receiver is shown in Fig. 22.2.13.

FIGURE 22.2.12 Basic WorldSpace receiver. (From Ref. 20)
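The 96-channel TDM multiplex mentioned above can be made concrete with a toy frame builder. The frame layout and the number of bits taken from each channel per frame are assumptions for illustration, not the WorldSpace format.

```python
import numpy as np

N_CH = 96                    # channels in the multiplex (from the text)
BITS_PER_SLOT = 16           # assumed bits taken from each channel per frame

def build_frame(channel_bits):
    """Interleave one fixed-size slot per channel into a single TDM frame."""
    assert channel_bits.shape == (N_CH, BITS_PER_SLOT)
    return channel_bits.reshape(-1)          # slot 0, slot 1, ..., slot 95

def split_frame(frame):
    """Receiver side: slice the frame back into per-channel slots."""
    return frame.reshape(N_CH, BITS_PER_SLOT)

data = np.random.randint(0, 2, (N_CH, BITS_PER_SLOT))
assert np.array_equal(split_frame(build_frame(data)), data)
print(N_CH * 16, "kbit/s aggregate payload")  # 96 channels x 16 kbit/s = 1536 kbit/s
```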

TABLE 22.2.1 The U.S. Satellite Digital Audio Radio Services (SDARS)

Parameter | Sirius | XM | Comments

IN ORBIT
No. of satellites (longitude) | 3 (nominal 100° W) | 2 (85° and 115° W) | Sirius satellites are in a highly elliptical orbit, rising and setting every 16 h; XM types are geostationary
Uplink frequencies | 7060–7072.5 MHz | 7050–7075 MHz | Sirius also uses a Ku-band (14/12-GHz) uplink to feed repeaters
Downlink frequencies | 2320.0–2324.0 and 2328.5–2332.5 MHz | 2332.5–2336.5 and 2341.0–2345.0 MHz | Redundant downlink signals are for spatial/frequency/time diversity
Satellite elevation angle | 60° | 45° | Typical

ON LAND
Location of studios | New York City | Washington, D.C. | In main facility
No. of studios | 75 | 82 | Approximate numbers
No. of terrestrial repeaters | 105 (46 markets) | 1500 (70 markets) |
Repeater EIRP | Up to 40 kW | 90% are 2 kW |

OTHER CHARACTERISTICS
No. of CD-quality (64-kb/s) channels | 50 | 50 | Lower-quality services use 0.5–64 kb/s
No. of news-talk-sports channels | 50 | 50 | System is reconfigurable on the fly
Satellite modulation | TDM-QPSK | TDM-QPSK | Each carrier is about 4 MHz wide
Terrestrial repeater modulation | TDM-COFDM | TDM-COFDM | Carrier ensemble is about 4 MHz wide
Channel coding scheme | Concatenated | Concatenated | Error-correcting Reed-Solomon outer code and rate-1/2 convolutional inner code
Source coding scheme | Lucent PAC | Lucent PAC | Nominal rate for top-quality music: 64 kb/s
Transmission rate | 4.4 Mb/s | 4.0 Mb/s | Before channel coding

COFDM = coded orthogonal frequency-division multiplexing; EIRP = equivalent isotropic radiated power; PAC = perceptual audio coder; QPSK = quadrature phase shift keying; TDM = time-division multiplexing.
Source: Sirius Satellite Radio, XM Satellite Radio.

FIGURE 22.2.13 Block diagram of XM satellite receiver.

REFERENCES

1. "Audio Bandwidth and Distortion Recommendations for AM Broadcast Receivers," National Radio Systems Committee, June 1990. Also published as EIA/IS-80, Electronic Industries Association, March 1991.
2. "TEA555IT Product Specification," Philips Semiconductors, October 1990.
3. "Portable and Home Hi-Fi/Radio Designers Guide," Philips Semiconductors, June 1996.
4. "Ceramic Filter Applications Manual," Murata Manufacturing Co., 1982.
5. Parker, N., F. Hilbert, and Y. Sakaie, "A Compatible Quadrature System for AM Stereo," IEEE Trans. Consum. Electron., November 1977, Vol. CE-23, No. 4, pp. 454–460.
6. Ecklund, L., and O. Richards, "A New Tuning System for Use in AM Stereo Receivers," IEEE Trans. Consum. Electron., August 1986, Vol. CE-32, No. 3, pp. 497–500.
7. Brekelmans, H., et al., "A Novel Multistandard TV/FM Front-end for Multimedia Applications," IEEE Trans. Consum. Electron., May 1998, Vol. 44, No. 2, pp. 280–288.
8. Sato, A., et al., "Development of an Adjustment-free Audio Tuner IC," IEEE Trans. Consum. Electron., August 1996, Vol. 42, No. 3, pp. 328–334.
9. Okanobu, T., et al., "An AM/TV/FM Radio IC Including Filters for DTS," IEEE Trans. Consum. Electron., August 1997, Vol. 43, No. 3, pp. 655–661.
10. Yamazaki, D., et al., "A Complete Single Chip AM Stereo/FM Stereo Radio IC," IEEE Trans. Consum. Electron., August 1994, Vol. 40, No. 3, pp. 563–569.
11. Nolde, W., et al., "An AM-Stereo and FM-Stereo Receiver Concept for Car Radio and Hi-Fi," IEEE Trans. Consum. Electron., May 1981, Vol. CE-27, No. 2, pp. 135–143.
12. Kozamernik, F., "Digital Audio Broadcasting," Chapter 14 in R. Jurgen (ed.), Digital Consumer Electronics Handbook, McGraw-Hill, 1997.
13. Layer, D., "Digital Radio Takes to the Road," IEEE Spectrum, July 2001, Vol. 38, No. 7, pp. 40–46.
14. "What is iBiquity Digital," iBiquity Digital Corp., 2000, available at: www.ibiquity.com
15. Clegg, A., "The Radio Broadcast Data System," Chapter 15 in R. Jurgen (ed.), Digital Consumer Electronics Handbook, McGraw-Hill, 1997.
16. Suke, M., et al., "Development of DARC Decoding LSI for High-Speed FM Subcarrier System," IEEE Trans. Consum. Electron., August 1994, Vol. 40, No. 3, pp. 570–579.
17. Bonsor, K., "How Satellite Radio Works," Marshall Brain's HowStuffWorks, 2000, available at: www.howstuffworks.com/satellite-radio
18. Press Release, TDK Semiconductor Corp. and Kenwood Corp., Jan. 21, 1999, available at: www.tdk.co.jp/teaah01/aah38000.htm
19. "Loral to build digital satellite for Mobile Broadcasting Corp.," Loral Space & Communications News Release, August 20, 2001, available at: www.spaceflightnow.com/news/n0108/20mbsat
20. Bock, C., et al., "Receiver IC-Set for Digital Satellite System," IEEE Trans. Consum. Electron., November 1997, Vol. 43, No. 4, pp. 1305–1311.
21. Press Release, "XM Announces Delivery of Production Design Chipsets to Radio Manufacturers," ST Digital Radio Solutions, STMicroelectronics, March 30, 2001, available at: http://us.st.com/stonline/prodpres/dedicate/digradio/digradio.htm

CHAPTER 22.3

CABLE TELEVISION SYSTEMS

Walter S. Ciciora

INTRODUCTION

Cable television service is enjoyed by almost 70 million U.S. households. This is a market penetration of over 68 percent. Cable systems have expanded capacity and added digital signal capability. High-speed cable modems and telephony are important growth areas. Video on demand (VOD) is offered to ever more subscribers. Interactive television is beginning to gain acceptance. It is expected that these growth trends will continue.

HISTORICAL PERSPECTIVE

The original purpose of cable television was to deliver broadcast signals in areas where they could not be received in an acceptable manner with an antenna. These systems were called Community Antenna Television, or CATV. In 1948, Ed Parsons of Astoria, Oregon, built the first CATV system, consisting of twin-lead transmission wire strung from housetop to housetop. In 1950, Bob Tarlton built a system in Lansford, Pennsylvania, using coaxial cable on utility poles under a franchise from the city.

In the mid-1970s, an embryonic technology breathed new life into cable television: satellite delivery of signals to cable systems, which added more channels than were available from terrestrial broadcasters. While satellites and earth stations were very expensive investments, these programming pioneers understood that the costs could be spread over many cable operators who, in turn, serve many subscribers.

Subscribers are offered a variety of video services. The foundation service taken by most subscribers is called basic. Off-air channels, some distant channels, and some satellite-delivered programs are included. The satellite programs include the superstations and some of the specialty channels. Pay television consists of premium channels, usually with movies and some special events, which are offered as optional channels for an extra monthly fee. Some cable systems offer pay-per-view (PPV) programming, which is marketed on a program-by-program basis. Recent movies and special sports events are the mainstay of PPV programming. Impulse pay-per-view (IPPV) allows subscribers to order a program spontaneously, even after it has begun. The ordering mechanism usually involves an automated telephone link or, occasionally, two-way cable. Ways of providing conditional access to allow for a limited selection of service packages at differing price points are often included in the cable system. Simple filters remove the unsubscribed channels in some systems, while elaborate video and audio scrambling mechanisms involving encryption are used in others.

Since cable television systems must use the public right-of-way to install their cables, they, like power, telephone, and gas companies, must obtain a franchise from the local governmental authorities. This is a nonexclusive franchise.

Note: This overview is based on a CableLabs publication, "Cable Television in the United States."

However, experience with multiple cable systems has shown that the economics of the business generally support only one system per community.

The decade of the 1990s introduced major changes into the cable industry. From a technical perspective, the addition of optical fiber improved signal quality, increased bandwidth, enhanced reliability, and reduced operating costs while making two-way cable practical. The second major technical advance was the introduction of digital television signals. Digital video multiplied program capacity dramatically. These two technical advances enabled significant new service opportunities. Near video on demand (NVOD) took advantage of increased signal capacity. True VOD was made possible by the dramatic performance increases and cost reductions in hard-drive storage systems. High-speed cable modem services and telephony became possible as cable operators learned to send digital signals reliably. Cable systems changed from entertainment-only systems into general-purpose, comprehensive communications facilities.

SPECTRUM REUSE

For NTSC, each television channel consumes 6 MHz because of vestigial-sideband amplitude modulation (VSB-AM). Compared with double-sideband amplitude modulation's need for 8.4 MHz, VSB-AM transmits one complete sideband and only a vestige of the other. At the time the standard was created, the design requirements of practical filters determined the amount of sideband included. The consumer's receiver selects the channel to be watched by tuning a 6-MHz portion of the assigned spectrum.

In the terrestrial broadcast environment, channels must be carefully assigned to prevent interference with each other. The result of this process is that most of the terrestrial broadcast television spectrum is vacant. Better television antennas and better television circuits would allow more of the spectrum to be used. However, with nearly 300 million receivers and more than 100 million VCRs in consumers' hands, the changeover to upgraded systems would be difficult, costly, and require something like 20 years. The rest of the terrestrial spectrum not assigned to broadcast has other important uses, including aircraft navigation and communications, emergency communication, and commercial and military applications. The terrestrial spectrum is too limited to supply the video needs of the U.S. viewer.

Cable television is made possible by the technology of coaxial cable. Rigid coaxial cable has a solid aluminum outer tube and a center conductor of copper-clad aluminum. Flexible coaxial cable's outer conductor is a combination of metal foil and braided wire, with a copper-clad steel center conductor. The characteristic impedance of the coaxial cable used in cable television practice is 75 Ω. The well-known principles of transmission-line theory apply fully to cable television technology.

The most important characteristic of coaxial cable is its ability to contain a separate frequency spectrum and respect the properties of that separate spectrum so that it behaves like over-the-air spectrum. This means that a television receiver connected to a cable signal behaves as it does when connected to an antenna. A television set owner can become a cable subscriber without an additional expenditure on consumer electronics equipment. The subscriber can also cancel the subscription and not be left with useless hardware. This ease of entry and exit from an optional video service is a fundamental part of cable's appeal to subscribers.

Since the cable spectrum is tightly sealed inside an aluminum environment (the coaxial cable), a properly installed and maintained cable system can use frequencies assigned for other purposes in the over-the-air environment. This usage takes place without causing interference to these other applications and without having them cause interference to the cable service. New spectrum is "created" inside the cable by the "reuse" of spectrum. In some cable systems, dual cables bring two of these sealed spectra into the subscriber's home, with each cable containing different signals.

The principal negative of coaxial cable is its relatively high loss. Coaxial cable signal loss is a function of its diameter, dielectric construction, temperature, and operating frequency. A ballpark figure is 1 dB of loss per 100 ft.
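As a rough feel for what that loss figure implies, the sketch below scales the 1-dB-per-100-ft ballpark with the square-root frequency dependence noted later in this chapter. The reference frequency is an assumption for illustration, not a cable datasheet value.

```python
import math

LOSS_DB_PER_100FT_REF = 1.0      # ballpark figure from the text
F_REF_MHZ = 200.0                # assumed frequency at which that ballpark applies

def loss_db(length_ft, f_mhz):
    """Approximate attenuation of a coax run, scaling loss ~ sqrt(frequency)."""
    return LOSS_DB_PER_100FT_REF * (length_ft / 100.0) * math.sqrt(f_mhz / F_REF_MHZ)

# A 2000-ft trunk span at the reference frequency loses about 20 dB,
# matching the 20- to 22-dB amplifier spacing discussed later.
print(loss_db(2000, 200))   # -> 20.0 dB
print(loss_db(2000, 550))   # -> ~33 dB: why high-bandwidth upgrades shrink spacing
```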

CABLE NETWORK DESIGN

Since cable television is not a general-purpose communication mechanism, but rather a specialized system for transmitting numerous television channels in a sealed spectrum, the topology or layout of the network can be customized for maximum efficiency.

The topology that has evolved over the years is called tree-and-branch architecture. There are five major parts to a cable system: (1) the headend, (2) the trunk cable, (3) the distribution (or feeder) cable in the neighborhood, (4) the drop cable to the home and in-house wiring, and (5) the terminal equipment (consumer electronics).

Flexible coaxial cable is used to bring the signal to the terminal equipment in the home. In the simplest cases, the terminal equipment is the television set or VCR. If the TV or VCR does not tune all the channels of interest because it is not "cable compatible," a converter is placed between the cable and the TV or VCR tuner. The home is connected to the cable system by the flexible drop cable, typically 150 ft long. The distribution cable in the neighborhood runs past the homes of subscribers. This cable is tapped so that flexible drop cable can be connected to it and routed to the residence. The distribution cable interfaces with the trunk cable through an amplifier called a bridge amplifier, which increases the signal level for delivery to multiple homes. One or two specialized amplifiers called line extenders are included in each distribution cable. Approximately 40 percent of the system's cable footage is in the distribution portion of the plant, and 45 percent is in the flexible drops to the home.

The trunk part of the cable system transports the signals to the neighborhood. Its primary goal is to cover distance while preserving the quality of the signal in a cost-effective manner. Broadband amplifiers are required about every 2000 ft, depending on the bandwidth of the system. The maximum number of amplifiers that can be placed in a run, or cascade, is limited by the buildup of noise and distortion. Twenty or 30 amplifiers may be cascaded in relatively high-bandwidth applications. Older cable systems with fewer channels may have as many as 50 or 60 amplifiers in cascade. Approximately 14 percent of a cable system's footage is in the trunk part of the system. The headend is the origination point for signals in the cable system. When the whole picture is assembled, the tree shape of the topology is evident: the trunk and its branches become visible.

The cable industry had an early interest in fiber optics. Since signal losses in optical fiber are orders of magnitude lower than signal losses in coaxial cable, power-consuming amplifiers in the trunk became unnecessary. Initially, laser technology was the most significant challenge to implementing fiber optics in cable systems: the laser had to have extreme linearity in order to avoid distortions of the broadband analog signal. (This degree of linearity is not necessary in a digital transmission system.) Once the process of producing sufficiently linear lasers was mastered, the rapid rollout of optical fiber in cable systems began.

Coaxial cable design had become an optimized, well-understood process prior to the introduction of fiber. Optical fiber offered a wide variety of options that caused cable system design to become much more complex. Many approaches were tried, and as the cost of the various components came down, new approaches became possible. The technique has generally become known as hybrid fiber coax (HFC). Optical fiber is relatively inexpensive. It is supplied in a plastic protective sheath that is a bit more expensive on a per-foot basis, but the most expensive element is the cost of installation.
As a result, when optical fiber is installed, multiple strands of unused fiber are included in anticipation of future needs. These fibers are called "dark fiber" since they have not yet had lasers installed. As future growth in subscribers and services leads to demands for more bandwidth, these fibers are activated.

The point where the optical fiber ends and the coaxial distribution begins is called a "node." The node serves a group of subscribers. The size of the group depends on the demographics and the philosophy of the system designer, and it ranges from a few hundred to a few thousand homes. "Node splitting" is the process of activating dark fibers to create additional, smaller nodes out of a larger node. This is done when additional upstream signal-return capacity is required or when additional downstream frequency reuse is needed.

Downstream frequency reuse is an important concept, giving rise to one of cable's most sustainable competitive advantages over other communications systems. Since cable trunks or fibers extend from the cable headend into neighborhoods, different programming can be placed on the same frequencies. This technique greatly expands the reach of cable's spectrum.

There are services that dedicate signals to groups of subscribers or to individual subscribers, including VOD, high-speed cable modem services, and telephony. These services do not fit the broadcast model and require signals directed to individual locations. Nodalization, the process of splitting larger nodes into smaller groups, makes this possible with limited spectrum capacity. As these services increase in penetration, they burden the assigned frequencies that are shared among the subscribers connected to a node. Two choices are possible: assign more channels to the service or split the node. Nodalization is thus a means of creating additional capacity when more spectrum is unavailable.

SIGNAL QUALITY

The ultimate goal of the cable system is to deliver pictures of adequate quality at an acceptable price while satisfying stockholders, investors, and holders of the debt generated by the huge capital expense of building the cable plant. This is a difficult balancing act. It would be a simple matter to deliver very high-quality video if cost were not a consideration. Experience teaches that subscriber satisfaction is a complex function of a variety of factors led by program quality and variety, reliability of service, video and sound quality, and the amount of the subscriber's cable bill.

The principal picture impairments can be divided into two categories, coherent and noncoherent. Coherent impairments result in a recognizable interfering pattern or picture. They tend to be more objectionable than noncoherent impairments of equal strength. The principal noncoherent picture impairment is noise. Random-noise behavior is a well-understood part of general communication theory. The familiar Boltzmann relationship, noise-figure concepts, and the like apply fully to cable television technology. Random noise is the consequence of the statistical nature of the movement of electric charges in conductors, which creates a signal of its own. This noise is inescapable. If the intended analog signal is ever allowed to become weak enough to be comparable to the noise signal, it will be polluted by it, yielding a snowy pattern in pictures and a seashore-like background sound in audio.

Noise levels are expressed in cable system practice as the ratio of the video carrier to the noise in a television channel. This measure is called the carrier-to-noise ratio (CNR) and is given in decibels (dB). The target value for CNR is 45 to 46 dB. Noise in the picture, called snow, is just visible when CNR is 42 to 44 dB. Snow is objectionable at CNRs of 40 to 41 dB.

Coherent interference includes ingress of video signals into the cable system, reflections of the signal from transmission-line impedance discontinuities, cross modulation of video, and intermodulation of the carriers in the video signal. The latter phenomenon gives rise to patterns on the screen called beats; these patterns often look like moving diagonal bars or herringbones.

The evaluation of signal quality takes place on two planes, objective and subjective. In the objective arena, measurements of electrical parameters are used. These measurements are repeatable. Standardized, automated test equipment has been developed and accepted by the video industries. Table 22.3.1 lists the parameters usually considered important and the values of good current practice.

TABLE 22.3.1 Signal Quality Target Values

Parameter                  Symbol   Value
Carrier-to-noise ratio     C/N      46 dB
Composite second order     CSO      −53 dB
Composite triple beat      CTB      −53 dB
Signal level at TV                  0 to +3 dBmV

The ultimate performance evaluation involves the subjective reaction of viewers. One example of the difficulties experienced is the fact that different frequencies of noise have differing levels of irritation: high-frequency noise tends to become invisible, while low-frequency noise creates large moving blobs, which are highly objectionable. Subjective reactions to these phenomena are influenced by such factors as the age, gender, health, and attitude of the viewer. The nature of the video program, the characteristics of the viewing equipment, and the viewing conditions also affect the result.
As time progresses, the level of performance of consumer electronics will continue to increase. As ATV and HDTV are introduced, still more demands will be made on cable system performance. The trend to larger screen sizes also makes video impairments more evident.

DIGITAL VIDEO

In the 1980s, the over-the-air broadcast industry attempted to upgrade its services by proposing high-definition television (HDTV). The HDTV baseband analog signal coming from a television camera has substantially greater bandwidth than the NTSC baseband signal, and digitizing the signal results in even greater bandwidth requirements.

This made complying with the FCC's 6-MHz channel requirement impossible without further signal processing. Fortunately, the digital nature of the signal enabled massive data compression. The principal feature of digital signal processing that enables this compression is inexpensive large-scale memory, which does not exist in analog implementations. Since video consists of a series of 30 complete still pictures every second, in which usually only small portions of the image change from picture to picture, the unchanging portions can be stored in digital memory and only the differences need to be transmitted. This saves a tremendous amount of bandwidth. For example, if the image is a "talking head" against a stationary background, once the background is transmitted, it can be stored and further retransmission minimized or avoided. Only the relatively small fraction of the picture that changes needs to be transmitted. Also, much of the motion in images consists of the horizontal or vertical translation of elements of the picture, so rather than resending those elements, an instruction to relocate them suffices.

Even more compression is possible by taking advantage of the human visual perception mechanism. Humans are more sensitive to luminance detail than to color detail, and perception of detail diminishes if the object is in motion. Such factors enable still more compression of the visual data. The overall consequence is that only about 2 percent of the data generated by digitizing the analog signal is required for excellent reconstruction of the video at the receiver.

The observation was made that if an HDTV signal could be compressed by digital means to fit into a 6-MHz spectral slot, multiple standard-definition TV (SDTV) signals could be placed in that same slot. This development has become commercially much more important than HDTV. The multiplication of capacity by digital compression made direct broadcast satellite (DBS) practical. DBS with a few dozen analog channels simply did not have enough variety to be a realistic competitor to 50, 70, or 100 cable channels. However, digital compression enabled DBS to offer several hundred channels, far more than cable in analog format. DBS could also offer NVOD. The staggered starting of movies on multiple channels reduced the waiting time for a movie: a 2-h movie assigned eight program streams requires a maximum wait of just 15 min, averaging 7.5 min. Several tens of movie titles offered at various staggered starts yielded a compelling service.

The need to launch the digital DBS service in a timely manner prevented waiting for the complete resolution of the details of the broadcast digital television standard. Consequently, the digital broadcast compression standard and the DBS compression system are very similar, but not completely identical. The competitive threat of DBS forced a response from cable that had to be implemented before the broadcast standard was complete. This resulted in yet a third digital approach that was mostly, but not completely, the same as the broadcast standard. The broadcast signal environment is more difficult than that found in cable, and both are very different from the satellite system. Thus there are three different, incompatible modulation techniques. The satellite transmitter is a saturation device that is either on or off; its constant-envelope QPSK modulation conveys two digital bits per symbol, i.e., per time element (about one information bit after error-correction coding).
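The store-and-send-differences idea described above can be made concrete with a toy frame-differencing codec. This illustrates only the principle; real MPEG encoders use motion-compensated blocks and transform coding rather than raw pixel differences.

```python
import numpy as np

def encode_frame(prev, curr, threshold=0):
    """Return (indices, values) of only the pixels that changed since `prev`."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    idx = np.flatnonzero(changed)
    return idx, curr.reshape(-1)[idx]

def decode_frame(prev, idx, values):
    """Receiver side: patch the changed pixels onto the stored previous frame."""
    out = prev.copy().reshape(-1)
    out[idx] = values
    return out.reshape(prev.shape)

prev = np.zeros((480, 640), dtype=np.uint8)   # stationary background, already stored
curr = prev.copy()
curr[200:240, 300:360] = 255                  # small moving region ("talking head")

idx, vals = encode_frame(prev, curr)
assert np.array_equal(decode_frame(prev, idx, vals), curr)
print(len(idx) / curr.size)                   # fraction of pixels sent (~0.008 here)
```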
The broadcast signal must deal with multipath and low signal strengths at great distances from the transmitter. The broadcast modulation scheme is called 8-VSB (vestigial sideband) and transmits three digital bits per symbol. Cable's signal environment has essentially no multipath and only modest signal-strength differences between channels. Consequently, the cable signal can carry many more digital bits per symbol. Two modulation schemes are common: 64 QAM (quadrature amplitude modulation), carrying six bits per symbol (about five after error-correction overhead), and 256 QAM, conveying eight bits per symbol (about seven after overhead). QAM is very bandwidth efficient. All of these methods include extensive error-detection and error-correction coding, with the broadcast signal the most heavily protected. These powerful techniques allow for the recovery of transmission errors.

An important trade-off exists between image quality and program quantity. The more of the data capacity in a 6-MHz channel that is assigned to one program, the better its image quality. Five or six Mb/s (megabits per second) yields essentially perfect video under nearly all circumstances, but some types of video provide acceptable results with just 1.5 to 2 Mb/s. The decoder at the receiver that converts the bits to the analog signals necessary to drive the display device can accommodate any of these bit rates, so the choice is made at the point of origination of the video. A 256 QAM system has a data payload of about 38 Mb/s. Thus six program streams of 6 Mb/s can be carried, or double that number at 3 Mb/s. The bit rate does not have to be the same for all streams or at all times. The data capacity can be allocated based on the type of programming and changed as needed.

A further efficiency technique called statistical multiplexing can be employed. Statistical multiplexing averages out the bit-capacity assignments depending on the needs of the multiple program streams sent in a 6-MHz channel. Thus when one program has high detail or high action or both, it is temporarily assigned the bits it needs while other, less demanding channels make a contribution. This allotment changes from moment to moment depending on the needs of the program streams. Statistically, better quality and perhaps an additional program stream can be accommodated in this manner. The same techniques can be applied in satellite and broadcast.
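A statistical multiplexer can be sketched as a simple proportional allocator. The allocation policy below is invented for illustration; real encoders use closed-loop rate control fed back from each video encoder.

```python
PAYLOAD_MBPS = 38.0            # approximate 256-QAM payload from the text
MIN_MBPS, MAX_MBPS = 1.5, 6.0  # per-stream bounds suggested by the text

def allocate(demands):
    """Split the channel payload among streams in proportion to momentary
    complexity, clamped to sensible per-stream limits. Clamping can leave
    headroom unused; a real multiplexer would redistribute it."""
    total = sum(demands)
    raw = [PAYLOAD_MBPS * d / total for d in demands]
    return [round(min(max(r, MIN_MBPS), MAX_MBPS), 2) for r in raw]

# Eight streams: one high-action sports feed, seven talking heads.
print(allocate([5, 1, 1, 1, 1, 1, 1, 1]))
# -> the busy stream gets the 6.0-Mb/s cap; the rest get ~3.2 Mb/s each
```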

Because of the more difficult signal conditions in broadcast, only four to six program streams can be accommodated in the broadcast 6 MHz.

The video compression method used by broadcast, cable, and satellite is called MPEG after the committee that standardized it, the Moving Picture Experts Group. MPEG is actually a collection of standards that can be applied to fit a variety of applications. The standard is asymmetrical in that the equipment necessary to encode the signal is much more complex than the hardware needed to decode it. This makes very good economic sense, since there will be many more receiving devices than transmitters. A further important feature is that improvements in the encoder result in better received video without the need to modify the decoder. So as more processing power becomes affordable in encoders and as the techniques evolve, improved results are obtained in systems with previously installed receivers. In some cases, the bit rate can be reduced as well.

While the United States has selected 8-VSB as its broadcast modulation scheme, much of the rest of the world has selected versions of QAM. Frustratingly, the opportunity for much more signal commonality was not achieved. Signal differences prevent a worldwide receiver that can function on cable, broadcast, and satellite anywhere in the world; multiple subsystems are required. Fortunately, the digital portion of the signal allows relatively easy interconversion between standards.

Techniques have been developed for delivering 3 to 6 Mb/s of data in the analog television signal without significant interference to that signal. These developments might have made the FCC's original hope of a compatible system possible, at least on cable, but the standards are now too entrenched. However, these methods can be used to add either data or encoded video to an analog signal. The auxiliary service of transmitting data in either an analog or a digital television signal has been called datacasting. A wide variety of services are proposed for this technique, including data delivery to computers, additional video signals, downloading to hard drives, and other services.

CABLE SYSTEM TRADE-OFFS

The experienced cable system designer has learned how to balance noise, nonlinear distortions, and cost to find a near-optimal design. Signals in cable systems are measured in decibels relative to 1 mV across 75 Ω. This measure is called dBmV. Applying the well-known Boltzmann noise equation to 75-Ω cable systems yields an open-circuit noise voltage of 2.2 μV in a 4-MHz bandwidth at room temperature. When terminated in a matched load, the result is 1.1 μV. Expressed in dBmV, the minimum room-temperature noise in a perfect cable system is −59.17 dBmV.

Starting at the home, the objective is to deliver at least 0 dBmV, but no more than 10 dBmV, to the terminal on the television receiver. Lower levels produce snowy pictures; higher levels overload the television receiver's tuner, resulting in cross modulation of the channels. If a converter or descrambler is used, its noise figure must be taken into account. There are two reasons for staying toward the low side of the signal range: cost, and the minimization of interference in the event of a signal leak caused by a faulty connector, a damaged piece of cable, or a defect in the television receiver. Low signal levels may, however, cause poor pictures for the subscriber who insists on splitting the signal in the home to serve multiple receivers.

Working back up the plant, a signal level of 10 to 15 dBmV is needed at the tap to compensate for losses in the drop cable. The design objectives of the distribution part of the cable system involve an adequate level of power, not only to overcome the attenuation of the cable but to allow energy to be diverted to subscribers' premises. Energy diverted to the subscriber is lost from the distribution cable. This loss is called flat loss because it is independent of frequency. Loss in the cable itself is a square-root function of frequency and is therefore contrasted with flat loss. Because of flat losses, relatively high power levels are required in the distribution part of the plant, typically 48 dBmV at the input to the distribution plant. These levels force the amplifiers in the distribution part of the plant into regions of their transfer characteristics that are slightly nonlinear. As a result, only one or two amplifiers, called line extenders, can be cascaded in the distribution part of the plant. These amplifiers are spaced 300 to 900 ft apart, depending on the number of taps required by the density of homes.

Because the distribution part of the plant is operated at higher power levels, nonlinear effects become important. The television signal has three principal carriers: the video carrier, the audio carrier, and the color subcarrier. These concentrations of energy in the frequency domain give rise to a wide range of "beats" when passed through nonlinearities. To minimize these effects, the audio carrier is attenuated about 15 dB below the video carrier.
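The quoted noise floor can be checked directly from the Boltzmann relationship, v = sqrt(4kTRB) for the open-circuit thermal noise voltage; a short sketch:

```python
import math

k = 1.380_649e-23      # Boltzmann constant, J/K
T = 293.0              # room temperature (~20 deg C), K
R = 75.0               # cable system impedance, ohms
B = 4.0e6              # NTSC video noise bandwidth, Hz

v_open = math.sqrt(4 * k * T * R * B)    # open-circuit thermal noise voltage
v_term = v_open / 2                      # matched termination halves the voltage
dbmv = 20 * math.log10(v_term / 1e-3)    # dB relative to 1 mV

print(f"{v_open*1e6:.2f} uV open, {v_term*1e6:.2f} uV terminated, {dbmv:.2f} dBmV")
# -> 2.20 uV open, 1.10 uV terminated, -59.17 dBmV (the noise floor quoted above)
```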

When cable systems carried only the 12 VHF channels, second-order distortions created spectral products that fell outside the frequency band of interest. As channels were added to fill the spectrum from 54 MHz to as much as 650 MHz, second-order effects were minimized through the use of balanced, push-pull output circuits in amplifiers. The third-order component of the transfer characteristic dominates in many of these designs. The total effect of all the carriers beating against each other gives rise to an interference called composite triple beat (CTB). In an older 35-channel cable system, about 10,000 beat products are created; more channels create more beats. Channel 11 suffers the most, with 350 of these products falling in its video. Third-order distortions increase about 6 dB for each doubling of the number of amplifiers in cascade. A 1-dB reduction in amplifier output level will generally improve CTB by 2 dB. If these products build to visible levels, diagonal lines will be seen moving through the picture. When these components fall in the part of the spectrum that conveys color information, spurious rainbows appear.

The design objective of the trunk part of the cable system is to move the signal over substantial distances with minimal degradation. Because distances are significant, lower-loss cables are used. One-inch and 0.75-in.-diameter cable is common in the trunk, while 0.5-in. cable is found in the distribution. Signal levels in the trunk at an amplifier's output are 30 to 32 dBmV, depending on the equipment used. Cable trunk is rapidly being replaced by fiber; no new cable systems are being built with coax in the trunk.

It has been determined through analysis and confirmed through experience that optimum noise performance is obtained when the signal is not allowed to be attenuated more than about 20 to 22 dB before being amplified again. Amplifiers are said to be "spaced" by 20 dB. The actual distance in feet is a function of the maximum frequency carried and the cable's attenuation characteristic. Modern high-bandwidth cable systems have their amplifiers fewer feet apart than older systems with fewer channels. Since attenuation varies with frequency, the spectrum in coaxial cable develops a slope; this is compensated with equalization networks in the amplifier housings.

Since the signal is not repeatedly tapped off in the trunk part of the system, high power levels are not required to feed splitting losses. As a result, signal levels are lower than in the distribution portion of the plant; typical levels are about 30 dBmV. For the most part, the amplifiers of the trunk are operated within their linear regions. The principal challenge of trunk design is keeping noise under control. Each doubling of the number of amplifiers in the cascade results in a 3-dB decrease in the CNR at the end of the cascade and a 6-dB increase in the amount of CTB. If the noise at the end of the cascade is unacceptable, the choices are to employ lower-noise amplifiers, shorter cascades, or a different technology such as microwave links or fiber-optic links.
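The cascade rules of thumb above (CNR worsens 3 dB and CTB worsens 6 dB per doubling of amplifiers; a 1-dB output-level cut buys about 2 dB of CTB) compose neatly into a small budget calculator. The single-amplifier figures below are assumptions for illustration, not measured device data.

```python
import math

CNR_SINGLE_DB = 60.0      # assumed CNR contribution of one amplifier
CTB_SINGLE_DB = -90.0     # assumed CTB of one amplifier at reference level

def cascade_cnr(n):
    """CNR at the end of n identical amplifiers: -3 dB per doubling."""
    return CNR_SINGLE_DB - 3.0 * math.log2(n)

def cascade_ctb(n, level_reduction_db=0.0):
    """CTB after n amplifiers: +6 dB per doubling (worse, since CTB adds
    on a voltage basis), improved ~2 dB per 1-dB output-level reduction."""
    return CTB_SINGLE_DB + 6.0 * math.log2(n) - 2.0 * level_reduction_db

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(cascade_cnr(n), 1), round(cascade_ctb(n), 1))
# 32 amplifiers: CNR 45 dB (near the 46-dB target), CTB -60 dB
```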

TECHNICAL DETAIL

The balance of this chapter concentrates in more detail on issues relating to the technical performance of cable systems. Some generalizations have been made in order to group explanations. This section is intended to serve as a briefing, so selective trade-offs were made on the amount of detail given. There are always exceptions, for cable systems do not fall neatly into clear types. A reading list is provided. The following topics will be covered: channel capacities and system layouts, FCC spectrum regulation, means of increasing channel capacity, scrambling methods, the interface between the cable and the customer's equipment, and fiber in cable television practice.

Channel Carriage Capacity

Channel carriage capacity is based on radio-frequency (RF) bandwidth. It is a useful characteristic for classifying cable systems; there are three types of system (Table 22.3.2). Systems are categorized by their highest operating frequency. Downstream signals are transmitted to the customers' homes. A cable system configuration consists of: (1) the headend (the signal reception, origination, and modulation point); (2) main trunk (or tree) cable, which is coaxial in older systems and fiber in new and upgraded systems and runs through the central streets in communities; (3) coaxial distribution (branch) cable to the customer's neighborhood, including distribution taps; (4) subscriber drops to the house; and (5) subscriber terminal equipment (television sets, converter/descramblers, VCRs, and so forth). Distribution plant is sometimes called feeder plant.

TABLE 22.3.2 Classification of Cable Systems

Class    Bandwidth (MHz)   Operating frequencies (MHz)   Number of channels
Small    170               50–220                        12–22 (single coax)
Medium   220               50–270                        30 (single coax)
         280               50–330                        40 (single coax)
Large    350               50–400                        52 (single coax)/104 (dual coax)
         400               50–450                        60 (single coax)/120 (dual coax)
         500               50–550                        80 (single coax)
         700               50–750                        116 (single coax)
         950               50–1000 (1 GHz)               158 (single coax)

Programming comes to the headend by satellite signals, off-air signals from broadcast stations, and signals imported via terrestrial microwave. Signals originating at the headend come from a co-located studio facility, VCRs, character generators, or commercial-insertion equipment. Plant mileage is calculated using the combined miles of strand that support the coaxial cables in the air and the footage of trenches where cables are installed in the ground. There are about a million miles of plant in more than 11,000 U.S. cable systems. Extension cables, or drops, interconnect main coaxial plant lines to customers' homes.

Systems of 220 MHz built 15 or more years ago are found in rural areas or in areas with clusters of smaller established communities. Some of these systems operate trunk lines running over 20 mi with 50 or more amplifiers in cascade. Total plant mileage for an average 220-MHz system extends from 50 to 500 mi, serving up to 15,000 cable customers. New construction of 220-MHz systems occurs only where there are small numbers of potential customers (no more than 300) and where plant mileage does not exceed 10 mi.

Medium-capacity cable systems operate with upper frequencies of 270 and 330 MHz, and total bandwidths of 220 and 280 MHz, respectively. The 270-MHz systems deliver 30 channels, while 330-MHz systems deliver 40. Although new cable systems are seldom built with 40-channel capacity, plant extensions to existing 270-, 300-, and 330-MHz systems occur. Electronic upgrade is frequently employed to increase 270-MHz systems to 330 MHz, and some 220-MHz systems are upgrading to 300 MHz. Medium-capacity systems account for about 75 percent of total plant mileage. They serve a wide range of communities, from rural areas (populations of 5000 to 50,000) to some of the largest systems built in the late 1970s.

Large-capacity cable systems achieve high channel capacities through extended operating frequencies and through the installation of dual, co-located coaxial cables. Single-coaxial-cable systems range from 54-channel 400-MHz plant to 80-channel 550-MHz plant. With dual cable, it is not unusual to find 120 channels. Large-capacity cable systems account for about 15 percent of total cable plant miles. They are primarily high-tech systems designed for large urban areas previously not cabled. Recently, cable systems extending to 750 MHz have been built. Three cable systems with 1-GHz bandwidths have been constructed to date, the first in Queens, New York. That system, called Quantum, introduced NVOD for the first time to U.S. subscribers. Many cable systems are now constructed so that they can easily be upgraded to 750 MHz or 1 GHz, and it is not uncommon to use GHz-rated passive components and amplifier housings to facilitate a later upgrade.

Large-capacity systems are designed, and some operate, as two-way cable plant. In addition to the downstream signals to the customers (50 MHz to the upper band edge), upstream signals are carried from customers to the cable central headend, or hub node, using frequencies between 5 and 30 MHz.

Channelization

There are three channelization plans to standardize the frequencies of channels. The first plan evolved from the frequency assignments that the Federal Communications Commission (FCC) issued to VHF television broadcast stations; it is called the standard assignment plan. The second channelization plan is achieved by phase-locking the television channel carriers and is called the incrementally related carriers (IRC) plan.

The IRC plan was developed to minimize the effects of third-order distortions generated by repeated amplification of the television signals as they pass through the cable plant. As channel capacities increased beyond 36 channels, composite third-order distortions became the limiting distortion.

The third channelization type is the harmonically related carriers (HRC) plan. It differs from the standard and IRC plans by lowering carrier frequencies by 1.25 MHz. With HRC, carriers are phase-locked and fall on integer multiples of 6 MHz, starting with channel 2 at 54 MHz. This plan was created to further reduce the visible impact of amplifier distortions.

FM radio services are carried at an amplitude 15 to 17 dB below Channel 6's video-carrier level. The services are carried on cable in the FM-band slot of 88 to 108 MHz. In an IRC channel plan, Channel 6's aural carrier falls at 89.75 MHz, which reduces the available FM band to 90 to 108 MHz. Low-speed data carriers are transmitted in the FM band or in the guard band between Channels 4 and 5 in a standard frequency plan. The amplitude of these carriers is at least 15 dB below the closest video-carrier level.

Frequencies Under Regulation

FCC rules and regulations govern the downstream cable frequencies that overlap the over-the-air frequencies used by the Federal Aviation Administration (FAA): 108 to 137 MHz and 225 to 400 MHz. These frequencies are used by the FAA for aeronautical voice communications and navigational information. Since cable plant is not a perfectly sealed system, the FCC and FAA want to maintain a frequency separation between signals carried on cable and frequencies used by airports near the cable system boundaries. In 400-MHz systems, over 30 channels are affected by the FCC rules on frequency offset and related operating conditions.

Effects of the FCC Rules

The maximum unregulated carrier power level rule has been reassessed and changed. The previous limit of 1 × 10⁻⁵ W (28.75 dBmV) has been raised to 1 × 10⁻⁴ W (38.75 dBmV). Carriers with power levels below 38.75 dBmV are not required to follow the frequency-separation and stability criteria. Carriers within ±50 kHz of 156.8 MHz, ±50 kHz of 243 MHz, or ±100 kHz of 121.5 MHz, which are emergency distress frequencies, must be operated at levels no greater than 28.75 dBmV at any point in the cable system.

Increasing Channel Capacity

There are several ways to increase channel capacity. If the actual cable is in good condition, channel capacity is upgraded by modifying or replacing the trunk and distribution amplifiers. If the cable has seriously deteriorated, the cable plant is completely rebuilt.

Upgrades (Retrofitting) and Rebuilds

An upgrade is defined as a plant rehabilitation process that results in the exchange or modification of amplifiers and passive devices (such as line splitters, directional couplers, and customer multitaps). A simple upgrade requires new amplifier circuit units called hybrids; a full upgrade replaces all devices in the system. In an upgrade project, most of the cable is retained. Goals of an upgrade project include increasing the plant's channel capacity and expanding the system to outlying geographic areas. New amplifier technology, such as feedforward and/or power-doubling circuitry, and advances in amplifier performance have greatly enhanced the technical and financial viability of upgrades. Upgrades are often the least expensive solution to providing expanded service. In a rebuild, the outside plant is replaced.
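Two items above lend themselves to a small worked check: the HRC/IRC carrier combs and the emergency-frequency power rule. The sketch below encodes only the facts stated in the text; the slot count and printing are illustrative.

```python
# Protected aeronautical distress frequencies and their offsets (MHz).
PROTECTED = [(121.5, 0.100), (156.8, 0.050), (243.0, 0.050)]

def hrc_visual(slot):
    """HRC visual carriers fall on multiples of 6 MHz; slot 0 = channel 2 at 54 MHz."""
    return 54.0 + 6.0 * slot

def irc_visual(slot):
    """IRC carriers are phase-locked 1.25 MHz above the HRC comb
    (slot 0 lands on 55.25 MHz, the standard-plan channel 2 visual carrier)."""
    return hrc_visual(slot) + 1.25

def needs_power_limit(f_mhz):
    """True if a carrier sits inside a protected offset and must stay
    at or below 28.75 dBmV."""
    return any(abs(f_mhz - f0) <= tol for f0, tol in PROTECTED)

for slot in range(60):                        # 54-414 MHz, spanning the FAA bands
    for f in (hrc_visual(slot), irc_visual(slot)):
        if needs_power_limit(f):
            print(f"{f:.2f} MHz requires the 28.75-dBmV limit")
# Running this prints nothing: neither comb lands inside a protected window.
```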
System Distortion and System Maintenance

Constraints on the design and implementation of cable systems are imposed by each device used to transport or otherwise process the television signal. Each active device adds small distortions and noise to the signal.

Even passive devices contribute noise. The distortions and noise compound, so that with each additional device the signal becomes less perfect. Any nonlinear device, even a bimetallic junction, causes distortions. The primary contributors are the slight nonlinearities of amplifiers. Because the amplifiers are connected in cascade, the slight damage to the signal accumulates.

Noise in any electronic system can come from many sources. The major source is the random thermal movement of electrons in resistive components. For a cable system at 20°C (68°F), the thermal noise voltage in a 4-MHz bandwidth is 1.1 µV, or -59.1 dBmV. This is the minimum noise level, or noise floor. Noise contributions from amplifiers add on a power basis, with the noise level increasing 3 dB for each doubling of the number of identical amplifiers in cascade. Eventually, the noise increases to objectionable levels. The difference between the RF peak level and the noise level is measured to quantify the degree of interference of the noise power; the power levels in watts are compared as a ratio, called the signal-to-noise ratio (SNR). In a cable system, the apparent effect of noise is its interference with the video portion of the TV channel. The noise level is compared with the video carrier level and called the carrier-to-noise ratio (CNR). As the CNR decreases, the interference of noise with the signal becomes visible as a random fuzziness, called snow, that can overwhelm the picture resolution and contrast. The point where the picture becomes objectionably noisy to viewers is at a CNR of about 40 dB; in well-designed systems, the CNR is maintained at 46 dB.

While an increase in signal level would improve the CNR, there can be no level increase without increases in distortions. The distortion products of the solid-state devices used in cable amplifiers are a function of output levels and bandwidths: the higher the signal level, the greater the distortion products produced. Modern amplifiers use balanced configurations, which almost completely cancel the distortion caused by the squared term of the amplifier's transfer characteristic. The dominant remaining distortions, called triple beats, are caused by the cubed term. Because distortion products add on a voltage basis, the composite triple beat (CTB) to carrier ratio degrades by 6 dB for each doubling of the number of amplifiers in cascade, whereas the CNR decreases by 3 dB for each doubling. As signal levels are increased in the distribution sections, additional allowances must be made in the system design. As a rule of thumb, CNR is determined primarily by the conditions of trunk operation, and the signal-to-distortion ratio (SDR) primarily by the conditions of distribution operation.

Two other factors limit the geography of a cable system. First, cable attenuation rises with increasing frequency, so more equal-gain amplifiers are required to transmit the signal a given distance, while noise limits the maximum number of amplifiers that can be used. Second, amplifier distortion is a function of channel loading: the more channels carried, the greater the distortions.
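The two per-doubling rules in this passage (noise power adding 3 dB, composite triple beat worsening 6 dB) are easy to capture numerically. A minimal sketch, with assumed single-amplifier figures chosen purely for illustration:

    import math

    # Cascade degradation for n identical amplifiers, per the rules above:
    # noise adds on a power basis (CNR falls 10*log10(n), 3 dB per doubling);
    # CTB products add on a voltage basis (carrier-to-CTB falls 20*log10(n),
    # 6 dB per doubling). Single-amplifier values are assumed, not from text.
    def cascade_cnr_ctb(cnr1_db, ctb1_db, n):
        return (cnr1_db - 10.0 * math.log10(n),
                ctb1_db - 20.0 * math.log10(n))

    for n in (1, 2, 4, 8, 16, 32):
        cnr, ctb = cascade_cnr_ctb(60.0, 80.0, n)   # assumed 60-dB CNR, 80-dB C/CTB
        print(n, round(cnr, 1), round(ctb, 1))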

System Reflections

Signal reflections occur throughout the cable plant and are called microreflections. They are caused by individual slight errors in impedance match. The severity of a mismatch is measured by the magnitude of the return-loss ratio: the larger the return loss, the better, and a perfect match has infinite return loss. Mismatches include connectors, splices, and even damage to the cable itself. Microreflections are likely to be a serious problem for ATV transmission because of the digital nature of the ATV signal.
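Return loss follows directly from the impedance mismatch via the standard transmission-line relation; the sketch below is generic, not a value or method from this handbook:

    import math

    # Return loss of an impedance mismatch on a 75-ohm plant.
    # Gamma = (Z - Z0)/(Z + Z0); RL = -20*log10(|Gamma|). A perfect match
    # (Z = Z0) gives Gamma = 0, i.e., infinite return loss.
    def return_loss_db(z_ohms, z0_ohms=75.0):
        gamma = abs((z_ohms - z0_ohms) / (z_ohms + z0_ohms))
        return float("inf") if gamma == 0.0 else -20.0 * math.log10(gamma)

    for z in (75.0, 80.0, 100.0, 150.0):
        print(z, round(return_loss_db(z), 1))   # 75 ohms -> inf; 80 -> ~29.8 dB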

Phase Noise

Phase noise is added to the original signal through modulation and frequency-conversion processes. A significant amount of phase noise must be added to the video carrier before the impairments it generates become perceptible. Narrowband phase noise (measured 20 kHz from the video carrier) in a TV channel produces variations in the luminance and chrominance levels that appear as an extremely grainy pattern within the picture. The perceptibility threshold for phase noise on the video carrier is 53 dB below the carrier, measured at 20 kHz. If the frequency-conversion or modulation processes are operating close to specification, phase-noise impairments should not be perceptible on the customer's TV unless the converter/descrambler is malfunctioning or of poor quality.

Amplifier Distortions and Their Effects

New amplifier technology based on feedforward and power-doubling techniques increases power levels with fewer distortions. However, these amplifiers create additional sources of minutely delayed signals. The signal delays produced in these amplifiers have end results in picture degradation similar to those of the delayed signals generated by reflections in the cable plant, but they are caused by a different mechanism. These amplifiers use parallel amplification technology: the signals are split, separately amplified, and then recombined. In a feedforward amplifier, the signals are purposely processed with delay lines. If the propagation time is not identical through each of the amplifier's parallel circuits, signals delayed by different amounts of time are recombined. In most circumstances the differential delay is small and will not produce a visible ghost, but it may cause loss of picture crispness. Since the hybrids used in these amplifiers are normally provided in matched pairs or in a single hybrid package, these delays are a problem only when the hybrids are not replaced as a matched set.

In systems that carry more than 30 channels, CTB is the limiting distortion. However, cross-modulation (X-MOD) distortion, which is often the limiting factor in systems with fewer than 30 channels, can reappear as the controlling factor in dictating system design. The HRC and IRC channelization plans discussed earlier were developed to minimize the visible degradation in picture quality caused by CTB.

X-MOD is one of the easiest distortions to identify visually. Moderate X-MOD appears as horizontal and vertical synchronizing bars that move across the screen; in severe cases, the video of multiple channels is visible in the background. Moderate CTB is the most misleading distortion, since it appears as slightly noisy pictures, and most technicians conclude that there are low signal levels and CNR problems. CTB becomes visible as amplifier operating levels exceed the design parameters. Once CTB reaches a severe stage, it becomes more readily identifiable because it causes considerable streaking in the picture.

Composite second-order (CSO) beats can become a limiting factor in systems that carry 60 or more channels and use the HRC or IRC channelization plans. This distortion appears as a fuzzy herringbone pattern on the television screen. The CSO beats fall approximately 0.75 and 1.25 MHz above the video carrier in a television channel. An IRC channelization frequency-locks these beats together while increasing their amplitude relative to the carrier level.

Hum modulation caused by the 60-Hz amplifier powering is identified by its characteristic horizontal bar that rolls through the picture. If the hum modulation is caused by a lack of ripple filtering in the amplifier power supply, it appears as two equally spaced horizontal bars that roll through the picture.
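Because second-order products fall at sums and differences of carrier pairs, their positions relative to the channel carriers can be enumerated directly. The sketch below assumes an idealized IRC-like comb of carriers at 6-MHz spacing plus a 1.25-MHz offset, which is a simplification of the plans described above:

    from collections import Counter

    # Enumerate second-order beat offsets for an idealized comb of carriers at
    # f_n = 6n + 1.25 MHz. Sums f_i + f_j land 1.25 MHz above a carrier;
    # differences f_j - f_i land 1.25 MHz below a carrier.
    carriers = [6.0 * n + 1.25 for n in range(9, 60)]   # ~55.25 to ~355.25 MHz
    offsets = Counter()
    for i, f1 in enumerate(carriers):
        for f2 in carriers[i + 1:]:
            for beat in (f1 + f2, f2 - f1):
                nearest = round((beat - 1.25) / 6.0) * 6.0 + 1.25
                offsets[round(beat - nearest, 2)] += 1
    print(offsets)   # expect clusters at +1.25 and -1.25 MHz from the carriers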

Frequency Bands Affected by Radio Frequency Interference

Discrete beat products can be difficult to identify from the displayed picture impairment. Radio frequency interference that leaks into the cable system from nearby RF transmitters causes spurious carriers to fall within the cable spectrum. Common sources of signal leakage are cracked cables and poor-quality connections. When either of these situations occurs, strong off-air television and FM radio broadcast signals interfere. If television stations are carried on cable at the same frequency as broadcast and the headend channel-processing equipment is phase locked to the off-air signal, the effect of this interference is ghosting. The ghost appears before (to the left of) the cable signal, since propagation time through the air is less than through cable. If the signals are not phase locked together, lines and beats appear in the picture.

Often there is interference from off-air signals due to consumer electronics hardware design. If the internal shielding of the equipment is inadequate, the internal circuits directly pick up the signal. This phenomenon is called direct pick-up (DPU) interference. It was the original motivation for cable converters: the early set-top boxes tuned no more channels than the TV set, but they protected against DPU by incorporating superior shielding and connecting to the TV set through a channel not occupied off-air. DPU can be misleading. When the subscriber switches to an antenna, he might receive better pictures than from the cable connection and conclude that his TV receiver is operating correctly and the cable system is faulty. The only convincing argument is a demonstration with a receiver that does not suffer from DPU. Viacom Cable has measured off-air field intensities of 8 V/m; the German specification for immunity to DPU is 4 V/m. VCR tuners are generally inferior to TV tuners because the VCR market is even more price competitive. The Electronic Industries Association (EIA) and NCTA Joint Engineering Committee are studying this issue.

The second most likely source of radio frequency interference is business-band radios, paging systems, and amateur radio operators. These signals leak into the cable system and interfere with cable Channels 18 through 22 and Channels 23 and 24 (145 to 175 and 220 to 225 MHz). It is easy to determine that such interference is caused by an RF transmitter, because of its duration and, sometimes, because the broadcast audio can be heard. Since the signals are broadcast intermittently, it is almost impossible to determine the exact location(s) of ingress.

Cable systems that operate above 450 MHz may find severe forms of interference. They are subject to high-power UHF television stations, mobile radio units and repeaters, and amateur radio operators' signals in the top 10 to 12 channels. The extreme variation of shortwave signals in time and intensity makes locating the point(s) of infiltration of these signals difficult.

The upstream 5- to 30-MHz spectrum is a problem for operators of two-way cable systems. There are many sources of interference, and these signals accumulate upstream. In a two-way plant, a single leak in the system can make that portion of the upstream spectrum unusable throughout the entire plant, whereas in the downstream spectrum a leak may affect only a single customer's reception.

Signal Security Systems

Means of securing services against unauthorized viewing of individual channels range from simple filtering schemes to remote-controlled converter/descramblers. Filtering is the most commonly used method of signal security and is the least expensive.

Trapping Systems

There are two types of filtering, or trapping, schemes: positive trapping and negative trapping.

In the positive trapping method, an interfering jamming carrier (or carriers) is inserted into the video channel at the headend. If the customer subscribes to the secured service, a positive trap is installed at the customer's house to remove the interfering carrier. The positive trapping scheme is the least expensive means of securing a channel where fewer than half the customers subscribe. A drawback of positive-trap technology is its defeatability by customers who obtain their own filters through theft, illegal purchase, or construction. Another drawback is the loss of resolution in the secured channel's video caused by the filter's effect in the center of the video passband. Pre-emphasis is added at the headend to correct for the filter's response, but loss of picture content in the 2- to 3-MHz region of the baseband video signal remains.

Negative trapping removes signals from the cable drop to the customer's home. The trap is needed for customers who do not subscribe, so this is the least expensive means of securing a channel when over half the customers subscribe. The negative trap is ideal in one respect: there is no picture degradation of the secured channel, because the trap is not in the line for customers who take the service. A drawback occurs for customers who do not subscribe to the secured service but want to view adjacent channels: they may find a slightly degraded picture on the adjacent channels, because the filter traps out more than just the secured channel. This problem becomes more significant at higher frequencies, owing to the higher Q (efficiency) required of the filter circuitry. From a security standpoint, the customer must remove the negative trap from the line to receive an unauthorized service, so maintaining signal security in negative-trapped systems depends on ensuring that the traps remain in the drop lines.
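The break-even logic behind the 50 percent crossover above can be made concrete. A minimal sketch, assuming equal per-trap hardware costs (the cost figure is invented for illustration):

    # Positive traps go only to subscribing homes; negative traps go only to
    # non-subscribing homes. With equal unit costs, the cheaper choice flips
    # at a 50 percent take rate, as the text describes.
    def trap_hardware_cost(homes_passed, take_rate, cost_per_trap=8.0):
        subscribers = homes_passed * take_rate
        return {"positive": subscribers * cost_per_trap,
                "negative": (homes_passed - subscribers) * cost_per_trap}

    for take_rate in (0.25, 0.50, 0.75):
        print(take_rate, trap_hardware_cost(10000, take_rate))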

Scrambling and Addressability

There are two classes of scrambling technologies: (1) RF synchronization-suppression systems and (2) baseband scrambling systems. The concept of addressability should be considered separately from the scrambling method. Non-addressable converter/descramblers are programmed to decode the authorized channels via internal jumpers or a semiconductor memory chip called a programmable read-only memory (PROM); the authorization of these boxes must be physically changed by the cable operator. Addressable converters are controlled by a computer-generated data signal originating at the headend, carried either in the vertical blanking interval (VBI) or on an RF carrier. This signal remotely configures the viewing capabilities of the converter. Impulse-pay-per-view (IPPV) technology is supported by addressable converter/descrambler systems.

RF Synchronization Suppression Systems. Converter-based scrambling systems that perform encoding and decoding of a secured channel in an RF format comprise the most commonly used scrambling technology. The more common method is known as gated, or pulsed, synchronization suppression. With this method, the horizontal synchronizing pulses (and, with some manufacturers, the vertical synchronization pulses) are suppressed by 6 and/or 10 dB. This is done in the channel's video modulator at the IF frequency. The descrambling process in the converter/descrambler occurs at its channel output frequency and is accomplished by restoring the RF carrier level to its original point during the horizontal synchronization period. Variations of this method pseudorandomly change the depth of suppression from 6 to 10 dB, or perform suppression only randomly. A phase-modulated RF scrambling technique based on precision matching of SAW filters constructed on the same substrate has been introduced; this low-cost system is extending operators' interest in RF scrambling techniques for use within addressable plants.

Baseband Scrambling Systems. Baseband converter/descrambler technology provides a more secure scrambling technology for delivering video services. The encoding format is a combination of random or pseudorandom synchronization suppression and/or video inversion. Because the encoding and decoding are performed at baseband, these converter/descramblers are complex and expensive, and maintenance of the system's video quality is an ongoing issue. The encoders are modified video-processing amplifiers that provide controls to uniquely adjust different facets of the video signal. The potential for setup error in the encoder, in addition to the tight tolerances that must be maintained in the decoders, has presented challenges to the cable operator.
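A toy model makes the gated sync-suppression idea concrete: the envelope is attenuated by a fixed amount only during the horizontal sync interval, and the descrambler applies the inverse gain. This is a schematic sketch, assuming standard NTSC line timing and the 6-dB depth mentioned above; it is not any vendor's actual algorithm:

    # Toy model of gated sync suppression on scan lines (NTSC: ~63.5-us line,
    # ~4.7-us horizontal sync pulse). The scrambler attenuates the envelope
    # 6 dB during sync; the descrambler applies the inverse gain, keyed by
    # timing it recovers separately.
    LINE_US, SYNC_US = 63.5, 4.7
    SUPPRESSION_DB = 6.0
    GAIN = 10.0 ** (-SUPPRESSION_DB / 20.0)

    def scramble(samples, t_step_us):
        out = []
        for i, s in enumerate(samples):
            t = (i * t_step_us) % LINE_US
            out.append(s * GAIN if t < SYNC_US else s)   # suppress sync interval
        return out

    def descramble(samples, t_step_us):
        return [s / GAIN if (i * t_step_us) % LINE_US < SYNC_US else s
                for i, s in enumerate(samples)]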

Off-Premises Systems

The off-premises approach is compatible with recent industry trends to become more consumer-electronics friendly and to remove security-sensitive electronics from the customer's house. This method controls the signals at the pole rather than at a decoder in the home, which increases consumer-electronics compatibility, since authorized signals are present in descrambled form on the customer's drop. Customers with cable-compatible equipment can connect directly to the cable drop without converter/descramblers, allowing the use of all VCR and TV features.

Interdiction technology involves a scheme similar to that of positive-trap technology. In this format, the pay television channels to be secured are transported through the cable plant in the clear (not scrambled). The security is generated on the pole, at the subscriber module, by adding interference carriers to the unauthorized channels. An electronic switch is incorporated, allowing all signals to be turned off. While this method of signal access control has generated a great deal of interest, practical problems and its incompatibility with advanced services and digital signals have precluded any widespread application.

Digital Encryption

One of digital technology's main advantages is the applicability of computer-processing power and features. As a result, the old challenges of scrambling and signal security can be approached with new techniques. However, the advanced digital technology that makes better security possible also provides new means of attack for the would-be pirate. The microprocessors at the heart of personal computers are available in ever-faster speeds; more memory, more hard-disk speed and capacity, and high-speed cable modems make these more potent means of attack. Additionally, the high-speed cable modem makes it possible to combine the processing power of multiple personal computers. Consequently, it must be possible to respond to a security attack by replacing the breached security element. This can be done in two ways. If the cable operator owns the set-top boxes, the operator can replace them when the security breach is judged excessive; this occurs when the breach is relatively accessible to a significant fraction of the subscriber base, not just a handful of experimenters. However, if the set-top boxes were sold to subscribers, this option is not readily available, and the security element itself must be replaceable in the event of a significant breach. A plug-in card or module called the point-of-deployment (POD) module is used. The POD remains the property of the cable operator and can be electronically disabled; the subscriber is then given a new POD device based on a system not yet breached.

Signal Splitting at the Customer’s Premises The common devices at the cable drop to the customer’s home are grounding safety devices called ground blocks and a two-way signal splitter that sometimes has a built-in grounding terminal. Splitters or ground blocks should have little effect on picture quality provided there is adequate signal to handle the splitter’s loss. The signal strength may be below specifications because of an excessively long drop or more activated cable outlets in the house than the cable design anticipated. To compensate, some systems use an AC-powered drop amplifier. These amplifiers can create problems—a reduced CNR or increased distortions. Consumer electronics switching devices, designed to allow convenient control and routing of signals between customer products and cable systems’ converters/descramblers, have built-in amplification stages to overcome the losses associated with the internal splitters. These amplifiers add distortions or noise. When cable systems were designed, consumer electronics switching devices were not taken into account because they did not exist. Signal splitting in VCRs can be a problem. To compensate for recording SNR deficiencies, inexpensive VCRs sometimes split the signal unequally between the by-pass route and the VCR tuner. This gives the VCR a stronger signal than the TV receiver to improve VCR performance. In addition, this strategy reduces the quality of the signal routed to the TV. When it is compared with VCR playback, the disparity in performance is reduced. Consumer Electronics Compatibility Cable systems are becoming more consumer friendly by trapping popular secured services. More cable-ready television sets are appearing in the customer’s home. With VCRs that are also cable ready, some of the interface issues are becoming simpler. Some television sets have built-in A/B selector switches and video input ports that allow signal source switchings to be performed through the television’s remote control. Up to three A/B switches, two splitters, and two converter/descramblers have been wired into configuration allowing the consumer to watch and record the programming desired. Even with this configuration, consumers lose the ability to preprogram the VCR to record more than one channel. Hopefully, the days of complex interfaces will soon be over. A positive step in this direction is the EIA Decoder Interface Interim Standard, IS-105. This connection system is oriented toward supporting external, low cost, descramblers. If a descrambler is connected to the TV set or VCR via this technique, the user regains use of the advanced features precluded by converters.

PROJECTIONS FOR DIGITAL CABLE

The advent of digital television has created new complexities for the interface between cable and consumer electronics, as well as exciting new opportunities. What can or may transpire is governed not only by technical developments but by rules and regulations imposed by Congress and the FCC, as well as by consumer preferences and the marketing objectives of the cable industry, broadcasters, and consumer electronics manufacturers. Many of these factors are related and are discussed at length on the accompanying CD-ROM, in the section on Projections for Digital Cable. Also discussed there are Digital Must Carry, Cable Modems and Telephony, and Interactive Television.

BIBLIOGRAPHY

Adams, M., "Open Cable Architecture, The Path to Cable Compatibility and Retail Availability in Digital Television," ISBN 1-57870-135-X.
Bartlett, E. R., "Cable Television Technology & Operations," McGraw-Hill, 1990, ISBN 0-07-003957-7.
Brinkley, J., "Defining Vision, The Battle for the Future of Television, How Cunning, Conceit, and Creative Genius Collided in the Race to Invent Digital, High-Definition TV," ISBN 0-15-100087-5.
Ciciora, W., J. Farmer, and D. Large, "Modern Cable Television Technology: Video, Voice, and Data Communications," ISBN 1-55860-416-2.
Farnsworth, E. G., "Distant Vision, Philo T. Farnsworth, Inventor of Television," ISBN 0-9623276-0-3.
Grant, W. O., "Cable Television," 3rd ed., GWG Associates, 1994, Library of Congress Cataloging-in-Publication Data, application number TXu 661-678.
Hodge, W. W., "Interactive Television," McGraw-Hill, 1995, ISBN 0-07-029151-9.
Laubach, M., D. Farber, and S. Dukes, "Delivering Internet Connections over Cable," ISBN 0-471-38950-1.
National Cable Television Association, Technical Papers, NCTA Science & Technology Dept., Washington, DC 20036, 1996, ISBN 0-940272-24-5.
NCTA Recommended Practices, NCTA Science & Technology Dept., Washington, DC 20036, 1st ed., 1983, ISBN 0-940272-09-1.
Rzeszewski, T. S. (ed.), "Color Television," IEEE Press, 1983, ISBN 0-87942-168-1.
Rzeszewski, T. S. (ed.), "Digital Video, Concepts and Applications Across Industries," IEEE Press, ISBN 0-7803-1099-3.
Rzeszewski, T. S. (ed.), "Television Technology Today," IEEE Press, 1985, ISBN 0-87942-187-8.
Schwartz, M., "Information, Transmission, Modulation, and Noise," 2nd ed., McGraw-Hill, 1970, ISBN 0-07-055761-6.
Society of Motion Picture and Television Engineers, SMPTE J., January 1985 to present, ISSN 0036-1682.
Southwick, T., "Distant Signals, How Cable TV Changed the World of Telecommunications," ISBN 0-87288-702-2.
Standage, T., "The Victorian Internet, The Remarkable Story of the Telegraph and the Nineteenth Century's On-Line Pioneers," ISBN 0-8027-1342-4.
Taylor, A. S., "History between Their Ears, Recollections of Pioneer CATV Engineers," ISBN 1-89182-101-6.
Thomas, J. L., "Cable Television Proof-of-Performance," Prentice Hall, 1995, ISBN 0-13-306382-8.
Weinstein, S. B., "Getting the Picture," IEEE Press, 1986, ISBN 0-87942-197-5.

ON THE CD-ROM

Ciciora, W., Projections for Digital Cable, including Digital Must Carry, Cable Modems and Telephony, and Interactive Television.
Useful cable television URLs for further study.

SECTION 23

NAVIGATION AND DETECTION SYSTEMS

Electromagnetic wave theory is fundamental to all navigation and detection systems. Global positioning system (GPS) applications have added a new dimension to this area and, in many sophisticated applications, work together with radar systems. The basic operation of the radar system has not changed in years; however, computers and digital technology have significantly enhanced the way data are processed. Underwater sound systems are essentially radar systems at much lower frequencies: we can use underwater systems, as we do the higher-frequency electromagnetic systems, to communicate, navigate, detect, track, classify, and so on. We are simply working with substantially longer wavelengths. C.A.

In This Section:

CHAPTER 23.1 RADAR PRINCIPLES 23.3
THE RADAR SYSTEM AND ITS ENVIRONMENT 23.3
RADAR-RANGE EQUATIONS 23.6
DETECTION 23.12
TARGETS AND CLUTTER 23.19
RESOLUTION 23.23
RADAR PROPAGATION 23.28
SEARCH-RADAR TECHNIQUES 23.37
TRACKING AND MEASUREMENT 23.47
REFERENCES 23.59

CHAPTER 23.2 RADAR TECHNOLOGY 23.61
RADAR TRANSMITTERS 23.61
RADAR ANTENNAS 23.64
MICROWAVE COMPONENTS 23.71
RADAR RECEIVERS 23.73
EXCITERS AND SYNCHRONIZERS 23.77
SIGNAL PROCESSING 23.80
DISPLAYS 23.82
REFERENCES 23.83

CHAPTER 23.3 ELECTRONIC NAVIGATION SYSTEMS 23.84
TYPES OF SYSTEMS 23.84
GEOMETRIC DILUTION OF PRECISION 23.91
INTERNATIONALLY STANDARDIZED SYSTEMS 23.92
SUMMARY OF ELECTRONIC NAVIGATION SYSTEM CHARACTERISTICS 23.101
REFERENCES 23.102

CHAPTER 23.4 UNDERWATER SOUND SYSTEMS 23.104
PRINCIPLES, FUNCTIONS, AND APPLICATIONS 23.104
PROPAGATION 23.104
NOISE 23.108
TRANSDUCERS AND ARRAYS 23.111
PRODUCTION OF SOUND FIELDS 23.114
TRANSDUCER MATERIALS 23.118
TRANSDUCER ARRAYS 23.121
PASSIVE SONAR SYSTEMS 23.123
ACTIVE SONAR SYSTEMS 23.128
BIBLIOGRAPHY 23.137

CHAPTER 23.1

RADAR PRINCIPLES

David K. Barton

THE RADAR SYSTEM AND ITS ENVIRONMENT

Radar is an acronym for radio detection and ranging, and radar is defined7 as a device for transmitting electromagnetic signals and receiving echoes from objects of interest (targets) within its volume of coverage. The signals may be in the frequency range from the high-frequency radio band (3 to 30 MHz) to light (10^15 Hz), although most systems operate between 300 MHz and 40 GHz. The target is a passive reflecting object in primary radar, while in secondary radar a beacon (transponder) is used to reinforce and identify the echo signal. A radar system in its environment is shown in Fig. 23.1.1.

Transmission is typically through a directional antenna, whose beam can either scan a coverage volume (search radar) or follow the echo from a previously detected target (tracking radar). Transmission is over a line-of-sight path through the atmosphere or space, except for over-the-horizon radar using ionospheric bounce paths. The target can be a man-made object (e.g., an aircraft or missile) or a natural surface or atmospheric feature (land formation, ocean wave structure, or precipitation cloud). The target echo is often received with the same antenna used for transmission, from which it is routed to the receiver by a duplexer. Environmental factors that influence radar performance include the characteristics of the propagation path (attenuation, refraction, and reflection) and reception of noise and interference from objects both within and beyond the radar beam (thermal radiation, accidental or intentional radio interference, and echoes from unwanted target objects, called clutter). The radar information may be used locally by an operator or may be transmitted as analog or digital data to a remote site or network.

Radar Frequencies

Radar can be operated at any frequency at which electromagnetic energy can be generated and radiated, but the primary bands used are identified in Table 23.1.1. The band letter designations provide the radar engineer with a convenient way of specifying radar frequency with sufficient accuracy to indicate the environmental and antenna problems, but without disclosing potentially secret tuning limitations. The International Telecommunications Union (ITU) defines no specific service for radar, and the assignments listed are derived from those radio services which use radar: radiolocation, radionavigation, meteorological aids, earth exploration satellite, and space research. Where the ITU defines UHF as extending from 300 to 3000 MHz, radar engineers use L and S bands to refer to frequencies above 1000 MHz. The applications of the several frequency bands are discussed in the following section.

Radar Functions and Applications

The basic functions performed by the radar are implicit in the definition: target detection, and measurement of target position (not only range but angles and often radial velocity). Other measured data may include target amplitude, size and shape, and rate of rotation. The major fields of application are listed in Table 23.1.2.

FIGURE 23.1.1 Radar system in its environment.

Radar Subsystems. A block diagram of a simple, coherent pulsed radar is shown in Fig. 23.1.2. The dashed lines divide the equipment into seven subsystems, according to the technology used in implementation (see the following chapter on radar technology). In this radar, operation is controlled by a synchronizer, which initiates the pulsing of the transmitter and the gates and triggers that control the receiver, signal processor, and display. The synchronizer may also serve as the frequency reference used in the exciter to generate the r.f. drive signal, at carrier frequency f0, to the transmitter and the local oscillator signals used for downconversion of the signal to intermediate frequency fc in the superheterodyne receiver. The system is termed coherent because the exciter maintains the transmission (carrier) frequency at a consistent phase with respect to the local oscillator signals. The transmitted pulse is generated by amplification of the exciter r.f. drive, during the pulse supplied by the modulator and triggered by the synchronizer. This pulse is passed to the antenna through the duplexer, an r.f. switch that connects the transmitter to the antenna, with low loss, during the pulse transmission. During transmission, the duplexer protects the sensitive receiver from damage by establishing a short circuit across the receiver input

TABLE 23.1.1 Radar Frequency Bands (IEEE Standard 521-1984)

Band         Nominal            Specific frequency range for radar based on ITU
designation  frequency range    assignments for Region 2 (N. and S. America)
HF           3-30 MHz           No specific assignment for radar
VHF          30-300 MHz         138-144 and 216-225 MHz
UHF          300-1000 MHz       420-450 and 890-942 MHz
L            1-2 GHz            1.215-1.4 GHz
S            2-4 GHz            2.3-2.5 and 2.7-3.7 GHz
C            4-8 GHz            5.25-5.925 GHz
X            8-12 GHz           8.5-10.68 GHz
Ku           12-18 GHz          13.4-14 and 15.7-17.7 GHz
K            18-27 GHz          24.05-24.25 GHz
Ka           27-40 GHz          33.4-36 GHz
V            40-75 GHz          59-64 GHz
W            75-110 GHz         76-81 and 92-100 GHz
mm           110-300 GHz        126-142, 144-149, 231-235, and 238-248 GHz


TABLE 23.1.2 Radar Applications

Type of application: specific applications (usual bands)

Air surveillance:
  Long-range early warning (UHF, L)
  Ground-controlled intercept, air-route surveillance (L)
  Acquisition for weapon system, height finding and three-dimensional radar, air collision avoidance (S)
Space and missile surveillance:
  Ballistic missile early warning, missile acquisition, satellite surveillance (VHF, UHF)
Surface search and battlefield surveillance:
  Sea search, navigation and collision avoidance, ground mapping (X, Ku, Ka)
  Mortar and artillery location (C, X)
  Airport taxiway control, intrusion detection, land vehicle collision avoidance (Ku, Ka)
Weather radar:
  Observation and prediction, weather avoidance (aircraft), cloud-visibility indicators (S, C)
Tracking and guidance:
  Antiaircraft fire control, surface fire control, missile guidance, range instrumentation, satellite instrumentation, precision approach and landing (C, X, Ku)
  Smart weapons, projectiles, bombs (Ka, V, W)
Astronomy and geodesy:
  Planetary observation, earth survey, ionospheric sounding (VHF, UHF, L)

FIGURE 23.1.2 Block diagram of simple, coherent pulsed radar.


FIGURE 23.1.3 Block diagram of phased-array radar (Ref. 2).

terminal. The antenna shown is a parabolic reflector, steered mechanically in one or two axes by servomechanisms. The received signal passes from the antenna to the receiver through the duplexer, which has disconnected the transmitter and established a low-loss path to the receiver terminal. A low-noise r.f. amplifier is used at the receiver input, followed by a mixer for conversion to intermediate frequency (i.f.). Following amplification of several tens of decibels, the signal passes to the signal processor, which is designed to optimize the ratio of signal to noise and clutter. Outputs to the display will consist, ideally, of target echoes, appearing at locations on the display corresponding to the target range and angles. In a tracking radar, the signal outputs are fed back to control the antenna steering and the position of the range gate in the receiver. A more advanced radar, using a computer-controlled phased-array antenna, is shown in Fig. 23.1.3. An eighth subsystem has been added, consisting of the control computer and a track processor, both implemented digitally. Beam steering is also controlled digitally by the beam steering processor. The simple synchronizer has been replaced by a digitally controlled waveform generator. Digital control is also applied to the receiver, signal processor and display. The phased-array radar may be programmed to perform a variety of functions in rapid sequence, including search, target tracking, and tracking and guidance of interceptor missiles. Such a radar is known as a multifunction array radar (MFAR), favored in modern U.S. weapon systems. Phased-array radar may be used solely for search (usually as a three-dimensional, or 3D radar, scanning in elevation as well as in azimuth), or solely for tracking, in which case multiple-target tracking is possible through rapid sequencing among several targets.

RADAR-RANGE EQUATIONS

A radar range equation is a relationship by which the detection range expected on a given target by a given radar may be calculated. Different equations may be derived, depending on whether the radar and its mode of operation can be specified in detail or only in a more general way, and on the complexity of the environment to which the calculation applies.

Basic Radar Range Equation

The basic radar range equation applies to a radar for which the known parameters include observation time (the time during which the beam lies on the target), and which operates in a benign environment (thermal noise only). The peak signal power S received by the radar antenna over a free-space path is calculated from the transmitted peak power P_t and a series of factors that describe, basically, the geometry of the radar beam relative to the target:

S = \frac{P_t G_t A_r \sigma}{(4\pi)^2 R^4}    (1)

where G_t = transmit antenna gain
      A_r = effective receive antenna aperture area
      \sigma = target cross section
      R = target range

Since the receive aperture and gain are related by

A_r = \frac{G_r \lambda^2}{4\pi}    (2)

where \lambda is the carrier wavelength, the signal power can also be given as

S = \frac{P_t G_t G_r \lambda^2 \sigma}{(4\pi)^3 R^4}    (3)

Thermal noise in a radar receiver is a combination of receiver-generated and environmental noise, extending over the entire radar frequency band with a power density N_0 given by

N_0 = k T_s    (4)

where k = Boltzmann's constant = 1.38 \times 10^{-23} W/(Hz \cdot K) and T_s is the system noise temperature referred to the antenna terminal. The system noise temperature8 is calculated from the receiver noise factor F_n, line losses L_r, and antenna temperature T_a:

T_s = T_a + T_r + L_r T_e    (5)

T_a = \frac{0.88 T'_a - 254}{L_a} + 290    (6)

T_r = T_{tr}(L_r - 1)    (7)

T_e = T_0(F_n - 1)    (8)

where T_r = temperature contribution of loss L_r
      T'_a = sky temperature from Fig. 23.1.4
      L_a = antenna ohmic loss
      T_{tr} = physical temperature of the receiving line
      T_e = temperature contribution of the receiver
      T_0 = standard temperature (290 K) used in measuring noise factor

It can be seen that low noise factor and low losses can lead to T_s < 290 K when the antenna is looking upward into space.
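Equations (5) to (8) transcribe directly into a numeric routine; the input values in this sketch are arbitrary illustrations, not data from the handbook:

    # System noise temperature per Eqs. (5)-(8), referred to the antenna terminal.
    K_BOLTZ = 1.38e-23        # Boltzmann's constant, W/(Hz*K)
    T0 = 290.0                # standard temperature, K

    def system_temperature(t_sky, l_ant, l_rcv_line, t_line_phys, noise_factor):
        ta = (0.88 * t_sky - 254.0) / l_ant + 290.0   # Eq. (6), antenna temperature
        tr = t_line_phys * (l_rcv_line - 1.0)         # Eq. (7), line contribution
        te = T0 * (noise_factor - 1.0)                # Eq. (8), receiver contribution
        return ta + tr + l_rcv_line * te              # Eq. (5)

    # Illustrative values: 100-K sky, 0.2-dB antenna loss, 0.5-dB line loss
    # (both as power ratios), line at 290 K, noise factor 2 (3-dB noise figure).
    ts = system_temperature(100.0, 10**0.02, 10**0.05, 290.0, 2.0)
    n0 = K_BOLTZ * ts                                 # Eq. (4), noise density, W/Hz
    print(ts, n0)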

FIGURE 23.1.4 Sky temperature for an idealized antenna (lossless, no earth-directed sidelobes) located at the earth’s surface, as a function of frequency, for a number of beam elevation angles. Solid curves are for geometric-mean galactic temperature, sun noise 10 times quiet level, sun in unity-gain side lobe, cool-temperature-zone troposphere, 2.7-K cosmic black-body radiation, zero ground noise. Upper dashed curve is for maximum galactic noise (center of galaxy, narrow-beam antenna), sun noise 100 times quiet level, zero elevation angle, other factors the same as the solid curves. Lower dashed curve is for the minimum galactic noise, zero sun noise, 90° elevation angle.8

The noise power at the i.f. output of the receiver will depend on receiver bandwidth and gain, but this noise power is equivalent to an input power at the antenna terminal of

N = N_0 B_n = k T_s B_n    (9)

where B_n is the noise bandwidth of the i.f. filter. For a wideband filter and a simple pulse (B_n >> 1/\tau, where \tau = pulse width) the signal peak is not affected by the filter, and S/N = S/(k T_s B_n). In general, however, the SNR at the receiver i.f. output is calculated from the ratio of received pulse energy S\tau to noise density. Ideal energy ratio for a single pulse:

\frac{E_1}{N_0} = \frac{P_t \tau G_t G_r \lambda^2 \sigma}{(4\pi)^3 R^4 k T_s}    (10)

Intermediate-frequency power ratio for a single pulse:

\frac{S}{N} = \frac{E_1}{N_0 L_m} = \frac{P_t \tau G_t G_r \lambda^2 \sigma}{(4\pi)^3 R^4 k T_s L_m}    (11)

where L_m = i.f. filter matching loss. For a rectangular pulse, this loss is shown in Fig. 23.1.5a as a function of the product B_n\tau for different filter shapes. For a system using linear-fm pulse compression (chirp), the loss will be a function of the weighting applied to achieve the desired time-sidelobe level, as shown in Fig. 23.1.5b. The expression of i.f. output SNR in terms of transmitted pulse energy P_t\tau permits the range equation to be used with any type of pulse modulation, without in-depth knowledge of that modulation and its processing

in the receiving system. Most systems will be characterized by i.f. matching loss L_m between 0.7 and 1.5 dB, and little error will result from assuming L_m = 1 dB even if the actual modulation and processing are completely unknown. When the radar maintains coherence and processes the target echo over a coherent processing interval t_f, the output of the i.f. processor will be

\frac{S}{N} = \frac{E}{N_0 L_m} = \frac{P_{av} t_f G_t G_r \lambda^2 \sigma}{(4\pi)^3 R^4 k T_s L_m}    (12)

FIGURE 23.1.5 Intermediate-frequency filter matching loss: (a) rectangular pulse with different filter shapes; (b) linear-fm pulse compression with weighting for different sidelobe levels.

where P_{av} = average transmitter power = P_t \tau f_r for pulsed radar operating at pulse repetition frequency f_r, and L_m = matching loss of the filter to the entire waveform received over t_f seconds. In this form, the equation can be applied directly to a radar using any type of waveform and processing: pulsed with noncoherent processing, by setting t_f = t_r, the pulse repetition interval; or pulsed with coherent doppler processing, or CW, where t_f = 1/B_f is the integration time of the doppler filter with bandwidth B_f. In the limit, for systems with coherent processing, t_f = t_o, the total observation time during which the target lies within the radar beam, and there will be only one output sample from the doppler filter. In most cases, however, t_f < t_o, and there will be n' = t_o/t_f independent output samples available for subsequent noncoherent integration.
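For concreteness, Eq. (12) can be evaluated numerically. The parameter values in this sketch are arbitrary illustrations; setting the result equal to the required detectability factor and sweeping range reproduces the detection-range calculations developed below:

    import math

    def energy_ratio_db(pav_w, tf_s, gt_db, gr_db, wavelength_m, rcs_m2,
                        range_m, ts_k, lm_db=1.0):
        # E/(N0*Lm) per Eq. (12); gains and losses supplied in decibels.
        k = 1.38e-23
        num = (pav_w * tf_s * 10**(gt_db / 10) * 10**(gr_db / 10)
               * wavelength_m**2 * rcs_m2)
        den = (4 * math.pi)**3 * range_m**4 * k * ts_k * 10**(lm_db / 10)
        return 10.0 * math.log10(num / den)

    # Illustrative L-band example: 5 kW average power, 2-ms coherent interval,
    # 35-dB gains, 0.23-m wavelength, 1-m^2 target, 500-K system temperature.
    print(energy_ratio_db(5e3, 2e-3, 35, 35, 0.23, 1.0, 100e3, 500.0))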

RF Loss Factors

Equations (10) to (12) consider free-space transmission conditions and idealized radar operation (except for a possible matching loss). In practice, a number of other factors must be included in expressing the energy ratio of the received signal.

a. Signal attenuation prior to the receiver: transmission line loss L_t; antenna losses (to the extent they are not included in G_t, G_r, T_s); receiving line and circuit losses at r.f. (to the extent they are not included in T_s); atmospheric attenuation L_a (see Figs. 23.1.20 to 23.1.22); and atmospheric noise (included in T_s through T'_a, Fig. 23.1.4, for clear air, and using Eq. (54) when precipitation is present).

b. Surface reflection and diffraction effects: the pattern-propagation factor F, calculated from Eq. (57) with data from Figs. 23.1.24 and 23.1.25, appears as F^4 in the numerator of the radar equation (F > 1 implies extra gain from the surface interaction).

c. Antenna pattern and scanning: the gains G_t and G_r are defined on the beam axis, and apply directly for tracking radar, with signal energy calculated for an arbitrarily chosen observation time t_o. For a one-coordinate scan at \omega rad/s, the observation time is calculated using t_o = \theta_3/\omega, where \theta_3 is the one-way half-power beamwidth (in radians) in the scan coordinate, and signal energy is calculated using gains G_t and G_r evaluated at the point of the scan nearest the target. For a two-coordinate scan, the observation time is t_o = t_s \theta_a \theta_e / \psi_s, where \theta_a and \theta_e are the azimuth and elevation beamwidths and \psi_s is the solid angle searched in time t_s, and signal energy is calculated using maximum gains. The variation in the actual antenna gains brought to bear on the target, as a function of target position in the beam, is included as a beamshape loss in calculation of the detectability factor, which is the energy ratio required to achieve given detection performance.

d. Signal-processing losses: a number of losses resulting from nonideal signal processing are discussed in Radar Functions and Applications, and are included in the detectability factor.

When the r.f. losses are included, the equations for energy ratio become

\frac{E_1}{N_0} = \frac{P_t \tau G_t G_r \lambda^2 \sigma F^4}{(4\pi)^3 R^4 k T_s L_t L_\alpha}    (13)

\frac{E}{N_0} = \frac{P_{av} t_f G_t G_r \lambda^2 \sigma F^4}{(4\pi)^3 R^4 k T_s L_t L_\alpha}    (14)

Calculation of Detection Range in Noise

The detectability factor for noncoherent integration is denoted by D_x(n, n_e), where n is the number of noncoherently integrated pulses and n_e \le n is the number of independent target samples included in the integration. By setting E_1/N_0 = D_x(n, n_e), we may solve for the maximum range at which the given detection performance is achieved, for a noncoherent system:

R_m^4 = \frac{P_t \tau G_t G_r \lambda^2 \bar{\sigma} F^4}{(4\pi)^3 k T_s D_x(n, n_e) L_t L_\alpha}    (15)

The average target cross section \bar{\sigma} is used in this equation, and any target fluctuation effects are included in the detectability factor. When coherent integration is used, the detectability factor becomes D_x(n', n_e), where n' is the number of independent noise samples available from a doppler filter. By setting E/N_0 = D_x(n', n_e), we may solve for the maximum range at which the given detection performance is achieved, for any system (including the noncoherent system, for which t_f = t_r, P_{av} t_f = P_t \tau, and n' = n):

R_m^4 = \frac{P_{av} t_f G_t G_r \lambda^2 \bar{\sigma} F^4}{(4\pi)^3 k T_s D_x(n', n_e) L_t L_\alpha}    (16)

It should be noted that Eqs. (15) and (16) are transcendental, since the atmospheric loss L_\alpha depends on the range R_m. It may be necessary to use an iterative calculation, in which an initial estimate R_{m1} is made with L_{\alpha 1} = 1, followed by one or two refinements in which L_{\alpha 2} is evaluated at R_{m1} to calculate R_{m2}, and L_{\alpha 3} is evaluated at R_{m2} to calculate the final R_m. Another method is to apply Eq. (13) or (14) repeatedly with varying range until the resulting energy ratio equals the required D_x, or to perform such calculations at fixed range intervals and interpolate to find the range giving the required D_x. If the factor F^4 shows oscillatory behavior with target range, as it may when surface reflections are present, there may be several values of range at which the equation is satisfied, corresponding to the beginnings and endings of detection regions.

The detectability factor D_x used in the range equations will be found from the theoretical value derived in Radar Functions and Applications for the various fluctuating target cases, increased by the filter matching loss L_m (Fig. 23.1.5), the beamshape loss L_p, and the miscellaneous signal-processing loss L_x.

Search-Radar Equation

The potential performance of a search radar, or of a tracking radar during acquisition scan, can be determined from its average power, receiving aperture, and system temperature, without regard to its frequency or waveform. The steps in deriving optimum search performance from Eq. (16) are as follows:

a. Group all r.f. and other losses into a combined search loss given by

L_s = \frac{L_m L_t L_\alpha L_{2p} L_i L_f L_c L_x L_n}{F^4}    (17)


where L_{2p} = two-coordinate beamshape loss
      L_i = integration loss
      L_c = collapsing loss
      L_f = fluctuation loss
      L_x = miscellaneous signal-processing loss
      L_n = beamwidth factor

b. Assume uniform search, without overlap, of an assigned solid angle \psi_s in a time t_s, using a rectangular beam whose solid angle is

\psi_b = \theta_a \theta_e = \frac{4\pi}{G_t}    (18)

[Pages 23.12 to 23.16 of the source, covering the remainder of the search-radar derivation and most of the detection material, are not reproduced in this extraction. The surviving fragment below is the tail of a table of collapsing ratios \rho for several processing cases.]

4. I.f. filter B_n >> 1/\tau, followed by matched video 2B_v = 1/\tau:   \rho = 1 + B_n\tau  (use L_c in place of L_m)
5. Receiver outputs mixed at video, where M = number of receivers:   \rho = M
6. I.f. filter followed by gate of width \tau_g and by video integration:   \rho = B_n\tau(1 + \tau_g/\tau)


FIGURE 23.1.11 Fluctuation loss versus detection probability for Rayleigh fluctuating target (Swerling Case 1).

degrees of freedom, corresponding to the I and Q components of the reflected signal, each of which has a gaussian distribution. The figure gives data for 1 < n < 10, and the loss will be a few tenths of a decibel higher for integration of tens or hundreds of pulses. Having calculated the detectability factor for a steady target, we can now adjust it to find the detectability factor D_1(n) for the Rayleigh fluctuating target:

D_1(n) = D_0(n) L_f(1) = \frac{D_0(1)}{n} L_i(n) L_f(1)    (35)

For example, for P_d = 0.90, P_f = 10^{-6}, n = 100, we can estimate L_f = 8.6 dB, from which the total energy requirement will be 100 \times D_1(100) = +27.5 dB and the single-pulse requirement D_1(100) = +7.5 dB.
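The decibel bookkeeping of Eq. (35) and this worked example can be checked mechanically. In the sketch below, the D_0(1) and L_i(100) values are assumed, read from typical detection and integration-loss curves; only the 8.6-dB fluctuation loss comes from the text:

    import math

    # Eq. (35) in decibel form: D1(n) = D0(1) - 10*log10(n) + Li(n) + Lf(1).
    def d1_db(d0_single_db, n, li_db, lf_db):
        return d0_single_db - 10.0 * math.log10(n) + li_db + lf_db

    # Pd = 0.90, Pf = 1e-6, n = 100 (the text's example); D0(1) and Li assumed.
    print(d1_db(13.2, 100, 5.7, 8.6))   # ~ +7.5 dB single-pulse requirement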

Reduction of Fluctuation Loss with Diversity

As in communications, the application of diversity can reduce the required fade margin (fluctuation loss). When the number of diversity samples (independent samples of target amplitude) available for integration is n_e, the fluctuation loss becomes10

L_f(n_e) = [L_f(1)]^{1/n_e}    (36)

or, in decibels,

[L_f(n_e)]_{dB} = \frac{1}{n_e} [L_f(1)]_{dB}    (37)


Since diversity samples must be integrated noncoherently, the integration loss will increase as n_e increases. The optimum number of diversity samples, when coherent integration would otherwise be available, depends on the required P_d; for P_d = 0.90 there is a broad optimum between four and eight samples. Diversity is available in time, frequency, polarization, or aspect angle.

Time Diversity. The number of independent target samples available in an observation time t_o is

n_e = 1 + t_o/t_c \le n    (38)

In the limit, when t_c < t_r, n_e = n, and the detectability factor is that given by Swerling for the Case 2 (rapidly fluctuating) target. Coherent integration cannot be carried out on the Case 2 target, but such rapid fluctuation is not normally seen for stable targets.

Frequency Diversity. The number of independent target samples available when the n pulses are distributed uniformly over a frequency interval \Delta f is

n_e = 1 + \Delta f / f_c    (39)

where f_c = c/2L_r and L_r is the radial dimension of the target (along the radar line of sight). For example, a target of length L_r = 15 m will provide an independent target sample for each 10 MHz of frequency shift. A dual-diversity radar system, with two fixed frequencies separated by any amount greater than f_c, will provide n_e = 2, regardless of the number of pulses transmitted. There are two degrees of freedom for each channel, and hence the signal available for integration at the combined output of the two channels will have four degrees of freedom, corresponding to the statistics of the Swerling Case 3 model. Use of pulse-to-pulse frequency agility with adequate total bandwidth can provide n_e = n, giving Swerling Case 2 statistics.

Polarization Diversity. Use of two orthogonal polarizations can also provide n_e = 2. A system operating with pulse-to-pulse frequency agility on two orthogonal polarizations can provide Swerling Case 4 statistics, n_e = 2n. It is not necessary to have separate L_f curves for each Swerling case, since Eq. (37) permits all data to be calculated with sufficient accuracy from Fig. 23.1.11.

Aspect Angle (Space) Diversity. Radar systems using more than one site can observe a target over spatially diverse paths, obtaining independent samples on each path. If the echo signals are then combined for integration at a central point, diversity gain can be achieved. This mode of operation leads to complexity and high cost, because of the need to duplicate equipment and supporting facilities and the need to compensate for differing signal delays and doppler shifts before combining the signals. Hence, it is not usually a practical option for the system designer.

Detectability Factor with Diversity. The final value of the theoretical detectability factor with diversity, D_e(n, n_e), is found using L_f(n_e) in place of L_f(1) in Eq. (35). The value D_x(n, n_e) used in the radar equation is this theoretical value increased by filter mismatch, beamshape, and miscellaneous signal-processing losses.
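A short sketch ties Eqs. (37) and (39) together; the target length is the text's 15-m example, and the 40-MHz frequency spread is an assumed illustration:

    # Frequency-diversity samples and the resulting fluctuation-loss reduction.
    def freq_diversity_samples(total_bandwidth_hz, target_length_m):
        fc = 3e8 / (2.0 * target_length_m)        # correlation frequency, per Eq. (39)
        return 1.0 + total_bandwidth_hz / fc

    def fluctuation_loss_db(lf1_db, ne):
        return lf1_db / ne                        # Eq. (37)

    ne = freq_diversity_samples(40e6, 15.0)       # 15-m target, 40-MHz spread
    print(ne)                                     # 1 + 40/10 = 5 samples
    print(fluctuation_loss_db(8.6, ne))           # 8.6 dB -> ~1.7 dB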

Cumulative Probability of Detection

It is not always possible to combine successive signals from a given target through integration. For example, if the signals are obtained from more than one scan across the target position, or if the repetition interval is long and the target velocity high, the target may have moved in range by an amount greater than the radar resolution, and successive signals will not remain in the same integration cell. The conventional means of using all the signal information in such cases is to accumulate the probabilities of detection resulting from each observation, rather than the energy of the observations. If the probability of detection on each observation (or scan) is Pd, the probability of obtaining at least one detection on k observations is the cumulative probability of detection

$$P_c = 1 - (1 - P_d)^k \qquad (40)$$


The cumulative probability of detection builds up quite rapidly over several scans, even when the single-scan probability is below 50 percent. Large fluctuation loss can be avoided by scanning the search region several times with low Pd. However, such a process is less efficient than integration of energy from all scans with adequate diversity.
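As a rough illustration of Eq. (40), the following sketch (the single-scan probability of 0.4 is an assumed value) shows how quickly the cumulative probability builds over successive scans.

```python
# Minimal sketch of Eq. (40): cumulative probability of at least one
# detection in k independent scans, each with single-scan probability Pd.

def cumulative_pd(p_d: float, k: int) -> float:
    return 1.0 - (1.0 - p_d) ** k

# Even a modest single-scan probability builds up quickly over several scans:
for k in (1, 2, 4, 8):
    print(k, round(cumulative_pd(0.4, k), 3))   # 0.4, 0.64, 0.87, 0.983
```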

TARGETS AND CLUTTER

Target Cross Section

The primary parameter used to describe a radar target is its radar cross section, defined as 4π times the ratio of the reflected power per unit solid angle in the direction of the source to the power per unit area of the incident wave. A large sphere (whose radius a >> λ) captures power from an area πa², scattering it uniformly in solid angle, and hence has a radar cross section σ = πa² equal to its projected area. The variation of sphere cross section with wavelength (Fig. 23.1.12) illustrates the division into three regions of the spectrum:

FIGURE 23.1.12 Normalized cross section of a sphere (Ref. 1).

a. The optical region, a >> λ, where cross section is essentially constant with wavelength.
b. The resonant region, a ≈ λ/2π, where the cross section oscillates about its optical value due to interference of the direct reflection with a creeping wave propagated around the circumference of the object.
c. The Rayleigh region, a << λ, where cross section varies as λ⁻⁴.

For a flat plate viewed normal to its surface, σ(0) = 4πA²/λ², where A = wL is the plate area, assumed >> λ². The cross section of a cylinder or a rectangular plate varies with aspect angle, with a pattern in the plane that includes dimension L given by

$$\sigma(\theta) = \sigma(0)\left[\frac{\sin\!\left[(2\pi L/\lambda)\sin\theta\right]}{(2\pi L/\lambda)\sin\theta}\,\cos\theta\right]^2 \qquad (41)$$
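Equation (41) can be evaluated directly; in the sketch below the plate size and frequency are assumed example values, and the normal-incidence value σ(0) = 4πA²/λ² is computed from the formula above.

```python
import numpy as np

# Sketch of Eq. (41): aspect-angle pattern of a flat rectangular plate
# (dimension L in the plane of the pattern); all numbers are illustrative.

def plate_rcs(theta, sigma0, L, wavelength):
    """sigma(theta) = sigma(0) * [sin(u)/u * cos(theta)]^2, Eq. (41)."""
    u = (2 * np.pi * L / wavelength) * np.sin(theta)
    lobe = np.sinc(u / np.pi)          # sin(u)/u, safe at u = 0
    return sigma0 * (lobe * np.cos(theta)) ** 2

# A 1 m x 1 m plate at 10 GHz (wavelength 0.03 m), viewed at normal incidence:
sigma0 = 4 * np.pi * 1.0**2 / 0.03**2      # sigma(0) = 4*pi*A^2/lambda^2
print(plate_rcs(np.radians(0.0), sigma0, L=1.0, wavelength=0.03))  # ~1.4e4 m^2
```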


For most other shapes, it is necessary to calculate cross section from complex equations or computer codes, or to measure it with a calibrated radar.

Amplitude Distributions

The cross section of a complex object is best described statistically by its probability density function, examples of which are shown in Fig. 23.1.13. These functions represent the Swerling fluctuation models:

$$\text{Cases 1 and 2:}\quad dP = \frac{1}{\bar{\sigma}}\exp\left(\frac{-\sigma}{\bar{\sigma}}\right)d\sigma \qquad \sigma \ge 0 \qquad (42)$$

$$\text{Cases 3 and 4:}\quad dP = \frac{4\sigma}{\bar{\sigma}^2}\exp\left(\frac{-2\sigma}{\bar{\sigma}}\right)d\sigma \qquad \sigma \ge 0 \qquad (43)$$

where σ̄ is the arithmetic mean of the distribution. The median value σ50 is used as the center of each plot.
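A minimal numerical sketch of the densities of Eqs. (42) and (43) follows; the unit mean cross section is an assumed value, and the trapezoidal integration is only a sanity check that each density integrates to one.

```python
import numpy as np

# Sketch of the Swerling densities, Eqs. (42) and (43); sigma_bar is the
# arithmetic-mean cross section.

def swerling_1_2_pdf(sigma, sigma_bar):
    """Cases 1 and 2: exponential density of cross section."""
    return np.exp(-sigma / sigma_bar) / sigma_bar

def swerling_3_4_pdf(sigma, sigma_bar):
    """Cases 3 and 4: chi-squared density with four degrees of freedom."""
    return 4.0 * sigma / sigma_bar**2 * np.exp(-2.0 * sigma / sigma_bar)

# Both densities integrate to unity, as a quick numerical check shows:
s = np.linspace(0.0, 50.0, 200001)
print(np.trapz(swerling_1_2_pdf(s, 1.0), s))   # ~1.0
print(np.trapz(swerling_3_4_pdf(s, 1.0), s))   # ~1.0
```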

Spectra and Correlation Intervals. Swerling Cases 1 and 3 describe slowly fluctuating targets, for which the correlation time tc is such that all pulses integrated in the time to within a single scan are correlated, but successive

FIGURE 23.1.13 Amplitude distributions of cross section. Upper plots: Cases 1 and 2; lower plots: Cases 3 and 4.


scans separated by ts give uncorrelated values: to ≤ tc ≤ ts. For rain at wavelengths down to 0.02 m, the volume reflectivity varies approximately as

$$\eta_v \approx 10^{-13}\,r^2/\lambda^4 \qquad (49),\ (50)$$

These values are plotted in Fig. 23.1.16, with rain values extended into the millimeter-wave bands, based on measurements.12

RESOLUTION

Definition of Resolution

A target is said to be resolved if its signal is separated by the radar from those of other targets in at least one of the coordinates used to describe it. For example, a tracking radar may describe a target by two angles, time delay, and frequency (or doppler shift). A second target signal from the same angle and at the same frequency, but with a different time delay, may be resolved if the separation is greater than the delay resolution (processed pulse width) of the radar. Resolution, then, is determined by the relative response of the radar to targets separated from the target to which the radar is matched. The antenna and receiver are configured to match a target signal at a particular angle, delay, and frequency. The radar will respond with reduced gain to targets at other angles, delays, and frequencies. This response function can be expressed as a surface in a five-dimensional coordinate system, the fifth coordinate representing amplitude of response. Because five-dimensional surfaces are impossible to plot,


and because angle response is almost always independent of delay-frequency response, these pairs of coordinates are usually separated, requiring only two three-coordinate plots.

Antenna Resolution

In angle, the response function χ(θ, φ) is simply the antenna voltage pattern. It is found by measuring the system response as a function of angles from the beam center. It has a main lobe in the direction to which the

FIGURE 23.1.17 Efficiency and beamwidth constant for antennas with tapered illumination: (a) efficiency versus sidelobe level; (b) beamwidth constant versus sidelobe level.


antenna is scanned, and sidelobes extending over all visible space. Angular resolution, i.e., the main-lobe width in the θ and φ coordinates, is generally taken to be the distance between the −3-dB points of the pattern. The width, amplitude, and location of the lobes are determined by the aperture illumination (weighting) functions in the two coordinates across the aperture. Because the matched antenna is uniformly illuminated, its response has relatively high sidelobes, which are objectionable in most radar applications. To avoid these, the antenna illumination may be mismatched (tapered), with a resulting loss in gain and broadening of the main lobe. Figure 23.1.17 shows the effects of tapering for sidelobe control on the gain and beamwidth of a rectangular antenna. As the sidelobe level is reduced, aperture efficiency η (the ratio of gain to that of the uniformly illuminated antenna having the same dimensions) falls below unity. At the same time, the beamwidth (which would have been θ3 = 0.886λ/w for uniform illumination) increases.
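The taper tradeoff of Fig. 23.1.17 can be checked numerically; in the sketch below, the cosine illumination is an assumed example (not a taper prescribed by the text), and the uniform case should reproduce η ≈ 1 and θ3 ≈ 0.886λ/w.

```python
import numpy as np

# Rough numerical check of the taper tradeoff: lower sidelobes cost
# aperture efficiency and beamwidth. The cosine taper is an assumed example.

def pattern_metrics(weights):
    """Aperture efficiency and -3-dB beamwidth (in lambda/w units)."""
    n = len(weights)
    eta = abs(weights.sum())**2 / (n * (np.abs(weights)**2).sum())
    f = np.abs(np.fft.fft(weights, 65536))**2   # zero-padded power pattern
    f /= f.max()
    half = np.count_nonzero(f >= 0.5)           # main-lobe samples above -3 dB
    bw = half * n / 65536.0                     # width in units of lambda/w
    return eta, bw

x = np.linspace(-0.5, 0.5, 512)
for name, w in [("uniform", np.ones(512)), ("cosine", np.cos(np.pi * x))]:
    eta, bw = pattern_metrics(w)
    print(name, round(eta, 3), round(bw, 3))    # uniform: ~1.0, ~0.886
                                                # cosine:  ~0.81, ~1.19
```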

Waveform Resolution

Time delay and frequency can also be viewed as if they were two angular coordinates, i.e., as a two-dimensional plane above which the response can be plotted to describe the filter response to a given signal as a function of the time delay td and the frequency shift fd of the signal relative to some reference point to which the radar is matched. Points on the surface are found by recording the receiver output voltage while varying these two target coordinates. The response function χ(td, fd) is given, for any filter and signal, by

$$\chi(t_d, f_d) = \int_{-\infty}^{\infty} H(f)\,A(f - f_d)\exp(j2\pi f t_d)\,df \qquad (51)$$

or

$$\chi(t_d, f_d) = \int_{-\infty}^{\infty} h(t_d - t)\,a(t)\exp(j2\pi f_d t)\,dt \qquad (52)$$

where the functions A(f) and a(t), H(f) and h(t), are Fourier transform pairs describing the signal and filter, respectively. The transform relationships, Eqs. (51) and (52), governing the time-frequency response function are similar to those which relate the far-field antenna patterns to aperture illumination functions. Hence, data derived for waveforms can be applied to antennas, and vice versa, by interchanging analogous quantities between the two cases. There is a significant difference between waveform and antenna response functions and constraints, however, because the two waveform functions (in time delay and frequency) are dependent on each other through the Fourier transform. The two antenna patterns (in θ and φ coordinates) are essentially independent of each other, depending on aperture illuminations in the two aperture coordinates x and y. Further differences arise from the two-way pattern and gain functions applicable to the antenna case.

Ambiguity Function of a Single Pulse

When the filter response is matched to the waveform, H(f) = A*(f), h(t) = a*(td − t), the magnitude squared of the response function, |χ(td, fd)|², is called the ambiguity function of the waveform. Figure 23.1.18 shows the square root of this function (the voltage response of the matched filter) for three pulses with different modulations. For a simple pulse with constant carrier frequency and phase (Fig. 23.1.18a), there is a single main lobe whose time response is a triangle extending over ±τ in time, with zero amplitude outside that region. In frequency, the function has the (sin²x)/x² shape, with sidelobes extending over infinite bandwidth. Introduction of phase modulation during the transmission of the pulse broadens the frequency spread of the response and narrows the response along the time axis. This is the principle of pulse compression, of which the most common form is linear fm (chirp), shown in Fig. 23.1.18b. With the linear-fm function, very low sidelobes can be obtained in the regions on both sides of the main, diagonal response ridge. Along this ridge, however, the response falls very slowly from its central value, and targets separated by almost one transmitted pulse width will be detected if they are offset in frequency by the correct amount. Pseudorandom phase coding can generate a single narrow spike in the center of the ambiguity surface (Fig. 23.1.18c), at the expense of large-amplitude sidelobes elsewhere on the ambiguity surface.
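The matched-filter response of Eq. (52) can be evaluated by direct summation; the sketch below, for an assumed unit-width rectangular pulse, reproduces the triangular delay cut and the (sin x)/x frequency cut of Fig. 23.1.18a.

```python
import numpy as np

# Numerical sketch of Eq. (52) for a constant-carrier rectangular pulse
# of width tau (cf. Fig. 23.1.18a); pulse width and sample rate are assumed.

tau, fs = 1.0, 256.0
t = np.arange(0, tau, 1 / fs)
a = np.ones_like(t)                       # unit-amplitude rectangular pulse
a /= np.sqrt(np.sum(np.abs(a)**2))        # normalize signal energy to 1

def ambiguity(a, t_d_samples, f_d):
    """|chi(t_d, f_d)| for the matched filter, by direct summation."""
    shifted = np.roll(a, t_d_samples)
    if t_d_samples > 0:
        shifted[:t_d_samples] = 0.0
    elif t_d_samples < 0:
        shifted[t_d_samples:] = 0.0
    return abs(np.sum(a.conj() * shifted * np.exp(2j * np.pi * f_d * t)))

print(ambiguity(a, 0, 0.0))                # 1.0: peak at the origin
print(ambiguity(a, 128, 0.0))              # 0.5: triangular response in delay
print(round(ambiguity(a, 0, 1 / tau), 3))  # ~0: first null of (sin x)/x in f_d
```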


FIGURE 23.1.18 Ambiguity functions for single pulses: (a) response for a constant carrier pulse with rectangular envelope; (b) chirp response for Hamming weighting on transmit and receive; (c) response for the 13-element Barker code (Ref. 4).


The plots of Fig. 23.1.18 illustrate an important property of the ambiguity function: the total volume under the surface is equal to the signal energy (or to unity, when normalized) for all waveforms. Compression of the response along the time axis must be accompanied by an increase in response elsewhere in the time-frequency plane, either along a well-defined ambiguity ridge (for linear fm) or in numerous random lobes (for random-phase codes). For mismatched filters, the response function (cross-ambiguity function) has similar properties, although the central peak will be reduced in amplitude. Mismatched filters are often used to reduce sidelobe levels in linear-fm pulse compression. The matching efficiency and pulse-broadening factors have the same relationship to sidelobe levels as were illustrated in Fig. 23.1.17 for aperture efficiency and beam broadening for tapered aperture illuminations.

Ambiguity Functions of Pulse Trains

The principle of constant volume under the ambiguity function is also applicable to pulse trains. A train of coherent pulses of width τ, with pulse repetition interval tr = 1/fr >> τ, merely generates a repeating set of surfaces similar to Fig. 23.1.18a at intervals tr in time. The added volume equals the energy of the additional pulses, and if the energy is normalized to unity (by division among the several pulses), the amplitude of each response peak is reduced proportionately. The location of peaks and associated sidelobes is shown in Fig. 23.1.19a. The result is to form a series of responses spaced in range by the unambiguous range interval Ru. When the signal is coherent over an observation time to = ntr, the time response of the matched filter stretches the ambiguity function to ±to along the time axis. At the same time, there is formed a series of ambiguous responses spaced in radial velocity by the blind speed vb, as shown in Fig. 23.1.19b and c. Within the central lobes, this response is concentrated in spectral lines separated by fr and approximately 1/to = fr/n wide in frequency (Fig. 23.1.19b). Near the ends of the ambiguity function, where the matched-filter impulse response overlaps n′ < n pulses of the received train, the lines broaden to width fr/n′, and at the end of the ambiguity function, where n′ = 1, no line structure remains. If the repetition rate is increased, holding constant the pulse width and number of pulses in the train, to is decreased and the ambiguity volume is redistributed into a smaller number of broader lines. A decrease in pulse width, such that to is restored to its original value and n is increased, leads to a broader overall ambiguity function, with the original number of lines in frequency but with narrower and more numerous response bands along the time axis (Fig. 23.1.19c). The coherent pulse train is thus characterized by a pattern of ambiguities in range and velocity, where the unambiguous range is Ru = c/2fr and the unambiguous velocity (or blind-speed interval) is vb = λfr/2. The product of these two quantities (the unambiguous area in the range-velocity plane) depends only on wavelength: Ruvb = λc/4.
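A minimal sketch of these ambiguity spacings, with an assumed PRF and wavelength, confirms that the product Ruvb depends only on wavelength.

```python
# Sketch of the pulse-train ambiguity spacings: Ru = c/(2*fr), vb = lambda*fr/2,
# and the wavelength-only product Ru*vb = lambda*c/4. PRF and wavelength
# below are assumed example values.

C = 3e8  # m/s

def unambiguous_range(fr):
    return C / (2 * fr)

def blind_speed(wavelength, fr):
    return wavelength * fr / 2

fr, wavelength = 1000.0, 0.1                # 1-kHz PRF, 0.1-m (S-band) wavelength
Ru, vb = unambiguous_range(fr), blind_speed(wavelength, fr)
print(Ru, vb)                               # 150 km, 50 m/s
print(Ru * vb, wavelength * C / 4)          # both 7.5e6: product is PRF-free
```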
Resolution of Targets in Clutter

The choice of the waveform is often dictated by the need to resolve small targets (aircraft, buoys, or projectiles) from surrounding clutter. The clutter power at the filter output is found by integrating the response function over the clutter region, with appropriate factors for variable clutter density, antenna response, and the inverse fourth-power range dependence included in the integrand. The signal-to-clutter ratio S/C for a target on the peak of the radar response is then given by

$$\frac{S}{C} = \frac{\sigma\,G_t(0)G_r(0)\,|\chi(0,0)|^2}{\displaystyle\int_v \eta_v(\theta,\phi,f_d,t_d)\,G_t(\theta,\phi)G_r(\theta,\phi)\,|\chi(f_d,t_d)|^2\,(R/R_c)^4\,dv} \qquad (53)$$

where σ = target cross section
ηv = clutter reflectivity
R/Rc = target-to-clutter range ratio
v = four-dimensional volume containing clutter

The usual equations for S/C ratio are simplifications of Eq. (53) for various special cases (e.g., surface clutter, homogeneous clutter filling the beam, and so forth). Clearly, the S/C ratio is improved by choosing a waveform


and filter such that χ(fd, td) is minimized in clutter regions while maintaining a high value of χ(0, 0) for all potential target positions. In a search radar, a two-dimensional bank of filters and range gates would be constructed to cover the intervals in doppler and delay occupied by targets, and the clutter power for each of these filters would then be evaluated using Eq. (53).

FIGURE 23.1.19 Ambiguity functions of uniform pulse trains: (a) noncoherent pulse train; (b) coherent pulse train; (c) coherent pulse train with reduced pulsewidth τ, increased fr.
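Equation (53) lends itself to coarse numerical evaluation by replacing the integral with a sum over clutter cells; in the sketch below, the clutter-cell list, reflectivity, and triangular response function are made-up placeholders, not values from the text.

```python
import numpy as np

# Coarse numerical sketch of Eq. (53): matched response at the origin divided
# by a response-weighted sum over clutter cells. All inputs are placeholders.

def signal_to_clutter(sigma_t, cells, chi, G=lambda th, ph: 1.0):
    """cells: iterable of (theta, phi, f_d, t_d, eta_v, R_ratio, dv)."""
    num = sigma_t * G(0, 0) ** 2 * abs(chi(0.0, 0.0)) ** 2
    den = sum(eta * G(th, ph) ** 2 * abs(chi(fd, td)) ** 2 * rr ** 4 * dv
              for th, ph, fd, td, eta, rr, dv in cells)
    return num / den

# Triangular delay response of a matched rectangular pulse, no doppler cut:
chi = lambda fd, td: max(0.0, 1.0 - abs(td))
cells = [(0.0, 0.0, 0.0, td, 1e-7, 1.0, 1.0) for td in np.linspace(-1, 1, 21)]
print(signal_to_clutter(1.0, cells, chi))   # ~1.5e6 for these placeholder cells
```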

RADAR PROPAGATION

In the radar equation (15), the echo signal power is seen to be proportional to the pattern-propagation factor F⁴, which is the product of Ft² for the transmit path and Fr² for the receive path, and inversely proportional to La, the two-way path attenuation. The attenuation depends on the length of the path in which molecules of the


atmosphere and of clouds or precipitation are encountered, and on the wavelength of the radar wave. The pattern-propagation factor depends on the interaction of the wave with the underlying surface, and on the antenna gains in the directions of the target and of the wave reflected from the surface. Apart from the issue of echo signal strength, propagation will affect the accuracy of target position measurements. Errors will be introduced by refraction of the wave as it passes from the target to the radar, and by the multiple signal components that may reach the radar from the underlying surface as a result of reflection and diffraction.

Atmospheric Attenuation. The frequency bands used for radar were selected to minimize the effects of the atmosphere while achieving adequate bandwidth, antenna gain, and angular resolution. Attenuation is introduced by air and water vapor, by rain and snow, by clouds and fog, and (at some frequencies) by electrons in the ionosphere.

Clear-Air Attenuation. Attenuation in the clear atmosphere is seldom a serious problem at frequencies below 16 GHz (Fig. 23.1.20). The initial slope of the curves shows the sea-level attenuation coefficient ka in decibels per kilometer, and this coefficient is reduced as the path reaches higher altitude. Above 16 GHz, atmospheric attenuation is a major factor in system design (Fig. 23.1.21). The absorption lines of water vapor (at 22 GHz) and oxygen (near 60 GHz) are broad enough to restrict radar operations above 16 GHz in the lower troposphere to relatively short range, even under clear-sky conditions. Attenuation versus frequency for two-way paths through the entire atmosphere is shown in Fig. 23.1.22.

Precipitation, Cloud, and Fog Effects. Above 2 GHz, rain causes significant attenuation, with ka roughly proportional to rainfall rate r and to the 2.5 power of frequency. The classical data on rain attenuation13,14 were based on drop-size distributions given by Ryde and Ryde,15 which gave generally accurate results, except for a 40 percent underestimate of the loss between 8 and 16 GHz at low rainfall rates. Later data were derived by Wexler and Atlas16 from a modified Marshall-Palmer distribution.17 At high rates (100 mm/h) the loss coefficient ka/r is doubled between 8 and 16 GHz, giving better agreement with measurements and matching the measurements of Medhurst18 above 16 GHz. The Wexler and Atlas data provide the most satisfactory estimates for general use, and these were used in preparing Fig. 23.1.23.

Very small water droplets, suspended as clouds or fog, can also cause serious attenuation, especially since the affected portion of the transmission path can be tens or hundreds of kilometers. Attenuation is greatest at 0°C (Fig. 23.1.24). Transmissions below 2 GHz are affected more seriously by heavy clouds and fog than by rain of downpour intensity. Water films that form on antenna components and radomes are also sources of loss. However, such surfaces can be specially treated to prevent the formation of continuous films.19,20

Apparent Sky Temperature. Associated with the atmospheric loss is a temperature term, which must be added to the radar receiver input temperature. Figure 23.1.4 showed this loss temperature T′a as a function of frequency for clear-air conditions. When precipitation is present, there will be additional loss along the atmospheric path, generating additional noise temperature at the antenna. For this situation, the sky temperature is calculated from the total atmospheric loss La, using

$$T'_a = 290\left(1 - \frac{1}{L_a}\right) + \frac{T_g}{L_a} \qquad (54)$$

where Tg is the galactic background noise, a significant component for f ≤ 1 GHz.

Ionospheric Attenuation. In the lowest radar bands, the daytime ionosphere may introduce noticeable attenuation.21 However, above 100 MHz this attenuation seldom exceeds 1 dB.
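A minimal sketch of Eq. (54) follows; the 1-dB path loss and 10-K galactic temperature are assumed example values.

```python
# Sketch of Eq. (54): apparent sky temperature from the total atmospheric
# loss La (as a power ratio) and the galactic background Tg.

def sky_temperature(L_a: float, T_g: float = 0.0) -> float:
    return 290.0 * (1.0 - 1.0 / L_a) + T_g / L_a

# 1 dB of path loss with an assumed 10-K galactic background:
L_a = 10 ** (1.0 / 10.0)
print(round(sky_temperature(L_a, T_g=10.0), 1))   # ~67.6 K
```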


FIGURE 23.1.20 Atmospheric attenuation (0.2 to 15 GHz) versus range and elevation angle E. (Data from Ref. 8)

Surface Reflections

The radar target scatters power in all directions, and some of this power arrives at the radar antenna via reflection from the surface. If the radar receiving antenna pattern has significant response in the direction from which these reflections arrive, the receiver will be presented with a composite (multipath) signal, in which the reflected components interfere with the direct signal, alternately adding to and subtracting from the direct signal magnitude. This will affect the detectability of the signal, and will introduce multipath error into the measurement of target position. On the transmit path, the same phenomenon will modulate the illumination of the target, affecting the magnitude of the echo but not its arrival angle.


FIGURE 23.1.21 Atmospheric attenuation (20 to 100 GHz) versus range and elevation angle E. (Data from Ref. 8)

Specular Reflection. The simple model for surface reflection applies to a flat, smooth surface (Fig. 23.1.25). Ignoring curvature of the earth, the specular reflection from this surface arrives at the radar from a negative elevation angle θr, approximately equal to the positive elevation angle θt of the direct ray from the target:

$$\text{Target elevation} = \theta_t = \sin^{-1}\left(\frac{h_t - h_r}{R}\right) \approx \frac{h_t - h_r}{R} \qquad (55)$$

$$\text{Depression angle of reflection} = \theta_r = \psi = \sin^{-1}\left(\frac{h_t + h_r}{R}\right) \approx \frac{h_t + h_r}{R} \qquad (56)$$

The depression angle from the radar is equal to the grazing angle ψ at the surface. The extra pathlength for the reflected ray will be

$$\delta_0 = R(\cos\theta_r - \cos\theta_t) \approx \frac{2h_t h_r}{R} = 2h_r\theta_t \qquad (57)$$

For a radar beam pointed at elevation angle θb, the voltage gain for the direct ray will be f(θt − θb), and for the reflected ray it will be f(θt + θb). The reflected ray will arrive at the antenna with a relative amplitude equal


FIGURE 23.1.22 Absorption loss for two-way transit of the entire troposphere, at various elevation angles (Ref. 8).

to the surface reflection coefficient ρ, resulting in an apparent antenna gain for the composite signal given by

$$f'_r(\theta_t) = f_r(\theta_t - \theta_b) + \rho f_r(-\theta_r - \theta_b)\exp\left[-j\left(\frac{2\pi\delta_0}{\lambda} + \phi\right)\right] = F_r(\theta_t)\,f(0) \qquad (58)$$

where φ is the phase angle of the reflection coefficient and Fr is the pattern-propagation factor for the receive path. A similar expression involving the transmit pattern ft(θ) will give the transmit pattern-propagation factor Ft. A two-way pattern-propagation factor F⁴ = Ft²Fr² is used in the radar equation, and the detection range will be directly proportional to F. As a result of the reflected signal, the coverage pattern of a search radar that has a broad elevation pattern will appear as in Fig. 23.1.25b. Near the horizon, where f(θt − θb) ≈ f(−θr − θb), ρ ≈ 1, and φ ≈ π, the reflection lobes extend the coverage to twice the free-space value, while the nulls give zero signal. The nulls (even i) and lobe maxima (odd i) appear at angles such that

$$\sin\theta_n = i\left(\frac{\lambda}{4h_r}\right) \approx \theta_n, \qquad i = 0, 1, 2, \ldots \qquad (59)$$
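For a broad-beam antenna with unity pattern gain, ρ = 1, and φ = π, Eq. (58) reduces to a simple two-ray sum; the sketch below (antenna height and wavelength are assumed values) reproduces the doubled field on a lobe peak and the zeros at the nulls.

```python
import numpy as np

# Sketch of Eq. (58) with f = 1 in all directions, rho = 1, phi = pi,
# showing the lobing of the propagation factor F. Geometry is assumed.

def propagation_factor(theta_t, h_r, wavelength, rho=1.0, phi=np.pi):
    delta0 = 2 * h_r * np.sin(theta_t)            # path difference, per Eq. (57)
    return abs(1 + rho * np.exp(-1j * (2 * np.pi * delta0 / wavelength + phi)))

h_r, lam = 10.0, 0.1                              # assumed height and wavelength
for theta_deg in (0.0, 0.1433, 0.2866):           # horizon, first peak, first null
    print(theta_deg, round(propagation_factor(np.radians(theta_deg), h_r, lam), 3))
    # -> 0.0 (horizon null), ~2.0 (lobe peak), ~0.0 (null)
```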


FIGURE 23.1.23 Attenuation in rain.


FIGURE 23.1.24 Attenuation in clouds or fog (values of ka are halved at T = 18°C).

For elevated radars or long-range targets, curvature of the earth cannot be ignored, and equations that correct for curvature must be used (Ref. 3, p. 553). However, flat-earth approximations are adequate for radars up to 100 m above the surface viewing targets at ranges of a few tens of kilometers. The Fresnel reflection coefficient ρ0 as a function of grazing angle ψ, for various surface materials, is plotted in Fig. 23.1.26. For horizontal polarization, the reflection coefficient remains near −1 for grazing angles below about 10°. For vertical polarization, the coefficient goes from −1 at low angles, through a minimum near zero amplitude, to a positive value at high angles. The angle at which the real part of the coefficient goes through zero is known as the Brewster angle, and at this angle most of the power is absorbed by the surface.

Reflection from a Rough Surface. Actual land and water surfaces are irregular, reducing the magnitude of the specular reflection to a value ρ0ρs, where the specular scattering factor ρs is a function of the rms surface height deviation σh, the wavelength, and the grazing angle:

$$\rho_s = \exp\left[-2\left(\frac{2\pi\sigma_h\sin\psi}{\lambda}\right)^2\right] \qquad (60)$$

The specular scattering factor is plotted in Fig. 23.1.27 as a function of normalized surface roughness. As the specular scattering coefficient decreases, diffuse reflections appear, containing the power which has been lost to the specular component. These diffuse components arrive from a broad elevation region surrounding the point of specular reflection, and much of their power may fall outside the beamwidth of the antenna. They have random fluctuations in amplitude and phase, and produce little effect on detection range. However, they are important sources of tracking error.
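Equation (60) is easily exercised numerically; the roughness and grazing angle in the sketch below are assumed values, chosen to show the rapid loss of the specular component at shorter wavelengths.

```python
import numpy as np

# Sketch of Eq. (60): specular scattering factor versus surface roughness.
# The 0.25-m rms roughness and 1-degree grazing angle are assumed values.

def specular_factor(sigma_h, grazing_angle, wavelength):
    x = 2 * np.pi * sigma_h * np.sin(grazing_angle) / wavelength
    return np.exp(-2 * x**2)

for lam in (0.3, 0.1, 0.03):          # L-, S-, and X-band wavelengths, m
    print(lam, round(specular_factor(0.25, np.radians(1.0), lam), 3))
    # -> ~0.98, ~0.86, ~0.19: the specular term fades at shorter wavelengths
```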


FIGURE 23.1.25 Effect of surface reflections: (a) geometry of specular reflection; (b) lobing pattern produced by reflections (Ref. 2).

FIGURE 23.1.26 Fresnel reflection coefficient, as a function of grazing angle, for different surfaces: (a) horizontal polarization; (b) vertical polarization.

Diffraction at the Surface. Rays that pass close to the curved earth surface, or to an obstacle rising from the surface, are affected by diffraction. Smooth-sphere diffraction modifies the pattern-propagation factor at elevation angles less than θn/2, where the pathlength difference between direct and reflected rays is less than λ/4. The calculation of F for such paths is described in Ref. 3, pp. 297–302, and in Sec. 12 of this handbook. Diffraction over obstacles such as trees, ridges, or fences will follow knife-edge diffraction theory. Figure 23.1.28 shows curves for smooth-sphere and knife-edge diffraction.

Tropospheric Refraction

The refractive index of the troposphere, for all radar frequencies, can be expressed in terms of a deviation from unity in parts per million, or refractivity:

$$N = (n - 1) \times 10^6 = \frac{77.6}{T}\left(P + \frac{4810\,p}{T}\right) \qquad (61)$$


FIGURE 23.1.27 Specular scattering factor versus normalized surface roughness.

where T = temperature in kelvins
P = total pressure in millibars
p = partial pressure of the water-vapor component
n = refractive index

Dry air at sea level can have a value as low as N = 270, but normal values lie between 300 and 320. The Central Radio Propagation Laboratory (CRPL) of the National Bureau of Standards (now the National Oceanic and Atmospheric Administration) established a family of exponential approximations to the variation in refractivity with altitude for the normal atmosphere, in which average U.S. conditions are represented by

$$N(h) = 313.0\,\exp(-0.14386\,h) \qquad (62)$$


FIGURE 23.1.28 Diffraction for paths near surface: (a) smooth-sphere diffraction; (b) diffraction over obstacle.

where h is the altitude in km above sea level. The velocity of wave propagation is 1/n times the vacuum velocity, introducing an extra time delay in radar ranging and causing radar rays to bend downward relative to the angle at which they are transmitted. Figure 23.1.29 shows, on an exaggerated scale, the geometry of tropospheric refraction. For surveillance radars, the effects of refraction are adequately expressed by plotting ray paths as straight lines above a curved earth whose radius ka is 4/3 times the true earth's radius: a = 6.5 × 10⁶ m, ka = 8.5 × 10⁶ m.
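The following sketch evaluates Eqs. (61) and (62) and the 4/3-earth effective radius; the surface conditions are assumed example values, and the true earth radius is taken as 6.37 × 10⁶ m.

```python
import math

# Sketch of Eqs. (61) and (62): surface refractivity and the CRPL exponential
# profile, plus the 4/3-earth-radius plotting approximation.

def refractivity(T, P, p):
    """N-units from temperature (K), pressure and vapor pressure (mbar)."""
    return 77.6 / T * (P + 4810.0 * p / T)

def crpl_profile(h_km):
    """Average U.S. profile, Eq. (62)."""
    return 313.0 * math.exp(-0.14386 * h_km)

print(round(refractivity(288.0, 1013.0, 10.0), 1))  # ~318: within 300-320 norm
print(round(crpl_profile(1.0), 1))                  # ~271 N-units at 1 km
print(round(4.0 / 3.0 * 6.37e6, -4))                # ka ~ 8.5e6 m effective radius
```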

FIGURE 23.1.29 Geometry of tropospheric refraction.

FIGURE 23.1.30 Low-angle ducting effect (Ref. 2).


FIGURE 23.1.31 Ionospheric errors in range and elevation angle, for different target altitudes: (a) range error versus frequency; (b) elevation angle error versus frequency.

A special problem can arise when the ray is transmitted at an elevation below 0.5° into an atmosphere whose refractivity has a rapid drop, several times greater than the standard 45 N-units per km. Under those conditions, the ray can be trapped in a surface duct (Fig. 23.1.30) or in an elevated duct bounded by layers of rapidly decreasing N. The result is a great increase in radar detection range for targets within the duct (and for clutter) at the expense of coverage just above the ducting layer. Although there is some leakage of energy through the top of the duct, increasing at lower frequencies, the duct will usually trap all radar frequencies sufficiently to create a gap just above the horizon, through which targets can pass undetected.

Ionospheric Refraction

The refractivity of the ionosphere at radar frequencies is given by

$$N_i = (n - 1) \times 10^6 = -\frac{40\,N_e}{f^2} \times 10^6 = -\frac{1}{2}\left(\frac{f_c}{f}\right)^2 \times 10^6 \qquad (63)$$

where Ne is the electron density per m³ and fc is the critical frequency in hertz (fc ≈ 9√Ne). Since fc seldom exceeds 14 MHz, the refractivity at 100 MHz is less than 10⁴ N-units, and above 1 GHz it does not exceed 100 N-units. Figure 23.1.31 plots the errors in range and elevation angle for normal ionospheric conditions for targets at different altitudes. Ionospheric errors are not generally significant for radars operating in the gigahertz region, but can dominate the analysis of tracking error in the 200- and 400-MHz bands.
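A minimal sketch of Eq. (63), for an assumed daytime electron density, shows how rapidly the ionospheric refractivity falls with frequency.

```python
# Sketch of Eq. (63): ionospheric refractivity versus frequency. The electron
# density is an assumed daytime value, not a figure from the text.

def iono_refractivity(Ne, f):
    """N-units; equivalently -0.5*(fc/f)^2 * 1e6 with fc ~ 9*sqrt(Ne)."""
    return -40.0 * Ne / f**2 * 1e6

Ne = 1e12                                  # electrons per m^3 (assumed)
for f in (100e6, 400e6, 1e9):
    print(f, round(iono_refractivity(Ne, f), 1))
    # -> -4000, -250, -40 N-units: negligible in the gigahertz region
```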

SEARCH-RADAR TECHNIQUES

A search radar is one that is used primarily for the detection of targets in a particular volume of interest. A surveillance radar is used to maintain cognizance of selected traffic within a selected area, such as an airport terminal area or air route.7 The difference in terminology implies that the surveillance radar provides for the maintenance of track files on the selected traffic, while the search radar output may be simply a warning or a one-time designation of each target for acquisition by a tracker. There is no significant difference in radar characteristics between the two uses, but the requirements for detection probability and search frame time may lead to different operating modes, scan patterns, and maximum ranges. The following discussions of search radar are equally applicable to surveillance systems.


Search Beams and Scan Patterns

Search radars are described as two-dimensional (2D) when resolution and measurement take place in range and azimuth coordinates, or three-dimensional (3D) when the elevation coordinate is also measured. Beam shapes and scan patterns for the two types are shown in Fig. 23.1.32. The first four parts of this figure represent 2D radars that scan only in azimuth:

(a) Horizon scan, using a narrow beam, fixed in elevation at the horizon
(b) Fan beam, using a broad beam whose lower edge is at the horizon

FIGURE 23.1.32 Basic types of search beams and scans.


(c) Cosecant-squared beam, similar to the fan beam but with extended coverage in elevation above the main beam, at reduced range (d ) Inverted csc2 beam, for coverage to the surface from an airborne radar The last three parts of the figure represent 3D radars: (e) Elevation-scanning pencil beam, used at fixed azimuth for height-finding radar and combined with azimuth scan for volume 3D coverage; ( f ) Stacked beams in elevation, scanning in azimuth for volume 3D coverage; (g) Raster scan of a sector in both azimuth and elevation, often used in the search mode of a multifunction array radar. Whatever the type of scan, the pulsed search radar will resolve and measure in range, for each beam position, and may also use doppler processing to resolve in radial velocity. The CW search radar may or may not resolve in range, as well as in radial velocity, depending on the modulation applied to the carrier.

Search-Radar Detection Process

Early search radars depended entirely on cathode-ray-tube displays with human operators for target detection, and this process remains one of the most efficient and adaptable. The curves for detection probability (Fig. 23.1.8) may be applied to the human operator if a suitable miscellaneous signal-processing loss component for the operator, Lxo ≈ 4 dB, is included in the calculation of Dx for use in the radar equation. When the operator is fatigued or distracted, even larger losses will be encountered. Electronic and digital integrators and automatic target-detector circuits are included in many modern search radars to ensure more consistent performance than can be provided by human operators. The performance of such systems is not necessarily better than that of an alert operator, especially in the presence of clutter and interference or jamming. In order to hold the number of false alarms below the level that would overload the subsequent data processing and tracking, it is essential that the automatic detector system include constant-false-alarm-rate (CFAR) circuits to adapt the threshold to actual interference levels at the receiver-processor output. Typical CFAR techniques are shown in Fig. 23.1.33. Circuits (a) and (b) provide averaging of surrounding resolution cells in the frequency domain; (c) averages in the time domain, over range cells surrounding the detection cell; (d) averages in the angle domain, over sectors on each side of the antenna main lobe. Use of CFAR detection processing leads to a loss, which must be included as a component of miscellaneous signal-processing loss Lx. This loss may be estimated from Fig. 23.1.34. The CFAR ratio is defined as the ratio of the negative exponent x of false-alarm probability (e.g., x = 6 for Pf = 10⁻⁶) to the number of independent interference samples, me, averaged in setting the threshold.

Moving-Target Indication

A moving-target indicator (MTI) is a device that limits the display of radar information primarily to moving targets. The sensitivity to target motion is usually provided through their doppler shifts, although area MTI systems have been built that cancel targets on the basis of overlap of their signal envelopes in both range and angle. In the usual pulse-amplifier coherent MTI system (Fig. 23.1.35), two cw oscillators in the radar are used to produce a phase and frequency reference for both transmitting and receiving, so that echoes from fixed targets have a constant phase at the detector over the train of pulses received during a scan. These echoes will be canceled, leaving at the output only those signals whose phase varies from pulse to pulse as a result of target motion. Coherent MTI can also be implemented with a pulsed-oscillator transmitter (Fig. 23.1.36), in which the coherent oscillator at i.f. is locked in phase to a downconverted sample of each transmitted pulse. Although the transmitted and received signals are noncoherent, the phase-detected output is coherent, and clutter components can be canceled. Both systems attenuate targets in a band centered on zero radial velocity, the depth and width of the rejection notch depending on the design of the canceler and the stability of the received signals.


FIGURE 23.1.33 CFAR techniques: (a) guard-band system; (b) Dicke fix system; (c) range-averaged AGC; (d ) sidelobe blanker.

Two variations on the coherent MTI are available for rejection of clutter with nonzero radial velocity. In the clutter-locked MTI, the average doppler shift of a given volume of clutter is measured and used to control an offset frequency oscillator in the receiver, shifting the received clutter components into the rejection notch. Short- or long-term averages may be used to obtain rapid adaptation to varying clutter velocity (as in weather clutter) or better rejection of selected parts of a complex clutter background. The alternative is noncoherent MTI, in which the clutter surrounding a target provides the phase reference with which the target signal is mixed to produce a baseband signal having target doppler shift. Although simpler to implement,


FIGURE 23.1.34 Universal curve for CFAR loss (Ref. 23).

FIGURE 23.1.35 Coherent MTI system.

FIGURE 23.1.36 Coherent-on-receive MTI system.



noncoherent MTI does not cancel as completely and may lose target signals when the clutter is too small to provide a reference. The MTI canceler is designed to pass as much of the target spectrum as possible while rejecting clutter. Since search-radar MTI must cover many range cells without loss of resolution, canceling filters are implemented with delay lines, with multiple range gates feeding bandpass filters, or with range-sampled digital filters that perform these functions. The response of several typical cancelers is shown in Fig. 23.1.37. A wide variety of response shapes is available through the use of feedback and multiple, staggered repetition rates.24 In particular, through proper use of stagger (Fig. 23.1.37c), it is possible to maintain detection of most targets with nonzero radial velocities, even those that would fall in one of the blind speeds vbj (ambiguous rejection notches) of an MTI with a single repetition rate:

$$v_{bj} = j\,\frac{\lambda f_r}{2} \qquad j = 0, \pm 1, \pm 2, \ldots \qquad (64)$$
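A minimal sketch of Eq. (64) shows how the notch positions shift when the PRF is changed, which is the basis of stagger; the wavelength and PRFs are assumed values.

```python
# Sketch of Eq. (64): blind speeds of a single-PRF MTI. Changing the PRF
# moves the notches, so staggered PRFs leave few velocities blind at both.

def blind_speeds(wavelength, fr, jmax=4):
    return [j * wavelength * fr / 2 for j in range(1, jmax + 1)]

lam = 0.1                                   # assumed S-band wavelength, m
print(blind_speeds(lam, 1000.0))            # [50.0, 100.0, 150.0, 200.0] m/s
print(blind_speeds(lam, 1250.0))            # [62.5, 125.0, 187.5, 250.0] m/s
```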

The MTI radar system must transmit, in each beam position, at least two pulses (for a single-delay canceler), and when feedback is used the pulses in each beam must extend for the duration of significant impulse response of the filter.

Performance of MTI

The basic measure of MTI performance is the MTI improvement factor I, defined by

$$I = \left.\frac{(S/N)_{\text{out}}}{(S/N)_{\text{in}}}\right|_{\text{averaged over all } v_t}$$

This is equal to the clutter attenuation when the targets are distributed uniformly over one or more blind-speed intervals. The basic relationships between I and the parameters of the radar and clutter can be expressed in terms of the ratio of rms clutter spread to repetition frequency or blind speed:

$$z = \frac{2\pi\sigma_f}{f_r} = \frac{2\pi\sigma_v}{\lambda f_r/2} = \frac{2\pi\sigma_v}{v_{bl}} \qquad (65)$$

where σf = standard deviation of the clutter power spectrum in hertz, fr = repetition rate, σv = velocity standard deviation in m/s, and vbl = first blind speed from Eq. (64). For a scanning beam viewing motionless clutter, motion of the beam induces a velocity spread such that

$$z = 2\sqrt{\ln 2}\,\frac{\omega}{f_r\theta_3} = \frac{1.665}{n} \qquad (66)$$
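Equation (66) can be checked with assumed scan parameters; the sketch below computes n and z for a hypothetical 1.5° beam scanning at 36°/s with a 1-kHz PRF.

```python
import math

# Sketch of Eq. (66): normalized clutter spread z for a scanning beam, from
# scan rate omega, beamwidth theta3, and PRF fr. All values are assumed.

def z_scanning(omega, fr, theta3):
    """z = 2*sqrt(ln 2) * omega / (fr * theta3) = 1.665/n, Eq. (66)."""
    return 2.0 * math.sqrt(math.log(2.0)) * omega / (fr * theta3)

omega = math.radians(36.0)        # 6-r/min scan = 36 deg/s
theta3 = math.radians(1.5)        # 1.5-degree beamwidth
fr = 1000.0                       # PRF, Hz
n = fr * theta3 / omega           # pulses per beamwidth
print(round(n, 1), round(z_scanning(omega, fr, theta3), 4))   # ~41.7, ~0.04
```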

In general, the rms spread must be calculated using the rms sum of components due to scanning, internal motion of clutter, radar platform motion, and instabilities in the radar. Another term describing MTI radar performance is subclutter visibility (SCV), which is the maximum input clutter-to-signal ratio at which target detection can be obtained:

$$SCV = \frac{I}{D_{xc}} \qquad (67)$$

where Dxc is the clutter detectability factor (output signal-to-clutter ratio required for detection). Depending on the integration gain that follows the MTI, Dxc may be as large as 100 (or 20 dB), or as small as unity (0 dB).


FIGURE 23.1.37 Frequency response of MTI filters: (a) single- and double-delay without feedback; (b) double-delay with feedback; (c) double- and triple-delay with staggered prf and feedback (Ref. 24).



In phased-array radar, the beam may be scanned in discrete steps over the coverage volume without spreading the clutter spectrum. The duration of each beam dwell then limits the allowable impulse-response time of the filter. In many such cases, three-pulse bursts are used to support a double-delay canceler without feedback. Stagger to avoid blind speeds may be applied within each dwell, on a pulse-to-pulse or burst-to-burst basis, or on a dwell-to-dwell basis. An important consideration in step-scanning systems is the possible presence of clutter beyond the unambiguous range of the waveform. Such clutter cannot be canceled unless extra pulses (fill pulses) are included in the dwell to produce identical clutter inputs on those pulse repetition intervals processed in the MTI filter. Pulse-to-pulse stagger may not be used when clutter beyond the unambiguous range is to be canceled, and burst-to-burst stagger waveforms must include fill pulses in each burst, increasing the required dwell time.

When an average is taken over all target velocities, the MTI system does not change the signal-to-noise ratio (SNR). However, for a given average SNR, the probability of detection may be severely degraded by several factors. First, the number of independent noise samples available for integration is reduced because outputs on successive pulses are not independent at the canceler output. The effective number of samples integrated is (2/3)n for a single-delay canceler, and (1/2)n for double-delay, where n is the number of pulses per beamwidth. The number is further reduced by the number of fill pulses and the necessity to gate the canceler output after each step in scanning and each change in PRF, when burst-to-burst stagger is used. If the quadrature canceler channel is not implemented, the number of target samples is reduced by a factor of 2. For a Case 1 target viewed over a short dwell, the Rayleigh distribution is reduced to a single-sided gaussian distribution, with twice the decibel value of fluctuation loss. Finally, the clutter rejection notches in the velocity response curve prevent some targets from being detected, making it difficult to achieve high probabilities of detection regardless of SNR. These losses in information, expressed in terms of the required increases in SNR to maintain a given detection performance, constitute the MTI loss components of miscellaneous signal-processing loss, often totaling 6 to 12 dB.

Pulsed Doppler Radar Systems

A pulsed doppler radar is a system in which targets are selected in one or more narrowband filters. For search radar, these filters must cover the velocity band occupied by targets, omitting or desensitizing filters containing clutter. As a result, the envelope of the filter response will have a shape similar to that of an optimized MTI canceler (Fig. 23.1.38). To generate the narrow filter responses, a pulsed doppler radar must transmit, receive, and process a train of pulses considerably longer than required for MTI. Coherent integration is inherent in the narrow doppler filters used in pulsed doppler systems, and hence these systems may operate with greater energy efficiency than MTI radars.

FIGURE 23.1.38 Frequency response of filter bank in pulsed doppler radar.

Pulsed doppler radar can operate in any of three modes:

1. Low-PRF mode, in which target detections are intended only within the unambiguous range of the waveform (Rmax < Ru). In most cases, this mode has multiple velocity ambiguities within the band of target velocities.
2. Medium-PRF mode, in which target detections are required at ranges and velocities beyond the unambiguous values (Rmax > Ru, vmax > vb).

3. High-PRF mode, in which target detections are intended only within the unambiguous velocity interval (vmax < vb). In most cases, this mode has multiple range ambiguities within the range interval occupied by targets.

When the radar platform is moving at velocity vp, high-PRF operation requires (vt max + 2vp) < vb to avoid aliasing of sidelobe clutter into the target filter. Thus, airborne radars operating at X band must typically use PRFs in excess of 100 kHz. An example of low-PRF pulsed doppler processing is the moving-target detector (MTD), shown in Fig. 23.1.39. Designed for application to conventional airport surveillance radars, this processor accepts digitized baseband signals from in-phase and quadrature phase detectors. The radar transmits 24 pulses per beam position, in two 12-pulse


FIGURE 23.1.39 Block diagram of moving target detector system (Ref. 25).

coherent processing intervals (CPIs). In each CPI, after two fill pulses, a 10-pulse burst is applied to the three-pulse cancelers, producing an eight-pulse burst for subsequent doppler filtering in the discrete Fourier transform. A parallel zero-velocity channel feeds a clutter map through a recursive filter, averaging over several scans of the antenna. The eight filter channels are passed to range-cell-averaging CFAR thresholds that desensitize those filters having range-extensive clutter. Blind speeds can be eliminated by the burst-to-burst PRF diversity used in the MTD, provided that the clutter spectrum is not excessively wide. The two sets of filter response curves are shown in Fig. 23.1.40, along with a typical rain clutter spectrum. In this example, the rain clutter will desensitize filters 6, 7, and 0 at both PRFs. The target aliased into filters 6 and 7 at one PRF appears in filter 5 at the other PRF, ensuring detection. If the clutter spectrum expands to cover more than about 20 percent of the blind-speed interval, loss of targets may be expected.

An airborne high-PRF doppler processor is shown in Fig. 23.1.41. The mainlobe clutter in this example appears just below the platform velocity, corresponding to a beam position displaced from the velocity vector. Sidelobe clutter extends from +vp to −vp, with enhancement at zero velocity where the clutter is viewed at vertical incidence (the altitude line). To reduce the dynamic range of the digital processing, analog filters pass only velocities between vp and vb − vp. Within the passband of the analog filters, many doppler filters are formed, usually through FFT processing. Because the high-PRF radar normally has only a few range cells within the pulse repetition interval, CFAR will be based on doppler-filter averaging. If a medium-PRF mode is included in the airborne radar, a combination of doppler and range-cell averaging may be used.

The requirement for fill pulses is the same in pulsed doppler radar as in MTI, to ensure that the clutter input has reached steady state before coherent processing begins. However, as PRF increases into the medium- or high-PRF region, the number of such pulses may become very large. The basic requirement is that the time during which the initial transient must be gated out of the processor, after each change in PRF or beam position, is the delay time for the most distant significant clutter sources.
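A toy version of this processing chain (three-pulse canceler followed by a DFT filter bank) is sketched below; the PRF, burst length, and target doppler are assumed values, not the MTD's actual design parameters.

```python
import numpy as np

# Sketch of MTD-style low-PRF processing: a three-pulse canceler followed by
# a DFT over the remaining pulses, as in Fig. 23.1.39. Values are assumed.

fr, n = 1000.0, 10
t = np.arange(n) / fr
echo = np.exp(2j * np.pi * 150.0 * t)         # moving target, 150-Hz doppler
clutter = np.ones(n, dtype=complex)           # fixed clutter: zero doppler

def mtd_channels(x):
    canceled = x[2:] - 2 * x[1:-1] + x[:-2]   # three-pulse (double-delay) canceler
    return np.abs(np.fft.fft(canceled))       # 8-point doppler filter bank

print(np.round(mtd_channels(clutter), 3))     # all zeros: clutter canceled
print(np.round(mtd_channels(echo), 2))        # energy split between the 125-
                                              # and 250-Hz bins (150-Hz target)
```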


FIGURE 23.1.40 MTD filter responses and clutter spectrum (Ref. 25).

CW Radar

A cw transmission has no velocity ambiguity, and so cw radar equipment can be designed with a broad clutter notch, providing up to 130 dB rejection of fixed and moving clutter. Coherent integration of target signals in selected doppler bands is also provided by a single set of narrowband filters, rather than the multiple sets required

FIGURE 23.1.41 High-PRF airborne radar spectrum and processor (Ref. 26).


for range-gated operation in pulsed systems. Three problems, however, restrict the usefulness of cw radar for search:

1. Isolation of receiver from transmitter. Direct feed-through of transmitter power to the receiver must be minimized, requiring separate antennas in high-power systems and careful design in all systems to avoid receiver saturation.

2. Magnitude of short-range clutter echo. The echo power received from clutter in a cw radar is proportional to the integrated product of the reflectivity, antenna gain factors, and (range)⁻⁴ over the common volume of the transmitting and receiving beams. In both volume and surface clutter, the echo power is dominated by the clutter at the shortest range in the common volume, and the effective clutter cross section is a function of beamwidth and of the range Rc to the point where the beams substantially overlap. The required clutter improvement for a cw radar may therefore be very high, because of the (R/Rc)⁴ term, where R is target range.

3. Transmitter noise. Both the direct feed-through from transmitter to receiver and the echoes from short-range clutter will contain random noise components from the transmitter. Special circuits may be designed to cancel the direct feed-through and the low-frequency components of reflected noise, but the higher-frequency components will appear with phase shift from the range delay and cannot be canceled completely. Subclutter visibility in cw systems is generally controlled by these noise components.

TRACKING AND MEASUREMENT

Detection of a target within a given resolution cell implies at least a coarse measurement of target coordinates, to the value describing the center of the cell in each coordinate. However, measurements can be made with much greater accuracy by interpolating target position within the resolution cell. This process is normally carried out in both search and tracking radars, with emphasis on high accuracy in the tracking case.

The Basic Process of Measurement

The basic process by which target position is interpolated within a resolution cell involves the formation of two offset response channels in the coordinate of interest (Fig. 23.1.42). The difference ∆ and sum Σ of these two channels are formed, and the ratio ∆/Σ provides a measure of the target displacement from the equisignal axis. The two channels may be formed either simultaneously or sequentially. Simultaneous channel formation is usual in range and doppler measurement, as well as in monopulse angle measurement. Sequential channel formation is often used for angle measurement, where a second antenna beam and receiving channel would be relatively expensive. The two outputs are stored over one or more switching cycles, permitting the ∆ and Σ channels to be generated (Fig. 23.1.43a). A convenient implementation of the ratio process is to apply slow automatic gain control (AGC) to the common receiving channel (Fig. 23.1.43b). In the case of simultaneous channel formation (Fig. 23.1.43c), the AGC operates as a closed loop in the Σ channel and open-loop in the ∆ channel. Whichever method is used, the normalized output ratio ∆/Σ can be calibrated in terms of target position and used either to correct the output data relative to the axis position or to close a tracking loop.

Measurement Accuracy

The rms measurement error σθ caused by thermal noise can be expressed in terms of the half-power width θ3 of the resolution cell, the slope constant km of the measurement system, and the signal-to-noise energy ratio E/N0 in the Σ channel of the system:

$$\sigma_\theta = \frac{\theta_3}{k_m\sqrt{2E/N_0}} \qquad (68)$$

When other types of interference are present, the ratio E/N0 may be replaced by E/I0∆, where I0∆ is the spectral density of interference in the ∆ channel.
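The thermal-noise error of Eq. (68) is a one-line computation; the beamwidth, slope constant, and energy ratio below are assumed example values.

```python
import math

# Sketch of Eq. (68): rms thermal-noise error of an interpolation measurement
# with half-power width theta3, slope constant km, and energy ratio E/N0.

def rms_error(theta3, km, e_n0):
    return theta3 / (km * math.sqrt(2.0 * e_n0))

# A 2-degree beam with the ideal km = 1.61 and a 20-dB energy ratio:
print(round(rms_error(2.0, 1.61, 100.0), 4))   # ~0.0878 deg
```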


FIGURE 23.1.42 Basic process of measurement: (a) two displaced response channels in coordinate z; (b) difference ∆ of two channels; (c) sum Σ of two channels; (d) normalized ratio ∆/Σ.

Ideal estimators can be characterized8 in terms of their slope constants:

Angle measurement: km = Lθ3/λ = 1.61 (for uniform illumination of aperture)
Time-delay measurement: kt = βτ3 = 1.61 (for rectangular spectrum)
Frequency measurement: kf = αB3 = 1.61 (for rectangular pulse)
Coherent frequency measurement: kf = αf B3f = 1.61

Here L = rms aperture width = (π/√3)w for a rectangular aperture of width w, β = rms spectral width = (π/√3)B for a rectangular spectrum of width B, α = rms time duration = (π/√3)τ for a rectangular pulse of width τ, and αf = rms time duration = (π/√3)tf for a coherent pulse train received with uniform amplitude over time tf. The half-power resolution cell widths are θ3 in angle, τ3 in time, and B3 or B3f in frequency.
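As a numerical illustration of Eq. (68) and the ideal slope constants above, the rms errors in angle, time, and frequency can be evaluated directly; the cell widths and energy ratio in this sketch are assumed values, not taken from the text:

    import math

    def rms_error(cell_width, slope_k, energy_ratio):
        # Eq. (68): sigma_z = z3 / (k_z * sqrt(2E/N0))
        return cell_width / (slope_k * math.sqrt(2.0 * energy_ratio))

    k = 1.61                        # ideal slope constant (uniform weighting)
    e_n0 = 10 ** (20 / 10)          # E/N0 = 20 dB (assumed)
    print(rms_error(1.5, k, e_n0))      # 1.5-deg beam  -> ~0.066 deg rms
    print(rms_error(1e-6, k, e_n0))     # 1-us pulse    -> ~44 ns rms
    print(rms_error(1e3, k, e_n0))      # 1-kHz line    -> ~44 Hz rms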

The ideal estimators are implemented using a Σ channel derived from uniform aperture illumination, uniform spectral weighting, or uniform weighting in time, giving a (sin x)/x response (sidelobe levels −13.5 dB). This is combined with a ∆ channel derived from linear-odd illumination or weighting. The linear-odd function generates a response that is the derivative of the Σ response (sidelobe levels −11 dB relative to the Σ mainlobe).

Practical Estimators. In most practical cases, the need for sidelobe control dictates use of tapered or weighted functions for aperture illumination, signal spectrum, and pulse-train amplitude. The weighting tends to increase the half-power width of the resolution cell and decrease the rms width in the transform coordinate, leaving the slope constant near 1.6. In a tracker that observes the target continuously, the received signal energy E will increase without limit. For purposes of evaluating tracking error, it must be assumed that the tracker averages over a number of pulses n = fr to = fr/(2βn), where to is the time constant of the tracking loop and βn is its (one-sided) noise bandwidth. For multiple-target phased arrays, to is the dwell time on the target of interest.
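For example, the number of pulses effectively averaged by a tracking loop follows directly from n = fr/(2βn); the repetition frequency, loop bandwidth, and single-pulse energy ratio below are illustrative assumptions:

    fr = 1000.0      # pulse repetition frequency, Hz (assumed)
    beta_n = 5.0     # one-sided loop noise bandwidth, Hz (assumed)
    snr1 = 10.0      # single-pulse E/N0 (10 dB, assumed)

    n = fr / (2.0 * beta_n)     # pulses averaged; equals fr * t_o
    print(n, n * snr1)          # 100 pulses -> effective energy ratio 1000 (30 dB)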

FIGURE 23.1.43 Sequential and simultaneous channels for measurement: (a) sequential channels with ratio circuit; (b) sequential channels with AGC; (c) simultaneous channels with AGC.

Angle Tracking

In a tracking radar, the antenna pattern is usually a narrow, circular beam, directed at the selected target either continuously, with a mechanically steered antenna, or with a time-shared phased-array beam. The electromechanical or electronic servo loop is controlled to minimize the angle errors, as measured by an error-sensing antenna and receiver system. In early tracking radars, the beam was scanned about the tracking axis in a narrow cone, producing amplitude modulation on signals from targets displaced from the axis. The conical-scanning radar has been largely replaced by monopulse radar, which forms the sum and difference beams simultaneously.

The normalized error is formed on a single pulse (or a few pulses received within the AGC time constant), eliminating errors caused in conical-scanning radar by target amplitude scintillation. A block diagram of a typical monopulse tracker is shown in Fig. 23.1.44.

FIGURE 23.1.44 Block diagram of conventional monopulse tracking radar (Ref. 6).

The conventional system illustrated uses three identical receiver channels to process the Σ and two ∆ signals. The Σ channel is used for transmitting, and on reception its output provides target detection, ranging, and AGC. It also serves as a phase reference for detection of the ∆ signals. These detected error signals, appearing as bipolar video pulses at the phase-detector output, are smoothed to DC and applied as inputs to the angle servo channels, causing the antenna to follow the target in azimuth and elevation.

Monopulse Antennas. The patterns required from a monopulse antenna consist of the Σ pattern, generally a circular pencil beam with sidelobes controlled by tapering of the aperture illumination, and a pair of ∆ patterns. A typical ∆ pattern may approximate that of Fig. 23.1.45a, calculated for a cosine illumination taper. The azimuth ∆ pattern will have two azimuth lobes of opposite polarity (0° and 180° phase), approximating that of Fig. 23.1.45b.

FIGURE 23.1.45 Typical monopulse antenna patterns, using cosine taper.

FIGURE 23.1.46 Normalized measurement slope versus sidelobe ratio for monopulse antenna in which ∆ pattern is derivative of Σ pattern.

This pattern was calculated for an illumination given by a linear-odd function multiplied by the cosine taper. The azimuth ∆ pattern in the elevation coordinate will reproduce the Σ elevation pattern. The elevation ∆ pattern will have the two lobes in elevation, and will reproduce the Σ pattern in azimuth. For an antenna with a ∆ pattern following the derivative of the Σ pattern, the beam broadening is more rapid than the decrease in slope, and the normalized slope km actually increases as sidelobe levels are reduced (Fig. 23.1.46). In most practical antennas, however, departure from the derivative ∆ pattern and various losses restrict the normalized slope to values near 1.6. The rms error of the angle estimate in the presence of thermal noise will be given by

σθ = θ3/(km√(2E/N0))    (69)

where θ3 is the half-power beamwidth of the Σ pattern, km is the normalized slope constant ≈ 1.6, and E/N0 is assumed large. This slope is the derivative of the normalized ∆ pattern, as shown in Fig. 23.1.45:

km = [d(∆/Σ)/d(θ/θ3)] at θ = 0    (70)

When the Σ signal-to-noise ratio presented to the phase-sensitive detectors of Fig. 23.1.44 is not high (S/N < 4), the estimation error will be increased by detector loss, and monopulse tracking performance will be degraded. Detailed analysis of this effect is given in Ref. 3, pp. 467–472.

When a reflector or lens is used as the radiating aperture, monopulse patterns are generated by a cluster of horns in the focal plane. The early four-horn feed clusters have given way to more advanced horn structures, using additional horns or multiple modes to generate efficient, low-sidelobe illumination functions.27 Table 23.1.4 shows the performance of five types of horn cluster, in terms of sum-channel aperture efficiency ηa, measurement slopes km in both coordinates, and sidelobe ratios Gs (mainlobe-to-sidelobe power) in sum and difference patterns. These values were derived by Hannan27 for rectangular apertures, but the absolute levels of performance are unchanged when an elliptical aperture having the same maximum dimensions is used. The efficiency will be higher for the elliptical aperture because it is referred to the smaller area of the ellipse.

Monopulse array systems using constrained feeds can be designed to have arbitrary Σ and ∆ illumination functions (e.g., Taylor functions for Σ and Bayliss functions28 for ∆), independently controlled in the feed networks. Such systems will have their efficiencies reduced by feed and phase-shifter losses, as well as by the selected taper functions. Phased-array systems using horn-fed reflectors or lenses can be described by the parameters of Table 23.1.4, with efficiencies reduced by losses in the phase shifter. Introduction of more complex horn structures having additional modes can produce illumination functions having low sidelobes and spillover, with gain and slope approximating those of constrained-feed arrays but with higher efficiencies due to decreased feed losses.
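A minimal sketch of the normalized monopulse estimate follows; it simply inverts the small-angle relation ∆/Σ ≈ km(θ/θ3) of Eq. (70), and the beamwidth and measured ratio are assumed values for illustration:

    def monopulse_angle(delta, sigma, theta3_deg, km=1.6):
        # Invert delta/sigma ~ km * (theta/theta3), valid near the axis
        return (delta / sigma) * theta3_deg / km

    # Illustrative: 2-deg beam, measured ratio delta/sigma = 0.08
    print(monopulse_angle(0.08, 1.0, 2.0))   # target ~0.1 deg off axis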

TABLE 23.1.4 Monopulse Feed Horn Performance

                                    H-plane            E-plane
Type of horn             ηa      Kr√ηy    km        Kr√ηx    km       Gsr, dB   Gse, dB
Simple four-horn         0.58    0.52     1.2       0.48     1.2      19        10
Two-horn dual-mode       0.75    0.68     1.6       0.55     1.2      19        10
Two-horn triple-mode     0.75    0.81     1.6       0.55     1.2      19        10
Twelve-horn              0.56    0.71     1.7       0.67     1.6      19        19
Four-horn triple-mode    0.75    0.81     1.6       0.75     1.6      19        19

Source: D. K. Barton and H. R. Ward, "Handbook of Radar Measurement," copyright 1984. Reprinted by permission of Artech House, Norwood, MA.


Range Tracking

Measurement of target range is carried out by estimating the time delay td between transmission and reception of each pulse, and calculating range as

R = c td/2    (71)

where c is the velocity of light (c = 2.997925 × 10^8 m/s in vacuum, somewhat less in the atmosphere). The time delay is estimated by counting clock pulses between the times of transmission and reception of the centroid of the pulse, and correcting for any fixed delays within the radar components (transmission line, filters, and the like). The accuracy of the measurement depends on the accuracy with which the centroid of the received pulse, contaminated by noise and other interference, can be determined. The ideal estimation process is to identify the centroid of the received pulse by passing the signal through a matched filter, differentiating with respect to time, and stopping the counter when the derivative passes

FIGURE 23.1.47 Block diagram of optimum pulse centroid estimator.

through zero. A block diagram of this process is shown in Fig. 23.1.47. The resulting time-delay accuracy is then given by Eq. (68) with parameters appropriate to the time coordinate:

σt = 1/(β√(2E/N0)) = τ3/(kt√(2E/N0))    (72)

where β = rms signal bandwidth, τ3 = half-power width of the processed pulse, and kt = slope constant ≈ 1.61.

In order for the circuit of Fig. 23.1.47 to produce centroid estimates without gross error from false zero crossings on noise, the signal energy ratio E/N0 must be high enough to avoid the occurrence of false alarms at the Σ threshold prior to reception of the target pulse.
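Converting the time-delay error of Eq. (72) to range through Eq. (71) gives a quick accuracy estimate; the pulse width and energy ratio in this sketch are assumptions for illustration:

    import math

    C = 2.997925e8   # velocity of light, m/s (value from the text)

    def sigma_range(tau3, kt, e_n0):
        sigma_t = tau3 / (kt * math.sqrt(2.0 * e_n0))   # Eq. (72)
        return 0.5 * C * sigma_t                        # Eq. (71): R = c*td/2

    print(sigma_range(1e-6, 1.61, 100.0))   # 1-us pulse, 20 dB -> ~6.6 m rms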

FIGURE 23.1.48 Centroid estimators for pulses in a train: (a) coherent processing; (b) noncoherent processing.

When the energy ratio of the individual pulse in a pulse train is low, time-delay estimation will normally be carried out using a tracking correlator, as shown in Fig. 23.1.48. Reception of a coherent pulse train permits the correlation to be performed prior to envelope detection (Fig. 23.1.48a). The i.f. filter H1(f) has a broad bandwidth passing the complete spectrum of the individual pulse, providing pulse compression if that pulse contains phase modulation. The locally generated reference signals labeled h2(t) and h′2(t) are approximately matched to the pulse width, and often take the form of rectangular gate and split-gate functions as shown. The ∆ output of this system is a DC voltage proportional to the time difference between the center of the gate function and the received pulse centroid. This voltage is then normalized to the Σ amplitude and used as input to the variable time-delay generator that initiates the gates. The noise error in coherent processing is given by Eq. (72), using the energy ratio of the pulse train as integrated in the narrowband filters.

When the pulse train is not coherent, the individual pulses must be passed through envelope detectors prior to narrowband filtering, as shown in Fig. 23.1.48b. The performance of the noncoherent processor, when the energy ratio of the individual pulse is low, is reduced by loss (small-signal suppression) in the envelope detector, in a way similar to that of the angle tracker.
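The split-gate discriminant described above can be sketched in a few lines of code; this is an illustrative simulation with assumed waveform parameters, not a reproduction of the circuit of Fig. 23.1.48:

    import numpy as np

    fs = 100e6                          # video sample rate, Hz (assumed)
    t = np.arange(2048) / fs
    pw = 1e-6                           # pulse width, s (assumed)
    video = ((t >= 5.2e-6) & (t < 5.2e-6 + pw)).astype(float)  # pulse at 5.2 us
    video += 0.05 * np.random.randn(t.size)                    # receiver noise

    gate = 5.0e-6                       # current estimate of pulse center
    early = (t >= gate - pw) & (t < gate)
    late = (t >= gate) & (t < gate + pw)
    delta = video[late].sum() - video[early].sum()   # difference channel
    sigma = video[late].sum() + video[early].sum()   # sum channel
    print(delta / sigma)   # positive error -> move the gate later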

Doppler Tracking

Doppler tracking is used in coherent radars for two purposes: (1) it permits the signal to pass a narrowband filter, providing coherent integration and rejecting clutter; and (2) it provides accurate radial velocity data on the target. A transmission at frequency f0, reflected from a target moving with radial velocity vt = Ṙ, will be received at f0 + fd. The change in frequency fd is known as the doppler shift:

fd = f0[(c − vt)/(c + vt) − 1] = −2f0vt/(c + vt) = −(2vt/λ)[1 − vt/c + vt²/c² − ···] ≈ −2vt/λ    (73)

Target velocity is calculated from the measured doppler shift as

vt = −(fd c/2f0)[1 − fd/(2f0) + fd²/(4f0²) − ···] ≈ −fd λ/2    (74)
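Equations (73) and (74) reduce to simple conversions between doppler shift and radial velocity; the carrier frequency and target velocity in this sketch are assumed values:

    C = 2.997925e8   # velocity of light, m/s

    def doppler_shift(f0, vt):
        # Exact two-way form of Eq. (73): fd = -2*f0*vt / (c + vt)
        return -2.0 * f0 * vt / (C + vt)

    def velocity(f0, fd):
        # First-order inverse, Eq. (74): vt ~ -fd * lambda / 2
        return -fd * (C / f0) / 2.0

    fd = doppler_shift(10e9, 300.0)   # 10-GHz radar, 300-m/s receding target
    print(fd, velocity(10e9, fd))     # fd ~ -20 kHz; velocity recovered ~300 m/s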

In most cases, the signal bandwidth is small enough relative to f0 that the doppler shift can be regarded as a simple displacement of the spectrum relative to that transmitted, and the target velocity will be small enough relative to c that the approximations in Eqs. (73) and (74) are valid. The frequency accuracy of an ideal estimator is given by Eq. (68) with parameters appropriate to the frequency coordinate:

σf = 1/(α√(2E/N0)) = B3/(kf√(2E/N0))    (75)

When a coherent pulse train is measured, the applicable bandwidth B3 in Eq. (75) is the bandwidth of each spectral line shown in Fig. 23.1.9. Figure 23.1.49 shows a block diagram of a range-gated doppler centroid estimator for use with a coherent pulse train. Two narrowband filters, displaced to each side of the tracking frequency fc, are used to form Σ and ∆ channels in the frequency coordinate. The Σ i.f. signal is used as a reference to phase-detect the ∆ signal, providing a DC error value that can be used to control an oscillator, shifting the LO to center the signal at the frequency fc. Before measurements can be made with the accuracy implied by Eq. (75), the central line of the received spectrum must be placed in the narrowband i.f. filter-discriminator of Fig. 23.1.49. When the waveform duty factor is small, or when pulse compression is used, there will be many spectral lines present (2B/fr lines within

FIGURE 23.1.49 Block diagram of frequency centroid estimator for use with coherent pulse train.

the main lobe of the spectrum, for a signal bandwidth B at repetition frequency fr). The frequency ambiguity can be resolved in one of several ways:

1. A discriminator operating at i.f. on the spectral envelope may obtain coarse frequency data adequate to place the central line in the fine discriminator bandwidth, if the duty factor is not too small.
2. For systems with adequate range resolution, range data may be differentiated to obtain an estimate of vt and hence of fd.
3. Observation of the target at two or more PRFs may resolve the ambiguity.

Search-Radar Measurements

As the search-radar beam scans (usually in azimuth), measurements can be made of target range and angle (and sometimes of radial velocity). The equations given above for tracking radar can be applied to the range and velocity measurements, using the energy ratio E/N0Lp obtained during the scan across the target. Measurements of azimuth angle are made by estimating the time (and hence the antenna angle) of the centroid of the pulse-train envelope received by the radar. This time measurement may be performed with a split-gate tracker similar to that used in ranging, but operating on a time scale matched to the observation time to rather than the pulse width. The target to be measured is selected by a range gate or equivalent channel of the detection processor. Alternatively, the signal may be integrated, differentiated, and the centroid estimated as the point where the derivative passes through zero.

FIGURE 23.1.50 Scintillation error in a scanning radar.5

The azimuth angle error due to thermal noise will be

σθ = θ3/√(2E/N0)    (76)

where the constant 2 in the denominator includes the slope constant, the doubling of the energy ratio normally appearing in error equations, and the beamshape loss effect. This factor is essentially constant for all beam shapes and for one-way and two-way beam patterns,8 where θ3 is the one-way half-power beamwidth. Thermal noise and other interference are not the only factors affecting search-radar angle accuracy. Fluctuating targets will induce a scintillation error in the estimate, independent of SNR. This error depends on

the relationship between the target amplitude correlation time tc and the observation time to of the beam. The number of independent samples obtained is

ne = 1 + to/tc    (77)

Figure 23.1.50 shows the dependence of scintillation error on the quantity (ne − 1) = to/tc. When this ratio is between 0.1 and 10, the error will exceed 0.05θ3.
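Equation (77) is easily evaluated; the observation and correlation times below are assumed values for illustration:

    t_o = 0.05   # observation time of the beam, s (assumed)
    t_c = 0.02   # target amplitude correlation time, s (assumed)
    n_e = 1 + t_o / t_c
    print(n_e)   # 3.5 independent samples; to/tc = 2.5 falls in the
                 # 0.1-to-10 region where the error exceeds 0.05*theta3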

Error Analysis

The total error in estimating a target coordinate is found by evaluating each of several error components and forming the rss sum

σ = √(σ1² + σ2² + ···)    (78)

The individual error components include the thermal noise error from Eqs. (68) to (76), errors from other interference components, the scintillation error for a scanning radar, and other components due to dynamic lags in tracking, target glint, atmospheric refraction, multipath reflections, and imperfections in the radar components and circuits.

Random Interference Components. In addition to thermal noise, the following interference components will cause random errors, which are estimated by substitution of E/I∆ for E/N0 in Eqs. (68) to (76):

1. Clutter, for which I∆ is estimated from the power spectral density of clutter within ±βn of the target spectral lines; additional error may result when the clutter in the Σ and ∆ channels is correlated (Ref. 3, pp. 531–533).
2. Jamming, in the case of stand-off and escort jammers, using the jamming spectral density J∆; in the case of a self-screening noise jammer, the jamming constitutes a beacon signal and J0/N0 is the energy ratio.
3. Multipath reflections, for which the energy ratio can be equated with the power ratio S/M∆, determined by integration of the reflected power over the ∆ pattern (Ref. 2, pp. 512–531).
4. Cross-polarized signal, using the ratio (σ/σc)(∆c/Σ)², where σc is the cross-polarized target cross section and ∆c is the cross-polarized difference-channel voltage on the tracking axis (Ref. 3, pp. 415–416).

Target-Induced Error Components. The error components induced by target characteristics are glint, dynamic lag, and scintillation errors (the latter in angle estimates of scanning radars only, as discussed above). Target glint results from interaction of the several scatterers of the target, changing the phase pattern of the echo in space and time. Where the scatterers are distributed uniformly over a target span L, the rms error will be approximately L/3 in that coordinate (producing an angle error of L/3R radians). Dynamic lag results primarily from target accelerations, which can be followed only imperfectly by the tracking loops. The error for an acceleration at in any coordinate will be

ea = at/Ka = at/(2.5βn²)    (79)

where Ka is the acceleration error constant of the tracking loop and βn is the (one-sided) loop bandwidth. It is the requirement for adequate bandwidth to follow target accelerations that prevents the tracking radar from operating with heavy smoothing of random errors.
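A quick check of Eq. (79) with assumed numbers shows the bandwidth tradeoff:

    a_t = 3 * 9.81            # target acceleration, m/s^2 (3g, assumed)
    beta_n = 2.0              # one-sided loop bandwidth, Hz (assumed)
    K_a = 2.5 * beta_n ** 2   # acceleration error constant, Eq. (79)
    print(a_t / K_a)          # ~2.9 m dynamic lag in the range coordinate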

FIGURE 23.1.51 Elevation angle error versus target range, for different elevation angles and altitudes.

FIGURE 23.1.52 Range error versus target range, for different elevation angles and altitudes.

Atmospheric Refraction. Major errors in elevation angle and range result from the change in refractive index of the atmosphere with altitude. The beam tends to bend downward as it leaves the earth, causing the angle of arrival measured by the radar to be higher than the actual target elevation. The reduced velocity of propagation in the atmosphere also makes the time delay somewhat greater than would be measured in a vacuum. These bias errors in elevation and range, for a radar at sea level, are plotted in Figs. 23.1.51 and 23.1.52 as functions of target range R and elevation angle E0.

Other Sources of Error. The remaining error components that must be evaluated in order to estimate radar accuracy consist of instrumental errors in the design and construction of the radar. These are both mechanical and electrical in origin, and must be evaluated for each particular radar using detailed data on design parameters and construction or circuit tolerances.

REFERENCES

Reference Books on Radar

1. Skolnik, M. I. "Introduction to Radar Systems," 2nd ed., McGraw-Hill, 1980.
2. Barton, D. K. "Modern Radar System Analysis," Artech House, 1988.
3. Nathanson, F. E. "Radar Design Principles: Signal Design and the Environment," 2nd ed., McGraw-Hill, 1991.
4. Rihaczek, A. "Principles of High Resolution Radar," McGraw-Hill, 1969.
5. Barton, D. K., and H. R. Ward "Handbook of Radar Measurement," Artech House, 1984.
6. Skolnik, M. I. (ed.) "Radar Handbook," 2nd ed., McGraw-Hill, 1990.

Other References on Radar Principles

7. IEEE Standard Dictionary of Electrical and Electronics Terms, ANSI/IEEE Std. 100–1988, Institute of Electrical and Electronics Engineers, 1988.
8. Blake, L. V. Prediction of Radar Range, Chap. 2 in Ref. 6.
9. Marcum, J. I. A Statistical Theory of Target Detection by Pulsed Radar, IRE Trans., April 1960, Vol. IT-6, No. 2, pp. 59–267.
10. Barton, D. K. Simple Procedures for Radar Detection Calculations, IEEE Trans., September 1969, Vol. AES-5, No. 5, pp. 837–846.
11. Swerling, P. Probability of Detection for Fluctuating Targets, IRE Trans., April 1960, Vol. IT-6, No. 2, pp. 145–268.
12. Crane, R. K. Microwave Scattering Parameters for New England Rain, MIT Lincoln Laboratory Tech. Report No. 426, October 3, 1966.
13. Goldstein, H. Attenuation by Condensed Water, Sec. 8.6 in D. E. Kerr (ed.), "Propagation of Short Radio Waves," McGraw-Hill, 1951.
14. Gunn, K. L. S., and T. W. R. East The Microwave Properties of Precipitation Particles, Q. J. R. Meteorol. Soc., October 1954, Vol. 80, pp. 522–545.
15. Ryde, J. W., and D. Ryde Attenuation of Centimeter and Millimeter Waves by Rain, Hail, Fogs and Clouds, General Electric Co., Report 8670, 1945.
16. Wexler, R., and D. Atlas Radar Reflectivity and Attenuation of Rain, J. Appl. Meteorol., April 1963, Vol. 2, pp. 276–280.
17. Marshall, J. S., and W. M. Palmer The Distribution of Raindrops with Size, J. Meteorol., August 1948, Vol. 5, pp. 165–166.
18. Medhurst, R. G. Rainfall Attenuation of Centimeter Waves: Comparison of Theory and Measurement, IEEE Trans., July 1965, Vol. AP-13, No. 4, pp. 550–564.
19. Blevis, B. C. Losses Due to Rain on Radomes and Antenna Reflecting Surfaces, IEEE Trans., January 1965, Vol. AP-13, No. 1, pp. 175–176.
20. Ruze, J. More on Wet Radomes, IEEE Trans., September 1965, Vol. AP-13, No. 5, pp. 823–824.
21. Millman, G. H. Atmospheric Effects on VHF and UHF Propagation, Proc. IRE, August 1958, Vol. 46, No. 8, pp. 1492–1501.
22. Rihaczek, A. W. "Principles of High-Resolution Radar," McGraw-Hill, 1969.

23. Gregers-Hansen, V. Constant False Alarm Rate Processing in Search Radars, Radar-73, Radar—Present and Future, IEEE Publ. No. 105, October 1973, pp. 325–332.
24. Shrader, W. W. MTI Radar, Chap. 15 in Ref. 6.
25. Cartledge, L., and R. M. O'Donnell Description and Performance Evaluation of the Moving Target Detector, MIT Lincoln Laboratory Project Report ATC-69, March 8, 1977.
26. Schleher, D. C. "MTI and Pulsed Doppler Radar," Artech House, 1991.
27. Hannan, P. W. Optimum Feeds for All Three Modes of a Monopulse Antenna, IRE Trans., September 1961, Vol. AP-9, No. 5, pp. 444–461.
28. Bayliss, E. T. Design of Monopulse Antenna Difference Patterns with Low Sidelobes, Bell System Tech. J., May–June 1968, Vol. 47, No. 5, pp. 623–650.

CHAPTER 23.2

RADAR TECHNOLOGY
Harold R. Ward

Radar development, since its beginning during World War II, has been paced by component technology. As better rf tubes and solid-state devices have been developed, radar technology has advanced in all its applications. This subsection presents a brief overview of radar technology. It emphasizes the components used in radar systems that have been developed specifically for the radar application. Since there are far too many devices for us to mention, we have selected only the most fundamental to illustrate our discussion. In this subsection, the sequence in which each subsystem is discussed parallels the block diagram of a radar system. Pictures of various radar components give an appreciation for their physical size, while block diagrams and tabular data describe their characteristics. The material for this subsection was taken by permission largely from Ref. 1, to which we refer the reader for more detail and references.

RADAR TRANSMITTERS

The requirements of radar transmitters have led to the development of a technology quite different from that of communication systems. Pulse radar transmitters must generate very high power, pulsed with a relatively low duty ratio. The recent development of high-power rf transistors capable of producing a few hundred watts of peak output power at S band has made solid-state radar transmitters feasible (see Ref. 2a). A corporate combiner is needed to sum the outputs of many devices to obtain the power levels required of medium- and long-range search radars. Such solid-state transmitters offer the following advantages over tube transmitters: low-voltage operation (typically 36 V), no modulator required, and reliable operation through redundant architecture. While the solid-state transmitter still costs more than its tube equivalent, the solid-state technology is developing more rapidly.

Tube-type power oscillators and power-amplifier stages consist of three basic components: a power supply, a modulator, and a tube. The power supply converts the line voltage into a dc voltage of a few hundred to a few thousand volts. The modulator supplies power to the tube during the time the rf pulse is being generated. Although the modulation function can be applied in many different ways, it must be designed to avoid wasting power in the time between pulses. The third component, the rf tube, converts the dc voltage and current to rf power. The devices and techniques used in the three transmitter components are discussed in the following paragraphs.

RF Tubes

The tubes used in radar transmitters are classified as crossed-field, linear-beam, or gridded (see Sec. 7). The crossed-field and linear-beam tubes are of primary interest because they are capable of higher peak powers at microwave frequencies. Gridded tubes such as triodes and tetrodes are sometimes used at UHF and below. Since these applications are relatively few, gridded tubes will not be described here (see Sec. 7).

TABLE 23.2.1 Comparison of Modulators

Modulator               Fig.     Duty cycle          Mixed pulse  Pulse-length              Pulse          Load arc   Switch  Crowbar   Modulator
                                 flexibility         lengths      capability                flatness                  arc     required  voltage level
Line-type:
  Thyratron/SCR         25-50    Limited by          No           Short; large PFN          Ripples        Good       No      No        Medium/low
                                 charging circuit
  Magnetic modulator    25-52    Limited by reset    No           Long; large C's and PFN   Ripples        Good       No      No        Low
                                 and charging time
  Hybrid SCR-magnetic   …        Limited by reset    No           Long; large C's and PFN   Ripples        Good       No      No        Low
    modulator                    and charging time
Active switch:
  Series switch         25-51a   No limit            Yes          Excellent; large          Good           Good       Maybe   Yes       High
                                                                  capacitor bank
  Capacitor-coupled     25-51b   Limited             Yes          Large coupling capacitor  Good           Good       Maybe   Yes       High
  Transformer-coupled   25-51c   Limited             Yes          Difficult; XF gets big;   Good           Fair       Maybe   Yes       Medium-high
                                                                  large capacitor bank
  Modulator anode       …        No limit            Yes          Excellent; large          OK, but        Excellent  Yes     Yes       High
                                                                  capacitor bank            efficiency low*
  Grid                  …        No limit            Yes          Excellent; large          Excellent      Excellent  Yes     Yes       Low
                                                                  capacitor bank

*Unless ON and OFF tubes carry very high peak current or unless modulator anode has high mu. After Weil, Ref. 2.

Modulators

If a pulsed radar transmitter is to obtain high efficiency, the current in the output tube must be turned off between pulses. The modulator performs this function by acting as a switch, usually in series with the anode current path. Some rf tubes have control electrodes that can also be used to provide the modulation function. There are three kinds of modulators in common use today: the line-type modulator, the magnetic modulator, and the active-switch modulator. Their characteristics are compared in Table 23.2.1.

The line-type modulator is the most common and is often used to pulse a magnetron transmitter. A typical circuit including the high-voltage power supply and magnetron is shown in Fig. 23.2.1. Between pulses, the pulse-forming network (PFN) is charged. A trigger fires the thyratron V1, shorting the input to the PFN, which causes a voltage pulse to appear at the transformer T1. The PFN is designed to produce a rectangular pulse at the magnetron cathode, with the proper voltage and current to cause the magnetron to oscillate. The line-type modulator is relatively simple but has an inflexible pulse width. Active-switch modulators are capable of varying their pulse width within the limitation of the energy stored in the high-voltage power supply. A variety of active-switch cathode pulse modulators is shown in Fig. 23.2.2.
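As a rough sizing sketch for the PFN in a line-type modulator, the conventional relations Z0 = √(L/C) per section and pulse width τ ≈ 2n√(LC) into a matched load can be used; these relations and the numbers below are standard textbook assumptions, not values taken from the text:

    import math

    def pfn_section(z0, pulse_width, n_sections):
        # Assumed relations: Z0 = sqrt(L/C); tau = 2 * n * sqrt(L*C)
        lc = (pulse_width / (2.0 * n_sections)) ** 2
        c = math.sqrt(lc) / z0
        return lc / c, c          # (L, C) per section

    L, Cap = pfn_section(50.0, 1e-6, 5)   # 50-ohm, 1-us, 5-section PFN
    print(L, Cap)                         # ~5 uH and ~2 nF per section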

FIGURE 23.2.1 Line-type modulator.12

FIGURE 23.2.2 Active-switch cathode pulsers: (a) direct-coupled; (b) capacitor coupled; (c) transformer coupled; (d) capacitor- and transformer-coupled.2

FIGURE 23.2.3 Magnetic modulator.2

Active-switch modulators using a vacuum tube free of gas, but capable of passing high current and holding off high voltage, are called hard-tube modulators. The magnetic modulator, a third type of cathode pulse modulator (Fig. 23.2.3), has the advantage that no thyratron or switching device is required. Its operation is based on the saturation characteristics of inductors L2, L3, and L4. A long, low-amplitude pulse is applied to L1 to charge C1. When C1 is nearly charged, L2 saturates, and the energy in C1 is transferred resonantly to C2. The process is continued to the next stage, where the transfer time is about one-tenth that of the stage before. The energy in the pulse is nearly maintained, so that at the end of the chain a short-duration, high-amplitude pulse is generated.

Power Supplies and Regulators

The power supply converts prime power from the ac line to dc power, usually at a high voltage. The dc power must be regulated to remove the effects of line-voltage and load variation. Protective circuitry is usually included with the high-voltage power supply to prevent the rf tube from being damaged in the event of a fault. Improper triggers and tube arcs are detected and used to trigger a crowbar circuit that discharges the energy stored in the high-voltage power supply. The crowbar is a triggered spark gap capable of dissipating the full energy of the power supply. Thyratrons, ignitrons, ball gaps, and triggered vacuum gaps are used.

Stability. Radar systems with moving-target indication (MTI) place unusually tight stability requirements on their transmitters. Small changes in the amplitude, phase, or frequency from one pulse to the next can degrade MTI performance. In the transmitter, the MTI requirements appear as constraints on voltage, current, and timing variations from pulse to pulse. The relation between voltage variations and variation in amplitude and phase shift differs with the tube type used. Table 23.2.2 lists stability factors for the various tube types used with a high-voltage power supply (HVPS).

RADAR ANTENNAS

The great variety of radar applications has produced an equally great variety of radar antennas. These vary in size from less than a foot to hundreds of feet in diameter. Since it is not feasible even to mention each of the types here, we shall discuss the three basic antenna categories, search antennas, track antennas, and multifunction antennas, after first reviewing some basic antenna principles.

A radar antenna directs the radiated power and receiver sensitivity to the azimuth and elevation coordinates of the target. The ability of an antenna to direct the radiated power is described by its antenna pattern. A typical antenna pattern is shown in Fig. 23.2.4. It is a plot of radiated field intensity measured in the far field (a distance greater than twice the diameter squared divided by the wavelength from the antenna) and is plotted as a function of azimuth and elevation angle. Single cuts through the two-dimensional pattern, as shown in Fig. 23.2.5, are more often used to describe the pattern. The principle of reciprocity assures that the antenna pattern describes its gain as a receiver as well as a transmitter. The gain is defined relative to an isotropic radiator.

FIGURE 23.2.4 Three-dimensional pencil-beam pattern of the AN/FPQ-6 radar antenna. (Courtesy of D. D. Howard, Naval Research Laboratory)

TABLE 23.2.2 Stability Factors

                                                                                          Current or voltage change for
                                                                                          1% change in HVPS voltage
Tube type                    Frequency- or phase-modulation        Impedance ratio,       Line-type       Low-impedance* hard-tube
                             sensitivity                           dynamic/static         modulator       modulator, or dc operation
Magnetron                    ∆f/f = (0.001 to 0.003) ∆I/I          0.05–0.1               ∆I = 2%         ∆I = 10–20%
Stabilotron or stabilized    ∆f/f = (0.0005 to 0.002) ∆I/I         0.05–0.1               ∆I = 2%         ∆I = 10–20%
  magnetron
Backward-wave CFA            ∆φ = 0.4 to 1.0° for 1% ∆I/I          0.05–0.1               ∆I = 2%         ∆I = 10–20%
Forward-wave CFA             ∆φ = 1.0 to 3.0° for 1% ∆I/I          0.1–0.2                ∆I = 2%         ∆I = 5–10%
Klystron                     ∆φ/φ ≈ (1/2)(∆E/E), φ ≈ 5λ;           0.67                   ∆E = 0.8%       ∆E = 1%
                             ∆φ ≈ 10° for 1% ∆E/E
TWT                          ∆φ/φ ≈ (1/3)(∆E/E), φ ≈ 15λ;          0.67                   ∆E = 0.8%       ∆E = 1%
                             ∆φ ≈ 20° for 1% ∆E/E
Triode or tetrode            ∆φ = 0 to 0.5° for 1% ∆I/I            1.0                    ∆I = 1%         ∆I = 1%

*A high-impedance modulator is not listed because its output would (ideally) be independent of HVPS voltage.
Source: Weil, Ref. 2.

The gain used as a defining parameter is the gain at the peak of the beam or main lobe (see Fig. 23.2.5). This is the one-way power gain of the antenna:

GP = 4πAηa/λ²

FIGURE 23.2.5 Radiation pattern for a particular paraboloid reflector antenna illustrating the main-lobe and sidelobe radiation.3

where A = area of the antenna aperture (reflector area for a horn-fed reflector antenna), λ = radar wavelength (in units consistent with A), and ηa = aperture efficiency, which accounts for all losses inherent in the process of illuminating the aperture.

Tapered aperture-illumination functions designed to produce low sidelobes also result in lower aperture efficiency ηa and larger beamwidth θ3, as shown in Fig. 23.2.6. A second gain definition sometimes used is directive gain. This is defined as the maximum radiation intensity, in watts per square meter, divided by the average radiation intensity, where the average is taken over all azimuth and elevation angles. Directive gain can be inferred from the product of the main-lobe widths in azimuth and elevation, over a wide range of tapers (including uniform). For example, an array antenna, with no spillover of illumination

FIGURE 23.2.6 Aperture efficiency and beamwidth as a function of highest side-lobe level for a circular aperture: (a) aperture efficiency; (b) normalized beamwidth.5

power, gives

Gd = 36,000/(θ3aθ3e)    (80a)

where θ3a = 3-dB width of the main lobe in the azimuth coordinate (degrees) and θ3e = 3-dB main-lobe width in the elevation coordinate (degrees). For horn-fed antennas the constant in Eq. (80a) is about 25,000.

Search Antennas

Conventional surface and airborne search radars generally use mechanically scanned horn-fed reflectors for their antennas. The horn radiates a spherical wavefront that illuminates the reflector. The shape of the reflector is designed to cause the reflected wave to be in phase at any point on a plane in front of the reflector. This focuses the radiated energy at infinity. Mechanically scanning search radars generally have fan-shaped beams that are narrow in azimuth and wide in the elevation coordinate. In a typical surface-based air-search radar the upper edge of the beam is shaped to follow a cosecant-squared function. This provides coverage up to a fixed altitude. Figure 23.2.7 illustrates the effect of cosecant-squared beam shaping on the coverage diagram as well as on the antenna pattern. In the horn-fed reflector the shaping can be achieved by either the reflector or the feed, and the gain constant in Eq. (80a) is reduced to about 20,000.

Tracking-Radar Antennas

The primary function of a tracking radar is to make accurate range and angle measurements of a selected target's position. Generally, only a single target position is measured at a time, as the antenna is directed to follow the target by tracking servos. These servos smooth the errors measured from beam center to make pointing corrections. The measured errors, along with the measured position of the antenna, provide the target-angle information. Tracking antennas like that of the AN/FPS-16 use circular apertures to form a pencil beam about 1° wide in each coordinate. The higher radar frequencies (S, C, and X band) are preferred because they allow a smaller aperture for the same beamwidth. The physically smaller antenna can be more accurately pointed. In this section we discuss aperture configurations, feeds, and pedestals.

One of the simplest methods of producing an equiphase wavefront in front of a circular aperture uses a parabolic reflector. A feed located at the focus directs its energy to illuminate the reflector. The reflected energy is then directed into space, focused at infinity. The antenna is inherently broadband because the electrical path length from the feed to the reflector to the plane wavefront is the same for all points on the wavefront.
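The two gain definitions above can be compared numerically; the aperture size, wavelength, efficiency, and beamwidths in this sketch are assumed for illustration:

    import math

    def power_gain(area, wavelength, eta_a):
        # One-way power gain: Gp = 4*pi*A*eta_a / lambda^2
        return 4.0 * math.pi * area * eta_a / wavelength ** 2

    def directive_gain(az_bw_deg, el_bw_deg, k=36000.0):
        # Eq. (80a); use k ~ 25,000 for horn-fed reflectors,
        # ~20,000 with csc^2 shaping
        return k / (az_bw_deg * el_bw_deg)

    area = math.pi * 1.5 ** 2                                 # 3-m dish (assumed)
    print(10 * math.log10(power_gain(area, 0.1, 0.55)))       # ~36.9 dB at S band
    print(10 * math.log10(directive_gain(2.3, 2.3)))          # ~38.3 dB (array case)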

FIGURE 23.2.7 Elevation coverage of a cosecantsquared antenna: (a) desired elevation coverage; (b) corresponding antenna pattern desired; (c) realizable elevation coverage with pattern shown in (d); (d) actual cosecant-squared antenna pattern.8


FIGURE 23.2.8 Schematic diagram of a Cassegrain reflector antenna.9

Locating the feed in front of the aperture is sometimes inconvenient mechanically. It also produces spillover lobes where the feed pattern misses the reflector. The Cassegrain antenna shown in Fig. 23.2.8 avoids these difficulties by placing a hyperbolic subreflector between the parabolic reflector and its focus. The feed now illuminates the subreflector, which in turn illuminates the parabola and produces a plane wavefront in front of the aperture.

Lenses can also convert the spherical wavefront emanating from the feed to a plane wavefront over a larger aperture. As the electromagnetic energy passes through the lens, it is focused at infinity (see Fig. 23.2.9). Depending on the index of refraction n of the lens, a concave or convex lens may be required. Lenses are typically heavier than reflectors, but they avoid the blockage caused by the feed or subreflector.

FIGURE 23.2.9 Geometry of simple converging lenses: (a) n > 1; (b) n < 1.9

A single feed providing a single beam is unable to supply the angle-error information necessary for tracking. To obtain azimuth and elevation error information, feeds have been developed that scan the beam in a small circle about the target (conical scan) or that form multiple beams about the target (monopulse). Conical scanning may be caused by rotating a reflector behind a dipole feed or by rotating the feed itself. It has the advantage, compared with monopulse, that less hardware is required in the receiver, but at the expense of somewhat less accuracy. Modern trackers more often use a monopulse feed with multiple receivers. Early monopulse feeds used four separate horns to produce four contiguous beams that were combined to form a reference beam and azimuth and elevation difference beams. More recently, multimode feeds have been developed to perform this function more efficiently with fewer components (see Fig. 23.1.44).

Shaft-angle encoders quantize radar pointing angles through mechanical connections to the azimuth and elevation axes. The output indicates the angular position of the mechanical boresight axis relative to a fixed angular coordinate system. Because these encoders make an absolute measurement, their outputs contain 10 to 20 bits of information. A variety of techniques is used, the complexity increasing with the accuracy required. Atmospheric errors ultimately limit the number of useful bits to about 20, or 0.006 mrad. In less

precise tracking applications, synchros attached to the azimuth and elevation axes indicate angular position within a fraction of a degree.

Multifunction Arrays

Array antennas form a plane wavefront in front of the antenna aperture from the contributions of many individual radiating elements, which, when driven together, constitute the array. The elements are usually spaced about 0.6 wavelength apart. Most applications use planar arrays, although arrays conformal to cylinders and other surfaces have been built. Phased arrays are steered by tilting the phase front independently in two orthogonal directions called the array coordinates. Scanning in either array coordinate causes the beam to move along a cone whose center is at the center of the array. The paths the beam follows when steered in the array coordinates are illustrated in Fig. 23.2.10, where the z axis is normal to the array. As the beam is steered away from the array normal, the projected aperture in the beam's direction varies, causing the beamwidth to vary proportionately.

Arrays can be classified as either active or passive. Active arrays contain duplexers and amplifiers behind every element or group of elements; passive arrays are driven from a single feed point. Only the active arrays are capable of higher power than conventional antennas. Both passive and active arrays must divide the signal from a single transmission line among all the elements of the array. This can be done by an optical feed, a corporate feed, or a multiple-beam-forming network. The optical feed is illustrated in Fig. 23.2.11. A single feed, usually a monopulse horn, illuminates the array with a spherical phase front. Power collected by the rear elements of the array is transmitted through the phase shifters that produce a planar front and steer the array. The energy may then be radiated from the other side of the array, as in the lens, or be reflected and reradiated through the collecting elements, where the array acts as a steerable reflector.

Corporate feeds can take many forms, as illustrated by the series-feed networks shown in Fig. 23.2.12 and the parallel-feed networks shown in Fig. 23.2.13. All use transmission-line components to divide the signal among the elements. Phase shifters can be located at the elements or within the dividing network.

FIGURE 23.2.10 Beam-steering contours for a planar array.

FIGURE 23.2.11 Optical-feed systems: (a) lens; (b) reflector.11

FIGURE 23.2.12 Series-feed networks: (a) end feed; (b) center feed; (c) separate optimization; (d ) equal path length; (e) series phase shifters.11

FIGURE 23.2.13 Parallel-feed networks: (a) matched corporate feed; (b) reactive corporate feed; (c) reactive stripline; (d) multiple reactive divider.11

FIGURE 23.2.14 Butler beam-forming network.11

FIGURE 23.2.15 Typical Reggia-Spencer phase shifter.12

Multiple-beam networks are capable of forming simultaneous beams with the array. The Butler matrix shown in Fig. 23.2.14 is one such technique. It connects the N elements of a linear array to N feed points corresponding to N beam outputs. It can be applied to two-dimensional arrays by dividing the array into rows and columns.

The phase shifter is one of the most critical components of the array. It produces controllable phase shift over the operating band of the array. Digital and analog phase shifters have been developed using both ferrites and pin diodes. Phase-shifter designs always strive for low cost, low loss, and high power-handling capability. The Reggia-Spencer phase shifter consists of a ferrite inside a waveguide, as illustrated in Fig. 23.2.15. It delays the rf signal passing through the waveguide. The amount of phase shift can be controlled by the current in the solenoid, through its effect on the permeability of the ferrite. This is a reciprocal phase shifter that has the same phase shift for signals passing in either direction. Nonreciprocal phase shifters (where phase-shift polarity reverses with the direction of propagation) are also available. Either reciprocal or nonreciprocal phase shifters can be locked or latched in many states by using the permanent magnetism of the ferrite.

Phase shifters have also been developed using pin diodes in transmission-line networks. One configuration, shown in Fig. 23.2.16, uses diodes as switches to change the signal path length of the network. A second type uses pin diodes as switches to connect reactive loads across a transmission line. When equal loads are connected with a quarter-wave separation, a pure phase shift results.

FIGURE 23.2.16 Switched-line phase bit.12

When digital phase shifters are used, a phase error occurs at every element due to phase quantization. The error in turn causes reduced gain, higher sidelobes, and greater pointing errors. Gain reduction is tabulated in Table 23.2.3 for typical quantizations. Figure 23.2.17 shows the rms sidelobe levels caused by phase quantization in an array of N elements. The rms pointing error relative to the 3-dB beamwidth is given by

σθ/θ3 ≈ 1.12/(2^m √N)

TABLE 23.2.3 Gain Loss in a Phased Array with m-Bit Digital Phase Shifters

Number of bits, m    Gain loss, dB
3                    0.228
4                    0.057
5                    0.0142
6                    0.00356
7                    0.00089
8                    0.00022

where m is the number of bits of phase quantization and N is the number of elements in the array.5

Frequency scan is a simple array-scanning technique that does not require phase shifters, drivers, or beam-steering computers. Element signals are coupled from points along a transmission line, as shown in Fig. 23.2.18. The electrical path length between elements is much longer than the physical separation, so that a small frequency change will cause a phase change between elements large enough to steer the beam. The technique can be applied only to one array coordinate, so that in two-dimensional arrays, phase shifters are usually required to scan the other coordinate.
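The quantization relations above can be exercised directly; the shifter resolution and array size in this sketch are assumptions for the example:

    import math

    def pointing_error_fraction(m_bits, n_elements):
        # rms pointing error / theta3 ~ 1.12 / (2^m * sqrt(N))
        return 1.12 / (2 ** m_bits * math.sqrt(n_elements))

    print(pointing_error_fraction(4, 1000))   # ~0.0022 of a beamwidth
    # For m = 4, Table 23.2.3 gives a gain loss of about 0.057 dB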

MICROWAVE COMPONENTS

The radar transmitter, antenna, and receiver are all connected through rf transmission lines to a duplexer. The duplexer acts as a switch connecting the transmitter to the antenna while radiating and the receiver to the antenna while listening for echoes. Filters, receiver protectors, and rotary joints may also be located in the various paths. See Section 7 for a description of microwave devices and transmission lines.

A variety of other transmission-line components are used in a typical radar. Waveguide bends, flexible waveguide, and rotary joints are generally necessary to route the path to the feed of a rotating antenna. Waveguide windows provide a boundary for pressurization while allowing the microwave energy to pass through. Directional couplers sample forward and reverse power for monitoring, test, and alignment of the radar system.

Duplexers

The duplexer acts as a switch connecting the antenna and transmitter during transmission and the antenna and receiver during reception. Various circuits are used that depend on gas tubes, ferrite circulators, or pin diodes as the basic switching element. The duplexers using gas tubes are most common. A typical gas-filled TR tube is shown in Fig. 23.2.19. Low-power rf signals pass through the tube with very little attenuation. Higher power causes the gas to ionize and present a short circuit to the rf energy. Figure 23.2.20 shows a balanced duplexer using hybrid junctions and TR tubes. When the transmitter is on, the TR tubes fire and reflect the rf power to the antenna port of the input hybrid. On reception, signals received by the antenna are passed through the TR tubes and to the receiver port of the output hybrid.

FIGURE 23.2.17 RMS side lobes due to phase quantization.11

FIGURE 23.2.18 Simple types of frequency-scanned antennas: (a) broad-wall coupling to dipole radiators; (b) narrow-wall coupling with slot radiators.14

FIGURE 23.2.19 Typical TR tube.3

Circulators and Diode Duplexers

Newer radars often use a ferrite circulator as the duplexer. A TR tube is required in the receiver line to protect the receiver from the transmitter power reflected by the antenna due to an imperfect match. A four-port circulator is generally used, with a load between the transmitter and receiver ports, so that the power reflected by the TR tube is properly terminated. In place of the TR tube, pin-diode switches have been used in duplexers. These are more easily applied in coaxial circuitry and at lower microwave frequencies. Multiple diodes are used when a single diode cannot withstand the required voltage or current.

Receiver Protectors. TR tubes with a lower power rating are usually required in the receive line to prevent transmitter leakage from damaging mixer diodes or rf amplifiers in the receiver. A keep-alive ensures rapid ionization, minimizing spike leakage. The keep-alive may be either a probe in the TR tube maintained at a high dc potential or a piece of radioactive material. Diode limiters are also used after TR tubes to further reduce the leakage.

FIGURE 23.2.20 Balanced duplexer using dual TR tubes and two short-slot hybrid junctions: (a) transmit condition; (b) receive condition.3

Christiansen_Sec_23.qxd

10/28/04

11:22 AM

Page 23.73

RADAR TECHNOLOGY RADAR TECHNOLOGY

23.73

FIGURE 23.2.21 Typical construction of a dissipative waveguide filter.15

Filters

Microwave filters are sometimes used in the transmit path to suppress spurious radiation or in the receiver signal path to suppress spurious interference. Because the transmit filters must handle high power, they are larger and more difficult to design. Narrow-band filters in the receive path, often called preselectors, are built using mechanically tuned cavity resonators or electrically tuned YIG resonators. Preselectors can provide up to 80 dB suppression of signals from other radar transmitters in the same rf band but at a different operating frequency.

Harmonic filters are the most common transmitting filter. They absorb the harmonic energy to prevent it from being radiated or reflected. Since the transmission path may present a high standing-wave ratio at the harmonic frequencies, the presence of harmonics can increase the voltage gradient in the transmission line and cause breakdown. Figure 23.2.21 shows a harmonic filter in which the harmonic energy is coupled out through holes in the walls of the waveguide to matched loads.

RADAR RECEIVERS

The radar receiver amplifies weak target returns so that they can be detected and displayed. The input amplifier must add little noise to the received signal, for this noise competes with the smallest target return that can be detected. A mixer in the receiver converts the received signal to an intermediate frequency, where filtering and signal decoding can be accomplished. Finally, the signals are detected for processing and display.

Low-Noise Amplifiers

Because long-range radars require large transmitters and antennas, these radars can also afford the expense of a low-noise receiver. Considerable effort has been expended to develop more sensitive receivers. Some of the devices in use are described here after a brief review of noise-figure and noise-temperature definitions.

FIGURE 23.2.22 Contribution to system noise temperature.

Noise figure and noise temperature measure the quality of a sensitive receiver. Noise figure is the older of the two conventions and is defined as

Fn = (S/N at input)/(S/N at output)

where S is the signal power, N is the noise power, and the receiver input termination is at room temperature. Before low-noise amplifiers were available, a radar's noise figure was determined by the first mixer and was typically 5 to 10 dB. For these values of Fn it was approximately correct to add the loss of the waveguide to the noise figure when calculating signal-to-noise ratio. As better receivers were developed with lower noise figures, these approximations were no longer accurate, and the noise-temperature convention was developed.

Noise temperature is proportional to noise-power spectral density through the relation

T = N/kB

where k is Boltzmann's constant and B is the bandwidth in which the noise power is measured. The noise temperature of an rf amplifier is defined as the noise temperature that must be added at the input of the amplifier to account for the increase in noise due to the amplifier. It is related to noise figure through the equation

T = T0(Fn − 1)

where T0 = standard room temperature = 290 K.

The receiver is only one of the noise sources in the radar system. Figure 23.2.22 shows the receiver in its relation to the other important noise sources. Losses, whether in the rf transmission line, the antenna, or the atmosphere, reduce the signal level and also generate thermal noise. The load presented to the rf transmission line by the antenna is its radiation resistance. The temperature of this resistance, Ta, depends on where the antenna beam is pointed. When the beam is pointed into space, this temperature may be as low as 50 K; when the beam is pointed toward the sun or a radio star, the temperature can be much higher. All these sources can be combined to find the system noise temperature Ts according to the equation

Ts = Ta + (Lr − 1)Ttr + TeLr

where Ta = temperature of the antenna
      Lr = transmission-line loss, defined as the ratio of power in to power out
      Ttr = temperature of the transmission line
      Te = receiver noise temperature

FIGURE 23.2.23 Noise characteristics of radar front ends.
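Numerically, the conversions above combine as follows; this is a minimal sketch with assumed values (antenna temperature, line loss, and noise figure are illustrative, not handbook data):

```python
# Sketch: receiver noise temperature from noise figure, and the system
# noise temperature Ts = Ta + (Lr - 1)*Ttr + Te*Lr. Values are assumed.
T0 = 290.0                                 # standard room temperature, K

def noise_temp(fn_db):
    """T = T0*(Fn - 1), with the noise figure given in dB."""
    return T0 * (10 ** (fn_db / 10) - 1)

Ta = 50.0                                  # antenna temperature, K (cold sky)
Lr = 10 ** (1.0 / 10)                      # 1-dB line loss as a power ratio
Ttr = 290.0                                # transmission-line temperature, K
Te = noise_temp(2.0)                       # 2-dB receiver noise figure

Ts = Ta + (Lr - 1) * Ttr + Te * Lr
print(f"Te = {Te:.0f} K, Ts = {Ts:.0f} K")   # Te ~ 170 K, Ts ~ 339 K
```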

Figure 23.2.23 shows the noise temperature as a function of frequency for radar-receiver front ends. All are noisier at higher frequencies. Transistor amplifiers and uncooled

TABLE 23.2.4 Approximations to Matched Filters

                                                    Optimum bandwidth-time product
Pulse shape        Filter                           6 dB    3 dB    Energy    Mismatch loss, dB
Gaussian           Gaussian bandpass                0.92    0.44    0.50      0
Gaussian           Rectangular bandpass             1.04    0.72    0.77      0.49
Rectangular        Gaussian bandpass                1.04    0.72    0.77      0.49
Rectangular        5 synchronously tuned stages     0.97    0.67    0.76      0.50
Rectangular        2 synchronously tuned stages     0.95    0.61    0.75      0.56
Rectangular        Single-pole filter               0.70    0.40    0.63      0.88
Rectangular        Rectangular bandpass             1.37    1.37    1.37      0.85
Rectangular chirp  Gaussian                         1.04 × 6-dB width of equivalent (sin x)/x pulse (0.86 × width of spectrum)    0.6

Source: Taylor and Mattern, Ref. 16.

parametric amplifiers are finding increased use in radar receivers. Transistor amplifiers have been improved steadily, with emphasis on increased operating frequency. Although the transistor amplifier is a much simpler circuit than the parametric amplifier, it does not achieve the parametric amplifier's low noise temperature.

A balanced mixer is often used to convert from rf to i.f. Balanced operation affords about 20 dB immunity to amplitude noise on the local-oscillator signal. Intermediate frequencies of 30 and 60 MHz are typical, as are 1.5- to 2-dB noise figures for the i.f. preamplifier. Double conversion is sometimes used, with a first i.f. at a few hundred megahertz; this gives better image and spurious suppression.

The receiving filter is usually instrumented at the second i.f. This filter is only an approximation to the matched filter and therefore does not achieve the highest possible signal-to-noise ratio; the deficiency is expressed as mismatch loss. Table 23.2.4 lists the mismatch loss for various signal-filter combinations when the optimum bandwidth is used.

Pulse Compression

Pulse compression is a technique in which a rectangular pulse containing phase modulation is transmitted. When the echo is received, the matched-filter output is a pulse of much shorter duration, approximately equal to the reciprocal of the bandwidth of the phase modulation. The compression ratio (ratio of transmitted to compressed pulse lengths) equals the product of the time duration and bandwidth of the transmitted pulse. The technique is used when greater pulse energy or range resolution is required than can be achieved with a simple uncoded pulse.

Linear FM (chirp) is the phase modulation that has received the widest application. The carrier frequency is swept linearly during the transmitted pulse. The wide application has both caused and resulted from the development of a variety of dispersive analog delay lines; delay-line techniques covering a wide range of bandwidths and time durations are available. Table 23.2.5 lists the characteristics of a number of these dispersive delay lines.

Range lobes are a property of pulse-compression systems not found in radars using simple cw pulses. These are responses leading and trailing the principal response and resembling antenna side lobes; hence the name range lobes. They can be reduced by carefully designing the phase modulation or by slightly mismatching the compression network; the mismatch can be described as a weighting function applied to the spectrum.
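The time-bandwidth arithmetic can be checked with a short simulation; this is an illustrative sketch (sample rate, pulse length, and bandwidth are assumed, not a handbook design):

```python
# Sketch: linear-FM (chirp) pulse compression with a matched filter.
# The compressed pulse width is roughly 1/B, so the compression ratio
# approaches the time-bandwidth product BT.
import numpy as np

fs = 200e6                  # sample rate, Hz
T = 10e-6                   # transmitted pulse length, s
B = 20e6                    # swept bandwidth, Hz  (BT = 200)

t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)        # complex baseband LFM

# Matched filter: convolve with the conjugated, time-reversed replica
out = np.abs(np.convolve(chirp, np.conj(chirp[::-1])))
out /= out.max()

main = np.where(out > 10 ** (-3 / 20))[0]          # -3-dB main-lobe extent
width = (main[-1] - main[0]) / fs
print(f"compressed width ~ {width*1e9:.0f} ns (1/B = {1e9/B:.0f} ns)")
```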

TABLE 23.2.5 Characteristics of Passive Linear-fm Devices

Device                                 B, MHz   T, µs   BT     f0, MHz   Typical loss, dB   Typical spurious, dB
Aluminum strip delay line              1        500     200    5         15                 −60
Steel strip delay line                 20       350     500    45        70                 −55
All-pass network                       40       1000    300    25        25                 −40
Perpendicular diffraction delay line   40       75      1000   100       30                 −45
Surface-wave delay line                40       50      1000   100       20                 −50
Wedge-type delay line                  250      65      1000   500       50                 −50
Folded-type meander line               1000     1.5     1000   2000      25                 −40
Waveguide operated near cutoff         1000     3       1000   5000      60                 −25
YIG crystal                            1000     10      2000   2000      70                 −20

Source: Farnett et al., Ref. 17.

Detectors

Although bandpass signals on an i.f. carrier are easily amplified and filtered, they must be detected before they can be displayed, recorded, or processed. When only the signal amplitude is desired, a square-law characteristic may be obtained with a semiconductor diode detector; this provides the best sensitivity for detecting pulses in noise when integrating the signals returned from a fluctuating target. Larger i.f. signal amplitudes drive

the diode detector into the linear range, providing a linear detector. The linear detector has a greater dynamic range with somewhat less sensitivity. When still greater dynamic range (up to 80 dB) is required, log detectors are often used. Figure 23.2.24 shows the functional diagram of a log detector: the detected outputs of cascaded amplifiers are summed, and as the signal level increases, stages saturate, reducing the rate of increase of the output voltage.

Some signal-processing techniques require detecting both phase and amplitude to obtain the complete information available in the i.f. signal. The phase detector requires an i.f. reference signal. A phase detector can be constructed by passing the signal through an amplitude limiter and then to a product detector, where it is combined with the reference signal, as shown in Fig. 23.2.25.

An alternative to detecting the amplitude and phase is to detect the in-phase and quadrature components of the i.f. signal. The product detector shown in Fig. 23.2.25 can also provide this function when the input signal is not amplitude-limited. Quadrature detector circuits differ only in that the reference signal is shifted by 90° in one detector relative to the other.

Analog-to-Digital Converters

Digital signal processors require that the detected i.f. signals be encoded by an analog-to-digital converter. A typical converter may sample the detected signal at a 1-MHz rate and encode the sampled value into a 12-bit

FIGURE 23.2.24 Logarithmic detector.16

FIGURE 23.2.25 Balanced-diode detector.16

binary word. Encoders operating at higher rates have been built, but with fewer bits in their output. Encoders typically have errors about equal to the least significant bit.
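For a sense of scale, a minimal sketch (the 1-V full-scale input is an assumption):

```python
# Sketch: granularity of a 12-bit encoder, consistent with errors of about
# one least-significant bit (LSB).
bits = 12
full_scale = 1.0                         # volts, assumed
lsb = full_scale / 2**bits
print(f"LSB = {lsb*1e6:.0f} uV")         # ~244 uV
# Ideal quantization-limited dynamic range (common rule of thumb):
print(f"~{6.02*bits + 1.76:.1f} dB")     # ~74 dB for 12 bits
```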

EXCITERS AND SYNCHRONIZERS

Exciters

Two necessary parts of any radar system are an exciter to generate rf and local-oscillator frequencies and a synchronizer to generate the necessary triggers and timing pulses. The components used in exciters are oscillators, frequency multipliers, and mixers. These can be arranged in various ways to provide the cw signals needed in the radar; the signals required depend on whether the transmitter is an oscillator or a power amplifier.

Transmitters using power oscillators such as magnetrons determine the rf frequency by the magnetron tuning. In a noncoherent radar, the only other frequency required is that of the local oscillator. It differs from the magnetron frequency by the i.f. frequency, and this difference is usually maintained with an automatic frequency control (AFC) loop. Figure 23.2.26 shows the circuit of a simple magnetron radar, illustrating the two alternative methods of tuning the magnetron to follow the stable local oscillator (stalo) or tuning the stalo to follow the magnetron.

If the radar must use coherent detection (as in MTI or pulse doppler applications), a second oscillator, called a coherent oscillator (coho), is required. This operates at the i.f. frequency and provides the reference for the product detector. Because an oscillator transmitter starts with random phase on every pulse, it is necessary to quench the coho and lock its phase with that of the transmitter on each pulse. This is accomplished by the circuit shown in Fig. 23.2.27.

FIGURE 23.2.26 Alternative methods for AFC control.16

FIGURE 23.2.27 Keyed coho.16

When an amplifier transmitter is used, coho locking is not required. The transmit frequency can be obtained by mixing the stalo and coho frequencies, as shown in Fig. 23.2.28. The stalo and coho are not always oscillators operating at their output frequency. Figure 23.2.29 shows an exciter using crystal oscillators and multipliers to produce the rf and local-oscillator frequencies. Crystals may be changed to select the rf frequency without changing the i.f. frequencies.

The stability required of the frequencies produced by the exciter depends on the radar application. In a simple noncoherent radar, a stalo frequency error shifts the signal spectrum in the i.f. passband, and an error that is a fraction of the i.f. bandwidth can be allowed. In MTI or pulse doppler radars, phase changes from pulse to pulse must be less than a few degrees. This requirement can be met with crystal oscillators driving frequency multipliers, or with fundamental oscillators using high-Q cavities when sufficiently isolated from electrical and mechanical perturbation. Instability is often expressed in terms of the phase spectrum about the center frequency.

Crystal oscillators driving frequency multipliers are finding increased use as stalos. A typical multiplier might multiply a 90-MHz crystal oscillator frequency by 32 to obtain an S-band signal. This source has the long-term stability of the crystal oscillator but degraded short-term stability, because the multiplier increases the phase modulation on the oscillator signal in proportion to the multiplication factor; i.e., each doubler stage raises the oscillator sidebands by 6 dB. Frequency may be varied by tuning the crystal oscillator (about 0.25 percent) or by changing crystals.
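The 6-dB-per-doubler bookkeeping works out as follows; a minimal sketch using the ×32 example above:

```python
# Sketch: sideband growth through a x32 multiplier chain (five doublers).
# Phase-modulation sidebands grow as 20*log10(n) dB for multiplication n.
import math

f_xo = 90e6                    # crystal oscillator frequency, Hz
n = 32                         # multiplication factor
print(f"output: {f_xo*n/1e9:.2f} GHz")                  # 2.88 GHz (S band)
print(f"sideband increase: {20*math.log10(n):.1f} dB")  # ~30 dB
print(f"per doubler: {20*math.log10(2):.1f} dB")        # ~6 dB
```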

Synchronizers

The synchronizer delivers timing pulses to the various radar subsystems. In a simple marine radar this may consist of a single multivibrator that triggers the transmitter, while in a larger radar 20 to 30 timing pulses may

FIGURE 23.2.28 Coherent radar.16

FIGURE 23.2.29 Coherent-radar exciter.

be needed. These may turn on and off the beam current in various transmitter stages; start and stop the rf pulse; time attenuators; start display sweeps; and so on. Timing pulses or triggers are often generated by delaying a pretrigger, with delays that may be either analog or digital. New radars are tending toward digital techniques, with the synchronizer incorporated into a digital signal processor. A diagram of the delay structure in a digital synchronizer is shown in Fig. 23.2.30. A 10-MHz clock moves the initial timing pulse through shift registers, and the number of stages in each register is determined by the delay required. Additional analog delays provide a fine delay adjustment to any point in the 100-ns interval between clock pulses.

The synchronizer will also contain a range encoder in radars where accurate range tracking is required or where range data will be processed or transmitted in digital form. Range is usually quantized by counting

FIGURE 23.2.30 Digital synchronizer.

cycles of a clock, starting with a transmitted pulse and stopping with the received echo. Where high precision is required, the fine range bits are obtained by interpolating between cycles of the clock, using a tapped delay line with coincidence detectors on the taps.
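Each counted clock cycle corresponds to a fixed range increment. A minimal sketch, assuming the 10-MHz clock mentioned above (the echo-count value is invented):

```python
# Sketch: range quantization by clock counting. Each cycle of the count
# clock represents c*t/2 of range because of the two-way signal travel.
c = 3e8                        # speed of light, m/s
f_clock = 10e6                 # assumed 10-MHz range-count clock
bin_m = c / (2 * f_clock)      # 15 m of range per counted cycle
print(f"range per count: {bin_m:.0f} m")

counts = 667                   # cycles counted between transmission and echo
print(f"target range ~ {counts * bin_m / 1e3:.0f} km")    # ~10 km
```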

SIGNAL PROCESSING

The term signal processing describes those circuits in the signal path between the receiver and the display. The extent to which processing is done in this portion of the signal path depends on the radar application. In search radars, postdetection integration, clutter rejection, and sometimes pulse compression are instrumented in the signal processor. The trend in modern radar has been to use digital techniques to perform these functions, although many analog devices are still in use. The following paragraphs outline the current technology trends in postdetection integration, clutter rejection, and digital pulse compression.

Postdetection Integration

Scanning-search radars transmit a number of pulses toward a target as the beam scans past. For best detectability, these returns must be combined before the detection decision is made. In many search radars the returns are displayed on a plan-position indicator (PPI), where the operator, by recognizing target patterns, performs the postdetection integration. When automatic detectors are used, the returns must be combined electrically. Many circuits have been used, but the two most common are the video sweep integrator and the binary integrator.

The simplest video integrator uses a single delay line long enough to store all the returns from a single pulse. When the returns from the next pulse are received, they are added to the attenuated delay-line output. Figure 23.2.31 shows two forms of this circuit. The second (Fig. 23.2.31b) is preferred because the gain factor K is less critical to adjust. The circuit weights past returns with an exponentially decreasing amplitude, where the time constant is determined by K. For optimum enhancement

K = 1 − 1.56/N

FIGURE 23.2.31 Two forms of sweep integrator.18

where N is the number of hits per one-way half-power beamwidth. By limiting the video amplitude into the integrator, single-pulse interference is eliminated. The delay may be analog or digital, but the trend is to digital because the gain of the digital loop does not drift.

The binary integrator, or double-threshold detector, is another type of integrator used in automatic detectors. With this integrator, the return in each range cell is encoded to 1 bit, and the last N samples are stored for each range cell. If M or more of the N stored samples are 1s, a detection is indicated. Figure 23.2.32 shows a functional diagram of a binary integrator. This integrator is also highly immune to single-pulse interference.
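A minimal digital sketch of the feedback form of the sweep integrator (Fig. 23.2.31b), with an assumed number of hits per beamwidth:

```python
# Sketch: recursive sweep integrator with feedback gain K = 1 - 1.56/N.
# The delay line stores one full sweep (one value per range cell).
import numpy as np

N = 16                          # assumed hits per one-way half-power beamwidth
K = 1 - 1.56 / N                # optimum feedback gain (~0.90 here)

cells = 1000
store = np.zeros(cells)         # one-sweep delay-line contents

rng = np.random.default_rng(0)
for _ in range(3 * N):          # several beamwidths of sweeps
    sweep = rng.rayleigh(1.0, cells)   # noise video
    sweep[500] += 2.0                  # weak, steady target in cell 500
    store = sweep + K * store          # add new sweep to attenuated output

print(f"target cell: {store[500]:.1f}  average cell: {store.mean():.1f}")
```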

Clutter Rejection

The returns from land, sea, and weather are regarded as clutter in an air-search radar. They can be suppressed in the signal processor when their spectrum is narrow compared with the radar's pulse repetition frequency (prf). Filters that combine two or more returns from a single range cell are able to discriminate between the desired targets and clutter. This allows the radar to detect targets with cross section smaller than that of the clutter, and it also provides a means of preventing the clutter from causing false alarms. The two classes of clutter filters are moving target indicator (MTI) and pulse doppler.

FIGURE 23.2.32 Binary integrator.3

MTI combines a few pulse returns, usually two or three, in a way that causes the clutter returns to cancel. Figure 23.2.33 shows a functional diagram of a digital vector canceler. The in-phase and quadrature components of the i.f. signal vector are detected and encoded. Stationary returns in each signal component are canceled before the components are rectified and combined. The digital canceler may consist of a shift-register memory and a subtractor to take the difference of succeeding returns. Often only one component of the vector canceler is instrumented, saving about half the hardware but at the expense of signal detectability in noise.

A pulse doppler processor is another class of clutter filter, in which the returns in each range resolution cell are gated and put into a bank of doppler filters. The number of filters in the bank approximately equals the number of pulse returns combined. Each filter is tuned to a different frequency, with the passbands contiguously positioned between zero frequency and the prf. Figure 23.2.34 shows a functional diagram of a pulse doppler processor. The pulse doppler technique is most often used in airborne or land-based target-tracking radars, where a high prf (ambiguous in range) can be used, thus providing an unambiguous range of doppler frequencies. The filter bank may be instrumented digitally by a special-purpose computer wired according to the fast Fourier transform algorithm.

Digital Pulse Compression

Digital pulse compression performs the same function as the analog pulse-compression devices described earlier, except that it is instrumented using digital technology. IF samples of a phase-coded echo are processed to produce samples of a compressed pulse of much shorter duration. Now that digital technology is comparable in cost with analog dispersive delay lines, its freedom from temperature variation causes it to be preferred for new designs.

FIGURE 23.2.33 Digital vector canceler.18

FIGURE 23.2.34 Typical pulse doppler processor.19

A typical functional implementation is illustrated in Fig. 23.2.35. Quadrature samples of the IF signal are digitized and stored for an interval at least as long as the transmitted pulse. The stored samples are then correlated with a set of complex weights that represent the time-inverted transmit waveform. The output data stream represents time samples of the compressed IF pulse.

A second important advantage of digital pulse compression is the increased dynamic range compared with analog compression techniques. Since the analog-to-digital converter is usually the limitation on dynamic range, the gain in peak signal provided by pulse compression adds to the dynamic range available to the subsequent digital signal processing.
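A minimal sketch of the stored-replica correlation; the 13-element Barker phase code is an assumed stand-in for the transmit waveform:

```python
# Sketch: digital pulse compression by correlating stored I/Q samples of the
# receive waveform with the conjugated, time-reversed transmit replica.
import numpy as np

replica = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=complex)

rng = np.random.default_rng(1)
rx = rng.normal(0, 0.3, 128) + 1j * rng.normal(0, 0.3, 128)   # noise-only I/Q
rx[40:40 + replica.size] += replica                           # echo at sample 40

compressed = np.convolve(rx, np.conj(replica[::-1]), mode="valid")
print("compressed peak at sample", int(np.argmax(np.abs(compressed))))  # 40
```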

DISPLAYS

Radar indicators are the coupling link between radar information and a human operator. Radar information is generally a time-dependent function of range or distance; thus the display format generally uses a displacement of the signal proportional to time or range. The signal may be displayed as an orthogonal displacement (as on an oscilloscope) or as an intensity modulation, or brightening, of the display. Most radar signal presentations use the intensity-modulated type of display, where additional information such as azimuth, elevation angle, or height can be presented. A common format is a polar-coordinate, or plan-position, indicator (PPI), which results in a maplike display. Radar azimuth is presented on the PPI as an angle, and range as radial distance. In a cartesian-coordinate display, one coordinate may represent range and the other may represent azimuth, elevation, or height.

FIGURE 23.2.35 Digital pulse compressor that correlates samples of the receive waveform with a stored replica.

Variations may use one coordinate for azimuth and the other for elevation and gate a selected time period or range to the display. The increasing use of processed radar data can provide further variations. In each case, the display technique is optimized to improve the information transfer to the operator. Marker signals may be inserted on the displays as operator aids. These can include fixed and variable range marks, strobes, and cursors as constant-angle or constant-elevation traces. Alphanumeric data, tags, or symbols may be incorporated for operator information or designation of data.

Cathode-Ray Tubes

The cathode-ray tube is the most common display device used for radar indicators. It is used most because of its flexibility of performance, resolution, dynamic range, and simplicity of hardware relative to other display techniques. Also, the cathode-ray tube has varied forms, and its parameters can be optimized for specific display requirements (see Secs. 5 and 9).

Cathode-ray tubes using charge-storage surfaces are used for specialized displays (see Sec. 5). The direct-view storage tube is a high-brightness display tube. Other charge-storage tubes use electrical write-in and readout. Such tubes may be used for signal integration, for scan conversion so as to provide simpler multiple displays, for combining multiple sensors on a single display, for increasing viewing brightness on an output display, or for a combination of these functions.

REFERENCES

1. Skolnik, M. I. (ed.): "Radar Handbook," McGraw-Hill, 1970.
1a. Skolnik, M. I. (ed.): "Radar Handbook," 2nd ed., McGraw-Hill, 1990.
2. Weil, T. A.: Transmitters, Chap. 7 in Ref. 1.
2a. Borkowski, M. T.: Solid-State Transmitters, Chap. 5 in Ref. 1a.
3. Skolnik, M. I.: "Introduction to Radar Systems," McGraw-Hill, 1980.
4. Sherman, J. W.: Aperture-Antenna Analysis, Chap. 9 in Ref. 1.
5. Barton, D. K., and H. R. Ward: "Handbook of Radar Measurement," Prentice Hall, 1969.
6. Ashley, A., and J. S. Perry: Beacons, Chap. 38 in Ref. 1.
7. Croney, J.: Civil Marine Radar, Chap. 31 in Ref. 1.
8. Freedman, J.: Radar, Chap. 14 in "System Engineering Handbook," McGraw-Hill, 1965.
9. Sengupta, D. L., and R. E. Hiatt: Reflectors and Lenses, Chap. 10 in Ref. 1.
10. Dunn, J. H., D. D. Howard, and K. B. Pendleton: Tracking Radar, Chap. 21 in Ref. 1.
11. Cheston, T. C., and J. Frank: Array Antennas, Chap. 11 in Ref. 1.
12. Stark, L., R. W. Burns, and W. P. Clark: Phase Shifters for Arrays, Chap. 12 in Ref. 1.
13. Kefalas, G. P., and J. C. Wiltse: Transmission Lines, Components, and Devices, Chap. 8 in Ref. 1.
14. Hammer, I. W.: Frequency-Scanned Arrays, Chap. 13 in Ref. 1.
15. Matthaei, G. L., L. Young, and E. M. T. Jones: "Microwave Filters, Impedance-Matching Networks, and Coupling Structures," McGraw-Hill, 1964.
16. Taylor, J. W., and J. Mattern: Receivers, Chap. 5 in Ref. 1.
17. Farnett, E. C., T. B. Howard, and G. H. Stevens: Pulse-Compression Radar, Chap. 20 in Ref. 1.
18. Shrader, W. W.: MTI Radar, Chap. 17 in Ref. 1.
19. Mooney, D. H., and W. A. Skillman: Pulse-Doppler Radar, Chap. 19 in Ref. 1.
20. Nathanson, F.: "Radar Design Principles: Signal Processing and the Environment," McGraw-Hill, 1969.
21. Berg, A. A.: Radar Indicators and Displays, Chap. 6 in Ref. 1.
21a. Brookner, E.: "Radar Technology," Artech House, 1986.
21b. DiFranco, J. V., and W. L. Rubin: "Radar Detection," Artech House, 1986.
21c. Currie, N. C., and C. E. Brown (eds.): "Principles and Applications of Millimeter-Wave Radar," Artech House, 1987.


CHAPTER 23.3

ELECTRONIC NAVIGATION SYSTEMS

Walter R. Fried

Some form of electronic navigation system is used in virtually all types of vehicles, including commercial and military aircraft, ships, land vehicles, and space vehicles. In recent years, such systems have also found application in automobiles and trucks and with individual personnel, both civil and military. Electronic navigation and positioning systems are also used for surveying and for mineral, oil, and other resource applications.

Typical parameters measured by electronic navigation systems are the distance and bearing from a vehicle to a known point or the present position of a vehicle in a particular coordinate system. From the knowledge of present position, the course and distance to a destination can then be computed. Most modern electronic navigation systems are based on the use of electromagnetic (radio) waves. The primary exceptions are systems using gyroscopes and accelerometers, those using optical celestial observations, and those using pressure transducers. The use of radio waves has been found attractive because of their nearly constant and known speed of propagation, namely, the speed of light, which is about 3 × 10^8 m/s. With the knowledge of that velocity of propagation, if the time of travel of the radio signal between two points is accurately measured, the distance (range) between the points can be accurately determined. This is expressed by the equation d = ct, where d is the distance between the points, c is the speed of light, and t is the time of travel of the signal between the points. Measurement of the phase of the signal can also be used for the determination of distance between the points, as well as relative bearing. In addition, the capability of high-frequency electromagnetic systems to generate narrow radiation beams can be useful for the measurement of the relative bearing from a vehicle to another vehicle or from a known point to a vehicle.

TYPES OF SYSTEMS

Electronic navigation systems can be classified in a number of ways, both from an electronics viewpoint and from a navigational viewpoint. From an electronics viewpoint, they can be categorized as cooperative or self-contained. The cooperative systems, in turn, are divided into point source systems and multiple source systems. Finally, the multiple source systems can be categorized as hyperbolic systems, pseudoranging systems, one-way synchronous (direct) ranging systems, and two-way (round-trip) ranging systems.

From a navigational viewpoint, systems are frequently classified as positioning systems and dead-reckoning systems. Most positioning systems are cooperative systems, while most dead-reckoning systems are self-contained. In dead-reckoning systems, the initial position of the vehicle must be known, and the system determines the distance and direction traveled from the departure point by continuous mathematical integration of velocity or acceleration. In this handbook, the system technologies are described primarily from the electronics viewpoint. In many modern electronic vehicle navigation systems, the data from cooperative and self-contained sensors are combined, typically using Kalman filters, in order to obtain a more accurate solution for position. Such systems are called multisensor or hybrid systems.


Cooperative Systems

The two general categories of cooperative electronic navigation systems are point source systems and multiple source systems. The point source systems typically determine the vehicle's present position by measuring the distance and bearing (azimuth) to a known source. They may determine only distance or only bearing, if these are the only desired parameters. Direction finders are examples of a bearing-only measurement. Multiple source systems determine vehicle position and, in some systems, also vehicle velocity, either in an absolute or a relative coordinate system. This may be accomplished by multiple ranging (multilateration), differential ranging, multiple angle measurements, multiple pseudorange measurements, or a combination of these. These methods are also categorized as (1) range and bearing (rho-theta), (2) true ranges (rho-rho or rho-rho-rho), (3) angle-angle (theta-theta), (4) differential ranging, and (5) pseudoranging. The first results in a combination of circular and radial lines of position (LOPs), the second in circular LOPs, the third in radial LOPs, and the fourth in hyperbolic LOPs. The fifth method exhibits spherical LOPs, but they do not cross at the correct position unless the user's time offset is determined; therefore, its geometric (GDOP) behavior is not exactly equivalent to that of a rho-rho type system discussed later.

Point Source Radio Systems. Perhaps the earliest form of a point source radio system is the direction finder. Its principle of operation is based on the use of a single transmitter source whose signal is received at two known points or elements. The direction from the vehicle to the source is determined by measurement of the differential phase of the signals at the two points or elements. For operational convenience (notably size), it is frequently desirable to have the two receiving points close together and to use common circuitry at both measuring points. A loop antenna fulfills both of these requirements. Typically, a square loop is physically rotated until the currents in the two vertical arms of the loop are equal in amplitude and phase, so that the output of the receiver is zero. The transmitter source is then located 90° from the plane of the loop. In simple loops there would be a 180° ambiguity, but this is resolved by temporarily connecting an omnidirectional antenna to the receiver input. Such direction finders are used for backup navigation and emergency homing to a station. Only a single transmitter source (beacon) and a simple rotating antenna and receiver on the vehicle are needed for operation. Lateral navigational error decreases as the transmitter source is approached, a property common to all point source angle-measuring systems.

Another class of point source angle-measuring systems is based on the use of scanning antenna beams at the transmitter source and reception of the transmitted signal by the user vehicle receiver. For example, if a ground transmitter generates a rotating cardioid amplitude pattern at a fixed rate plus an omnidirectional reference signal, the user receiver can measure the relative phase difference between these two signals and can thereby determine the bearing to the transmitter source. The operation of the VHF omnidirectional range (VOR), which is used worldwide for short-distance navigation, is based on this principle.

The most common type of point-source system for range determination is based on the two-way (round-trip) ranging principle, which is illustrated in Fig. 23.3.1. The interrogator, which may be located on the navigating vehicle or at a reference site, transmits a signal, typically a pulse (or pair of pulses), at a known time, the transmission time being stored in the equipment. The signal is received at a transponder and, after a fixed known delay, is retransmitted toward the interrogator, where it is received and input to a ranging circuit. The latter measures the time difference between the original transmission time and the time of reception (less the known fixed delay), which is a direct measure of the two-way distance when multiplied by the speed of light. An important advantage of this technique is that the signal returns to the point of initial generation for the time-difference measurement process. Therefore, the interrogator's clock oscillator does not need to be highly stable, since the resulting time error due to any clock instability is only a function of the round-trip time multiplied by the clock drift, the round-trip time being very short, inasmuch as the signal travels at the speed of light. If the transponder in Fig. 23.3.1 is replaced by a passive reflector, for example an aircraft, the principle of operation illustrated is that used in primary surveillance radars, such as those used for air traffic control, as well as in military ground-based and airborne radars.

A second fundamental technique of range determination is a one-way (versus two-way) signal delay (time of arrival) measurement between a transmitter source at a known location and a user receiver (Fig. 23.3.2). In this case, an accurate measurement is possible only if the transmitter oscillator (clock) and the user receiver oscillator (clock) are precisely time synchronized. If such time synchronization is not maintained, true range cannot be determined, since the exact time of transmission is not known with respect to the user's clock time.
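As a numeric illustration of the round-trip principle, a minimal sketch with assumed values (not from the handbook):

```python
# Sketch: two-way (round-trip) ranging per Fig. 23.3.1, including the
# clock-stability argument made above. All values are assumed.
c = 3e8                                   # speed of light, m/s

t_round = 200e-6                          # measured round-trip time, s
t_xpdr = 50e-6                            # known fixed transponder delay, s
d = c * (t_round - t_xpdr) / 2            # one-way distance
print(f"range = {d/1e3:.1f} km")          # 22.5 km

# Even a 1-ppm interrogator clock drift matters little over so short a time:
range_err = c * (1e-6 * t_round) / 2
print(f"range error from clock drift ~ {range_err:.2f} m")   # ~0.03 m
```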


FIGURE 23.3.1 Two-way (round-trip) ranging.

Such precise time synchronization between two individual pieces of equipment is frequently not possible at reasonable equipment cost. Therefore, no practical point-source distance-measurement system based on this technique has been developed to date; however, several modern multiple source systems (e.g., PLRS, JTIDS-RelNav, and some hyperbolic systems) have modes that use one-way synchronous ranging after independent time synchronization has first been accomplished.

Multiple Source Radio Systems. Many systems containing multiple transmitter sources have been developed for the determination of vehicle position. Normally, the user vehicle equipment includes a receiver or a receiver/transmitter. There are five categories of such systems (with some implementations using combinations of these), namely, (1) hyperbolic systems, (2) inverse hyperbolic systems, (3) pseudoranging systems, (4) one-way synchronous ranging (direct ranging) systems, and (5) two-way (round-trip) ranging systems.

The hyperbolic systems were the first to be developed and are currently in widespread use. The principle of operation is based on the use of three or more transmitter sources, which continuously or periodically transmit time-synchronized signals. These may be continuous-wave or pulsed signals. The minimum size chain, a triad,

FIGURE 23.3.2 One-way synchronous ranging.


FIGURE 23.3.3 Hyperbolic Navigation System; geometric effects.

usually consists of one master and two secondary (slave) stations (Fig. 23.3.3). The user receiver measures the time differences of arrival (TDs or TDOAs) of signals from pairs of stations. Loci of constant time difference of arrival, or (equivalently) constant difference in distance, from two stations form hyperbolic lines of position (LOPs). The point where two lines of position cross is the position of the user vehicle (Fig. 23.3.3). One major advantage of this technique is that the user needs only a receiver, and the receiver does not need a high-quality time-synchronized clock oscillator, since only time differences are used. Theoretically, three pairs of sources are needed for a completely unique horizontal position fix, but in practice two pairs will suffice. The "differences in distance" can be measured either in terms of differences in times of arrival (for example, of pulses), differences in electrical carrier phase, or both. As depicted in Fig. 23.3.3, achievable accuracy is very much a function of the relative geometry between the sources and the user, i.e., the crossing angles of the LOPs. The smaller the crossing angle, the larger the position error. This accuracy degradation is called geometric dilution of precision (GDOP). The GDOP is essentially a multiplier on the basic time-difference (LOP) measurement error.

A related concept, used primarily for external position location, could be called an inverse hyperbolic system. In this system the vehicle carries a transmitter, which periodically transmits a signal to several receiving stations whose positions are precisely known. The time differences of arrival (TDOAs) of the signal from the vehicle at pairs of stations are measured, and the loci of constant difference in time of arrival form hyperbolic lines of position. The point where two such lines of position cross is the position of the vehicle. This technique has been applied to automatic vehicle location (AVL) systems.

The third multiple source radio system concept, which has recently become very important, is called pseudoranging. In this concept, several transmitter sources, whose positions are made known to the user, transmit highly time-synchronized signals on established system time epochs (Fig. 23.3.4). With these time epochs known to the user, the user measures the time of arrival (TOA) of each signal with respect to its own clock, which normally has some time offset from system time. The resulting range measurement (obtained by multiplying by the speed of light) is called a pseudorange (PR), since it differs from the true range as a result of the user's time offset. From several successive or simultaneous TOA (pseudorange) measurements from four (or more) sources, the user receiver then calculates the three-dimensional position coordinates and its own time offset (from system time). This is accomplished by solving four simultaneous quadratic equations, involving the three known position coordinates of the sources and the four unknowns, namely, the three user position coordinates and the user's time offset.


FIGURE 23.3.4 Pseudoranging system.

The basic solution equations for the position of a user in a pseudoranging multiple source system are given by

PRi = c·TOAi = [(xu − xi)^2 + (yu − yi)^2 + (zu − zi)^2]^1/2 − c·∆Tu      (81)

where PRi = pseudorange from the user to source i
      c = speed of light
      TOAi = time of arrival of the signal from the ith source
      xu, yu, zu = unknown three-dimensional components of the user position
      xi, yi, zi = known three-dimensional components of source i's position
      ∆Tu = user's clock oscillator time offset
      i = 1, 2, 3, 4

With xi, yi, zi known and transmitted to the user, and TOAs measured from four sources, the user position coordinates xu, yu, zu and the user's time offset ∆Tu can be determined by solving these four equations with four unknowns. In some implementations, more than four sources are used, and the solution is then overdetermined. Thus, this system not only accurately determines the user's three-dimensional position but also system time, which can be related (in a message from the sources) to universal time coordinated (UTC). The operation of the satellite-based U.S. Global Positioning System (GPS) and the Russian GLONASS is based on this technique. Also, certain modes of some terrestrial systems, e.g., JTIDS-RelNav, use this technique.

In addition, by properly combining the known velocity of the sources (from transmitted data) and the measured Doppler shift of the signals received from the sources, the three user velocity coordinates and the user clock frequency offset (drift) can also be determined. Specifically, if the set of Eqs. (81) is differentiated, the basic solution equations for the determination of the user velocity in a pseudoranging multisource system can be obtained. In many practical implementations, the delta pseudorange rate or integrated Doppler measurement is used via a Doppler count over a short time interval. Since Eq. (81) is nonlinear, the solution is normally obtained via linearization about an assumed position and time. In order to provide high-accuracy TOA measurement capability, these pseudoranging navigation systems use wide-bandwidth, spread-spectrum modulation methods, notably direct sequence spreading. Propagation delay corrections are made as required.

The fourth multiple source radio system concept is based on one-way synchronous ranging or direct ranging (see Fig. 23.3.2 and the earlier discussion). The concept is used in some military systems (e.g., JTIDS-RelNav and PLRS) and in a second mode of some hyperbolic systems, e.g., Loran-C and Omega. When applied to the latter, the concept is called direct ranging. Two implementations are possible, namely, the rho-rho method and the rho-rho-rho method. The rho-rho method requires only two source transmitters, but it also requires a highly stable user receiver oscillator (clock) and precise knowledge of the time of


transmission from the source station. A direct range is then developed to each station. The disadvantage of the rho-rho method is that a very high stability oscillator is required. The rho-rho-rho scheme is an extension of the rho-rho scheme, requiring the use of at least three stations. Using three LOPs permits clock oscillator self-calibration, somewhat similar to the previously discussed pseudoranging concept, and therefore leads to a less stringent clock oscillator requirement. The LOPs of both schemes are circles rather than hyperbolas, with the intersection of the circles representing the user position, thereby leading to more favorable geometry conditions.

The fifth multiple source system for position determination is based on multiple two-way (round-trip) true range measurements. It is therefore a direct extension of the concept depicted in Fig. 23.3.1. To obtain a completely unambiguous horizontal position fix, three two-way ranges are required; however, in most practical cases, two are sufficient. Since the LOPs are circular, the geometric accuracy behavior is generally more favorable than for hyperbolic systems.
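Since Eq. (81) is nonlinear, practical receivers linearize it about an estimated position, as noted above. The following is a minimal Gauss-Newton least-squares sketch; source geometry, user position, and clock offset are all invented for illustration:

```python
# Sketch: iterative least-squares solution of the pseudorange equations (81)
# for four sources. The state vector is [xu, yu, zu, c*dTu].
import numpy as np

c = 3e8
src = np.array([[20e6, 0, 0], [0, 20e6, 0],
                [0, 0, 20e6], [12e6, 12e6, 12e6]], float)   # xi, yi, zi (m)
truth = np.array([1e6, 2e6, 3e6])                           # user position (m)
dT = 1e-4                                                   # clock offset (s)

pr = np.linalg.norm(src - truth, axis=1) - c * dT           # Eq. (81) sign

x = np.zeros(4)                                # initial guess at the origin
for _ in range(10):
    rho = np.linalg.norm(src - x[:3], axis=1)
    H = np.hstack([(x[:3] - src) / rho[:, None],            # direction cosines
                   -np.ones((4, 1))])                       # clock-offset column
    x += np.linalg.lstsq(H, pr - (rho - x[3]), rcond=None)[0]

print("position error (m):", np.linalg.norm(x[:3] - truth))
print("recovered clock offset (s):", x[3] / c)
```

The column of −1s is the clock-offset term that reappears in the measurement matrix H of Eq. (83) below.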

Self-Contained Systems

These electronic navigation systems are called self-contained because the navigation equipment required is located entirely on the vehicle, i.e., operation of the system is not dependent on any outside sources or stations. These systems can be classified as radiating or nonradiating. The radiating systems described below are (1) the radar altimeter, (2) the airborne mapping radar, (3) map-matching systems, and (4) Doppler navigation radars. Nonradiating systems offer essentially complete protection against radio interference or jamming. The two systems in this category described below are inertial navigation systems and celestial navigation systems.

Radar Altimeter (Ref. 1, Chapter 10). This device (also known as a radio altimeter) is a small radar with an antenna on the bottom of an aircraft generating a beam toward the earth's surface. The signal is back-scattered by the surface and received and processed by the radar, which measures the two-way (round-trip) delay experienced by the signal, thereby generating altitude above the surface. The modulation may be frequency-modulated continuous wave (FM-CW) or pulse modulation. For civil aviation use, the systems typically have a range of 0 to 2500 ft (0 to 750 m), are used for landing guidance, and are of the FM-CW type. In FM-CW systems, the carrier is frequency modulated in a triangular pattern; a portion of the transmitted signal is mixed with the received signal, and the resulting beat frequency is directly proportional to altitude above terrain. In pulse radar altimeters, used primarily in military aircraft, the carrier is modulated by narrow pulses, and the time of reception of the leading edge of the return pulses is measured with respect to their time of transmission. The time difference is a direct measure of the two-way distance to the ground directly below the aircraft. The frequency band of operation in both types of altimeters is 4200 to 4400 MHz. Typical accuracy performance is 2 ft or 2 percent. Military aircraft and helicopters use these altimeters for terrain avoidance.

Mapping Radars (Ref. 1, Chapter 11). These radars scan the ground using specially shaped beams, which effectively map the terrain. They are used by pilots for navigation by recognizing certain terrain features, for example, rivers and bridges, and by manually or semiautomatically position fixing their navigation systems through designation of the known ground mapped objects. Synthetic aperture radars (SARs) provide particularly high resolution of the mapped terrain, making these position update functions very accurate.

Terrain (Map) Matching Systems (Ref. 1, Chapter 2). The output of an airborne mapping radar or a radar altimeter can be used to generate a digital map or terrain profile, which is then compared with on-board stored digital maps or terrain profiles, in order to allow a military aircraft or missile to automatically fly a prescribed track.

Doppler Radar Navigation (Ref. 1, Chapter 10). Radio waves that are transmitted from a moving aircraft toward the ground and back-scattered by the ground to the aircraft experience a change of frequency, or Doppler shift, which is directly proportional to the ground speed of the aircraft and the cosine of the angle between the radiated beam center line and the velocity vector. A Doppler navigation radar consists of a transmitter, a receiver, and a frequency tracker, which measures and tracks the Doppler shift of the signals in each


of three or four antenna beams directed at steep angles toward the earth's surface. Modern systems operate at the 13.325-GHz frequency. The transmission may be pulse modulated, frequency-modulated continuous-wave (FM-CW), or continuous wave (CW). Because the earlier, pulse-modulated systems were less efficient, current systems are either of the FM-CW or CW type. The FM-CW systems transmit sinusoidally modulated signals, which are mixed with the back-scattered signal in the receiver; the Doppler shift of one of the (Bessel function) sidebands of the mixed signals is measured and tracked. In the pure CW systems, the beat-frequency Doppler between the transmitted and received signals is measured and tracked. Typical transmitted antenna patterns consist of four narrow beams, directed at steep angles toward the ground and generated by planar array antennas. For the determination of three-dimensional velocity components, a minimum of three such beams is required. However, in modern systems, four beams are used because they are easily generated by planar array antennas, and the redundant information can provide higher accuracy and a self-test function.

If the antenna is fixed to the airframe, the ground velocity components are computed by resolving the beam Doppler shifts through pitch and roll angles obtained from a vertical gyro or inertial navigation system. In order to determine vehicle present position, the velocity components in aircraft coordinates are converted to earth-referenced coordinates by resolving them about true heading obtained from a heading reference, such as a magnetic compass corrected for variation, an attitude-heading reference, or an inertial navigation system. The velocity components are then integrated into distance traveled north and east from a previously known position (dead reckoning). The position accuracy of a Doppler navigation system is therefore a function of the accuracy of the heading reference. The basic velocity accuracy of modern lightweight Doppler radars is 0.2 percent or ±0.2 knot. These systems have been used by military aircraft and transoceanic airliners. Currently, Doppler radar navigation systems are widely used on military helicopters for navigation and hovering. Doppler radars have also been used for achieving lunar and planetary landings. The Doppler velocity measurement concept has been incorporated into modern airborne search and mapping radars (Ref. 1, Chapter 11) and is also used in sonar systems for measuring ship's velocity.
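The beam geometry enters through the cosine term; a minimal sketch with an assumed speed and beam angle:

```python
# Sketch: one-beam Doppler shift fd = 2*(V/lambda)*cos(gamma) at the
# 13.325-GHz operating frequency. Speed and beam angle are assumed.
import math

c = 3e8
lam = c / 13.325e9                 # wavelength, ~2.25 cm

V = 150.0                          # ground speed, m/s (~292 knots)
gamma = math.radians(70)           # beam angle from the velocity vector

fd = 2 * (V / lam) * math.cos(gamma)
print(f"Doppler shift ~ {fd/1e3:.1f} kHz")    # ~4.6 kHz
```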
Inertial Navigation Systems (Ref. 1, Chapter 7). Inertial navigation equipment, which is based on Newton's second law, can determine aircraft acceleration and direction of travel. From Newton's second law, it is known that the force acting on a body is proportional to its mass and acceleration. In one implementation, acceleration may be determined by measuring the deflection of a spring attached to a known mass. If the acceleration is doubly integrated, the distance traveled can be determined. An inertial navigation system (INS), consisting of accelerometers, gyros, and processors, continuously determines position, velocity, acceleration, direction (with respect to north), and attitude (pitch and roll) of a vehicle. Since the position is obtained by doubly integrating the acceleration of the vehicle from a known position, an inertial navigation system is inherently a dead-reckoning system.

In a good quality inertial navigation system, the accelerometers must be capable of measuring acceleration with high accuracy; e.g., a 10^−4 g accelerometer bias error causes a 0.3-nmi peak position error, where g is the magnitude of the gravitational acceleration (32.2 ft/s^2). Misalignment of an accelerometer with respect to vertical causes it to read a component of the gravity vector g. Therefore, in some systems the three accelerometers (for sensing acceleration in three dimensions) are mounted on a gimbaled platform, which is stabilized by gyroscopes in order to keep the accelerometers in a horizontal plane. The drift rate of the gyroscopes must be low, since a 0.017°/h drift rate gives rise to a 1-nmi/h average position error. In many modern systems, the inertial sensors are strapped to the vehicle, e.g., the aircraft, and data from the gyros are used to correct the data from the accelerometers; these are called strapdown systems. The basic inertial navigation system error equations for motion over the earth give rise to a sinusoidal oscillatory behavior at the so-called Schuler frequency, which is derived from the square root of the ratio of the magnitude of the gravitational acceleration g to the radius of the earth. Its period is 84.4 min. As a result, the inertial position and velocity errors increase linearly with time, with oscillations at the Schuler frequency superimposed. Inertial navigation systems must be aligned initially as accurately as possible, and their accuracy degrades with time, even when stationary (due to gyro drift and accelerometer bias). Alignment is degraded at high latitudes (above 75°) and in a moving vehicle.

The gyroscopes used in earlier systems were precision mechanical devices. More recently, ring laser gyros (RLGs) have been developed and are used in many modern inertial systems. Their operation is based on the use of an optical laser that generates two counterrotating light beams within a triangular structure. When that structure is physically rotated around the axis normal to the plane of the structure, the beat or difference frequency of the light beams is proportional to the angular rate and is sensed as an output at one corner of the structure (Sagnac effect). This results in the device being a


This results in the device being a rate gyro. These gyros are less expensive than mechanical gyros. Another gyro based on a similar optical concept, which has been developed and shows great promise, is the fiber-optic gyro (FOG). Typical modern inertial navigation systems have position accuracies between 0.5 and 1 nmi/h.

Celestial Systems (Ref. 1, Chapter 12)

These systems operate by making optical observations of celestial bodies and solving the astronomical triangle. Accurate knowledge of the attitude of the vehicle with respect to the local horizontal (pitch and roll) is required to obtain optimum performance with these systems. Fortunately, this information is readily available from inertial instruments or inertial navigation systems. Celestial systems are particularly useful in bounding the long-term errors of inertial navigation systems. Previously they were used extensively on all types of ships and on commercial airliners for position fixing. Currently, they are used primarily on military aircraft.

GEOMETRIC DILUTION OF PRECISION

In hyperbolic and pseudoranging navigation systems, such as Loran-C and GPS, there is a significant effect on accuracy due to the relative geometry between the user vehicle and the transmitter sources. This effect is represented by a nondimensional term called geometric dilution of precision (GDOP). GDOP is essentially the factor by which the basic range-difference or ranging error is multiplied to obtain the resulting position error.

Horizontal Hyperbolic System GDOP

In a two-dimensional hyperbolic navigation system such as Loran-C, using a chain of three stations (triad), the GDOP factor is given by

\mathrm{GDOP} = \frac{1}{\sin(\phi_1 + \phi_2)} \left[ \frac{1}{\sin^2 \phi_1} + \frac{1}{\sin^2 \phi_2} + \frac{2\rho \cos(\phi_1 + \phi_2)}{\sin \phi_1 \, \sin \phi_2} \right]^{1/2}    (82)

where \phi_1, \phi_2 = half-angles subtended by the two station pairs, \rho = correlation coefficient between the two lines of position (LOPs) (typically assumed to be 0.5), and \phi_1 + \phi_2 = crossing angle of the LOPs. The radial standard-deviation horizontal position error is the product of GDOP and the standard deviation of the range-difference (LOP) measurement error.
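A direct transcription of Eq. (82), as reconstructed above, into Python (a sketch; angles are in radians and the function name is illustrative):

import math

def loran_triad_gdop(phi1, phi2, rho=0.5):
    # phi1, phi2: half-angles subtended by the two station pairs, rad
    # rho: correlation coefficient between the two LOPs
    s1, s2 = math.sin(phi1), math.sin(phi2)
    radicand = (1.0 / s1**2 + 1.0 / s2**2
                + 2.0 * rho * math.cos(phi1 + phi2) / (s1 * s2))
    return math.sqrt(radicand) / math.sin(phi1 + phi2)

# Example: 30 and 40 deg half-angles give a GDOP of about 2.9, so a
# 100-m LOP error would yield roughly a 290-m radial position error.
print(loran_triad_gdop(math.radians(30.0), math.radians(40.0)))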

Three-Dimensional Pseudoranging System GDOP

For pseudoranging satellite-based systems, such as GPS and GLONASS, which provide the three-dimensional components of user position and time, the GDOP factor is given by

\mathrm{GDOP} = [\mathrm{Trace}\,(H^{T}H)^{-1}]^{1/2}    (83)

where H is the measurement matrix relating the pseudorange measurements to the three user position components and the user time offset (bias) from system time, i.e., H X_u = PR. H is defined such that its ith row (for satellite i), h_i, is given by h_i = (1_{xi}, 1_{yi}, 1_{zi}, -1), where the elements 1_{xi}, 1_{yi}, 1_{zi} are the direction cosines from the user position to satellite i


(H has at least four rows). X_u is the vector of the user x, y, z position coordinates (m) and the time offset expressed in meters, i.e., X_u = (x_u, y_u, z_u, c\Delta T_u)^T, where \Delta T_u = user time offset (s) and c = speed of light (3 × 10^8 m/s). Trace(·) indicates the sum of the diagonal elements of (·), and PR is the vector of measured pseudoranges to the satellites, including errors. The coordinates x, y, z can be the orthogonal right-handed cartesian earth-centered earth-fixed (ECEF) coordinates, with z from the center of the earth to the North Pole, x through the Greenwich meridian, and y orthogonal to x in the plane of the equator. The solution for x_u, y_u, z_u, \Delta T_u involves four (or more) pseudoranges and requires linearization about an estimated position. This is typically done using a minimum-variance, Kalman filter, or least-squares estimator and leads to pseudorange residuals such that

\delta PR = H\,(\delta X_u^{T} \;\; c\,\delta \Delta T_u)^{T}    (84)

where \delta PR = pseudorange error vector (four or more elements), \delta X_u = user position error vector, and \delta \Delta T_u = user time-offset error. If the four (or more) pseudorange errors are statistically uncorrelated, with equal zero-mean one-sigma values of \sigma_{PR}, the one-sigma position and time error is given by

\sigma_{x,t} = \mathrm{GDOP} \cdot \sigma_{PR} = (\sigma_x^2 + \sigma_y^2 + \sigma_z^2 + c^2 \sigma_{\Delta t}^2)^{1/2}    (85)

The position dilution of precision (PDOP) is computed by deleting the fourth diagonal element from the trace in the GDOP equation. The time dilution of precision (TDOP) is the square root of the fourth diagonal element. The first two diagonal elements are used to compute HDOP, and the third diagonal element alone is used to compute VDOP. The ECEF coordinate residuals must be transformed to local tangent-plane coordinates in order to compute HDOP and VDOP (Ref. 1, Chapter 5).
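The DOP definitions above translate directly into a short numerical sketch (using NumPy; H is assumed already expressed in local tangent-plane coordinates so that HDOP and VDOP are meaningful):

import numpy as np

def dops(H):
    # Each row of H is (1xi, 1yi, 1zi, -1): the user-to-satellite
    # direction cosines plus the clock column; H has >= 4 rows.
    Q = np.linalg.inv(H.T @ H)                    # Eq. (83) covariance shape matrix
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])   # delete the 4th diagonal term
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])             # first two diagonal terms
    vdop = np.sqrt(Q[2, 2])                       # third diagonal term
    tdop = np.sqrt(Q[3, 3])                       # fourth diagonal term
    return gdop, pdop, hdop, vdop, tdop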

INTERNATIONALLY STANDARDIZED SYSTEMS

The systems described next are used by hundreds of thousands of vehicles throughout the world. Standardization is therefore desirable; it refers principally to the radiated signal characteristics and the performance of the system. Each manufacturer and country may decide on the equipment design that best satisfies the operational requirements.

Omega (Ref. 1, Chapter 4)

This is a hyperbolic navigation system that provides worldwide service with a data rate high enough to be used by aircraft. It is a hyperbolic phase-comparison system using eight stations, located in Norway, Liberia, Hawaii, North Dakota, La Reunion Island, Argentina, Australia, and Japan. The system operates in the 10- to 14-kHz very low frequency (VLF) band, using a fixed transmission format; the overall signal format is shown in Fig. 23.3.5. Each station transmits on four common frequencies and one station-unique frequency. The signal frequencies are time-shared among the stations, so that a given frequency is transmitted by only one station at a given time. Each station transmits on one frequency at a time, for about 1 s, the cycle being repeated every 10 s. At the transmitted VLF frequencies, lane ambiguity occurs approximately every 8 n.mi, i.e., every half-wavelength. However, by using the beats between the basic frequencies, the ambiguous lane width can be extended, e.g., to 24 n.mi with two frequencies and to 72 n.mi with three frequencies; most modern receivers use at least three frequencies. In addition, continuous lane counting from a known starting point is used to avoid lane-ambiguity problems. Omega signal propagation takes place essentially in the waveguide formed by the surface of the earth and the ionosphere. Most airborne receivers also use the system in the rho-rho-rho mode, backed up by ten very low frequency (VLF) communication transmitter stations in the 16- to 24-kHz band.
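The lane arithmetic described above follows directly from the transmitted wavelengths; a small sketch (10.2 and 13.6 kHz are two of the four common Omega frequencies):

C = 299_792_458.0          # m/s

def lane_width_nmi(f_hz):
    # A hyperbolic lane is half a wavelength wide on the baseline.
    return C / f_hz / 2.0 / 1852.0

print(lane_width_nmi(10_200.0))                    # ~7.9 n.mi single-frequency lane
print(C / (2.0 * (13_600.0 - 10_200.0)) / 1852.0)  # ~24 n.mi two-frequency beat lane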


FIGURE 23.3.5 Omega system signal transmission format (frequencies in kilohertz). Frequencies marked † are the unique frequencies for the respective stations.

There is a marked difference in propagation time between day and night (diurnal variation), but this variation is predictable and amenable to computer storage; it can be applied as a correction in the receiver as a function of time, date, and approximate user position. A map of the earth's conductivity can also be stored in the computer. The position accuracy of Omega is 2 to 4 n.mi (3.7 to 7.4 km), 2 DRMS.

Loran C (Ref. 1, Chapter 4)

This is a hyperbolic system, with coverage in the United States, the North Atlantic, the Mediterranean, and the North Pacific. The combination of low-frequency transmission and pulses provides long range and eliminates skywave contamination. The system uses a carrier frequency of 100 kHz and matches the individual rf cycles within each pulse, thereby gaining added resolution and accuracy. Since all stations operate on the same frequency, discrimination between chains is by the pulse-repetition frequency. A typical chain comprises a master and two secondaries about 600 n.mi from the master. At the receiver, the first three rf cycles are used to measure the signal time of arrival; at this point the pulse is at about half amplitude. The remainder of the pulse is ignored, since it may be contaminated by skywave interference. To obtain greater average power at the receiver without resorting to higher peak powers, the master transmits groups of nine pulses 1000 µs apart, and the secondaries transmit groups of eight pulses, also 1000 µs apart. These groups are repeated at rates ranging from 10 to 25 per second. Within each pulse, the rf phase can be varied in a code for communication purposes. At the receiver, phase-locked loops track the master and secondary signals and generate their time differences. A digital computer can provide direct readouts of latitude and longitude and steering information. There has been tremendous growth in the use of Loran by general aviation. Position accuracy is 0.25 n.mi (0.46 km) 2 DRMS.

Decca (Ref. 1, Chapter 4)

Unlike all other systems described herein, Decca is a proprietary system, with most of the stations owned by the Decca company and the receivers leased by it to the users, who are primarily in Europe. It is a continuous-wave hyperbolic system operating in the 70- to 130-kHz band.


It uses the same frequency band as Loran-C, but since it does not use pulses, it is subject to skywave contamination, resulting in reduced range. A typical chain comprises four stations, one master and three secondaries, separated by about 70 n.mi and arranged in a star configuration. Each station is fed with a signal that is an accurately phase-controlled multiple of a base frequency f in the 14-kHz region: the master at 6f, the “red” secondary at 8f, the “green” secondary at 9f, and the “purple” secondary at 5f. At the receiver, these four signals are received, multiplied, and phase-compared. There are about 25 Decca chains in Europe and about 20 elsewhere in the world.

Beacons

As sources for shipboard and airborne direction finders, beacons are the oldest and most numerous navigation aids in the world. Since the frequency bands of beacons are adjacent to the amplitude-modulation (AM) broadcast band, receivers are easily designed to serve a dual purpose, and they are consequently popular with boat operators and with aircraft of all types. Direction-finding accuracy can be as good as ±3°.

Very High Frequency Omnidirectional Range (VOR) (Ref. 1, Chapter 4)

This aviation system uses the VHF band and is thus free of atmospherics and skywave contamination. It places the directional burden on the ground, rather than in the aircraft, where more extensive means can be employed to alleviate site errors. Line of sight limits its service area to about 200 n.mi for high-flying aircraft, and some stations are intended for only 25-n.mi service to low-flying aircraft. There are more than 1000 stations in the United States and about an equal number in the rest of the world. There are two variations, conventional VOR and Doppler VOR, with the latter providing increased site-error reduction.

Conventional VOR. It operates on 40 channels, 100 kHz apart, between 108 and 112 MHz, interleaved between ILS localizer channels, and on 120 channels, spaced 50 kHz, between 112 and 118 MHz. The airborne receiver is frequently common with the airborne localizer receiver and may use the same airborne antenna. Power output from the ground transmitter varies from 25 to 200 W, depending on antenna design and on the desired service area. The ground-antenna pattern forms a cardioid in the horizontal plane that is rotated 30 times per second. The CW transmission is amplitude-modulated by a 9960-Hz tone that is frequency-modulated ±480 Hz at a rate of 30 Hz. This latter, 30-Hz “reference” tone, when extracted in the airborne receiver, is compared with the 30-Hz amplitude modulation produced by the rotating antenna. The phase angle between these two 30-Hz tones is the bearing of the aircraft with respect to north. VOR is the internationally standardized en route navigation system, widely implemented throughout the world, and meets the FAA en route integrity requirement for a warning within 10 s of a failure of the ground equipment. The basic VOR instrumental error is ±1.4°, 2 sigma. Site errors can degrade this error; this led to the development of the Doppler VOR, described next.

Doppler VOR. Doppler VOR reduces site error about tenfold by using a large-diameter antenna array at the ground station. This array consists of a 44-ft-diameter circle of antennas. Each antenna is sequentially connected to the transmitter so as to simulate the rotation of a single antenna around the 44-ft-diameter circle at 30 r/s. The receiver sees an apparent Doppler shift in the received rf of 480 Hz at a 30-Hz rate, at a phase angle proportional to the receiver's bearing with respect to north. This signal is therefore identical to the conventional VOR reference tone. It remains to transmit a 30-Hz AM tone, on a subcarrier separated 9960 Hz from the carrier, as a reference, in order to radiate a signal identical to the conventional one, receivable in an identical receiver but benefiting from a manifold increase in ground-antenna aperture.
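The bearing extraction amounts to measuring the phase between the two 30-Hz tones; a simplified baseband sketch using NumPy (the sample rate, the ideally demodulated tones, and the 123° test bearing are assumptions):

import numpy as np

fs = 36_000.0                           # sample rate, Hz (assumed)
t = np.arange(0, 0.5, 1.0 / fs)         # 15 complete 30-Hz cycles
bearing = 123.0                         # bearing to simulate, deg

ref = np.cos(2 * np.pi * 30 * t)                          # FM-derived reference tone
var = np.cos(2 * np.pi * 30 * t - np.radians(bearing))    # AM tone from rotating pattern

# Quadrature correlation of the variable tone against the reference:
i = np.sum(var * np.cos(2 * np.pi * 30 * t))
q = np.sum(var * np.sin(2 * np.pi * 30 * t))
print(np.degrees(np.arctan2(q, i)) % 360.0)   # indicated bearing, ~123 deg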

Distance-Measuring Equipment (DME) (Ref. 1, Chapter 4)

DME is an interrogator-transponder two-way ranging system (Fig. 23.3.1). About 2000 ground stations and 70,000 pieces of airborne equipment are in use worldwide. The airborne interrogator transmits 1-kW pulses of 3.5-µs duration, 30 times a second, on one of 126 channels 1 MHz apart, in the band 1025 to 1150 MHz.


The ground transponder replies with similar pulses on another channel, 63 MHz above or below the interrogating channel. (This allows use of the transmitter frequency as the receiver's local-oscillator frequency if the intermediate frequency is 63 MHz.) A single antenna is used at both ends of the link. In order to reduce interference from other pulse systems, paired pulses are used in both directions, their spacing being 12, 30, or 36 µs. The fixed delay in the ground transponder is 50 µs. In the airborne unit, the received signal is compared with the transmitted signal, their time difference is derived, and the distance is determined and displayed. Ground transponders are arranged to handle interrogations from up to 100 aircraft simultaneously, each aircraft recognizing the replies to its own interrogations by virtue of the pulse-repetition frequency being identical to that of the interrogation. Analog models require about 20 s for identity to be initially established (after which a continuous display is provided); modern digital models perform the search function in less than 1 s. The DME is nearly always associated with a VOR, the two systems forming the basis for a rho-theta area-navigation (RNAV) system. Some use is being made of DME in the rho-rho mode, particularly by airlines. In this mode, the airborne interrogator scans a number of DME channels, automatically selecting two or more having the best signal and thus providing a fix of greater accuracy than would be obtained from VOR/DME. This arrangement, known as DME/DME, is thus a way of achieving a two-way-ranging, multiple-source navigation system. DME is an international standard which, together with VOR, has been the basis for the most widely used line-of-sight rho-theta aircraft navigation system. Standard airborne DME equipment provides a two-sigma accuracy of ±0.1 n.mi (±185 m).
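The two-way ranging computation is a one-line matter once the 50-µs transponder delay is removed (a sketch; the 173.5-µs example time is illustrative):

C = 299_792_458.0          # m/s
GROUND_DELAY = 50e-6       # fixed transponder delay, s

def dme_range_nmi(round_trip_s):
    # Interrogation-to-reply time, less the transponder delay, halved.
    return C * (round_trip_s - GROUND_DELAY) / 2.0 / 1852.0

print(dme_range_nmi(173.5e-6))   # ~10 n.mi (about 12.36 us of round trip per n.mi)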

Tacan (Ref. 1, Chapter 4)

This is a military modification of DME, using the same channels and adding a bearing capability in the same frequency band. This results in a small ground-antenna system, a property useful for ships, and the system is used widely on aircraft carriers. The DME antenna is replaced by a rotating directional antenna generating two superimposed patterns: one a cardioid rotating at 15 r/s, the other a nine-lobe pattern, also rotating at 15 r/s. The squitter pulses and replies are amplitude-modulated as the antenna rotates. Reference pulses are transmitted at 15 and 135 Hz. In the aircraft, a coarse phase comparison is made at 15 Hz, supplemented by a fine comparison at 135 Hz. The overall instrumental accuracy of Tacan is 0.2°, two sigma, in bearing and 0.1 n.mi (185 m), two sigma, in distance.

Vortac (Ref. 1, Chapter 4)

In countries having a common air traffic control system for civil and military users (e.g., the United States and Germany), the civil rho-theta system is implemented by the use of Tacan rather than DME for distance measurement. Tacan transponders are colocated with VOR stations, and civil aircraft get their DME service from the Tacan stations. In the United States, over 700 VORs have colocated Tacan transponders.

Instrument Landing Systems (ILS) (Ref. 1, Chapter 13)

This is the internationally standardized aircraft approach and landing system, which provides a fixed electronic path to almost touchdown at major runways throughout the world. The ground equipment is made up of three separate elements: the localizer, giving left-right guidance; the glide slope, giving up-down guidance; and the marker beacons, which define progress along the approach course. Using VHF, ILS is free of atmospheric and skywave effects but is subject to site effects. The localizer operates on 40 channels spaced 50 kHz apart in the 108- to 112-MHz band, radiating two antenna patterns that give an equisignal course on the centerline of the runway, the transmitter being located at the far end of the runway. The left-hand pattern is amplitude-modulated at 90 Hz, the right-hand pattern at 150 Hz (Fig. 23.3.6a). The airborne receiver detects these tones, rectifies them, and presents a left-right display in the cockpit. The accuracy is better than ±0.1°. Minimum performance calls for the airborne meter to remain hard left or hard right to a minimum of ±35° from the centerline, i.e., there must be no ambiguous or “false” courses within this region.


FIGURE 23.3.6 Instrument Landing System: (a) minimum ICAO localizer pattern; (b) localizer pattern with back course and clearance; (c) glide slope pattern; (d) marker beacons (From S. H. Dodington, Electronic Engineers’ Handbook, McGraw Hill, 3d ed.).

More sophisticated systems exist in which a usable “back course” (with reverse sense) is obtained, and a separate transmitter, offset by about 10 kHz, provides “clearance,” so that no ambiguities exist throughout ±180° (Fig. 23.3.6b). The glide-slope transmitter, of about 7-W power, is located at the approach end of the runway and up to about 500 ft to the side (Fig. 23.3.6c). It operates in the 329- to 335-MHz band, each channel being paired with a localizer channel; in the airborne receiver, both channels are selected by the same control. Two antenna patterns are radiated, giving an equisignal course about 3° above the horizontal. The lower pattern is modulated at 150 Hz, and the upper pattern at 90 Hz. The airborne receiver filters these tones, rectifies them, and presents the output on a horizontal zero-centered meter mounted in the same instrument case as the localizer display, the two together being called a cross-pointer display. The accuracy is better than ±0.1°. Required accuracies for ILS and MLS are sometimes specified in distance (meters) at the decision height for the three landing categories described below.9 The glide slope suffers from course bends because of the terrain in front of the array and is generally not depended on below 50 ft of altitude; in this phase of the landing maneuver, either visual observation or a radar altimeter is frequently used. Marker beacons operate at a fixed frequency of 75 MHz and radiate about 2 W upward toward the sky with a fan-beam antenna pattern whose major axis is across the direction of flight (Fig. 23.3.6d). There is an “outer” marker about 5 n.mi from touchdown and a “middle” marker about 3500 ft from touchdown. There can also be an “inner” marker at about 1000 ft from the touchdown threshold. Each type is modulated by audio tones that are easily recognized as the aircraft passes through its antenna pattern. Alternatively, differently colored lamps are set to light in the cockpit as each marker is passed. Categories of ILS performance have been established for different visibility conditions and the quality of the installation. These place the following minimum limits on how an aircraft may approach the touchdown point:

Category I: 200-ft ceiling and 1/2-mi visibility
Category II: 100-ft ceiling and 1/4-mi visibility
Category III: zero ceiling and 700-ft visibility

ILS meets the FAA requirements for approach-mode integrity; that is, failure of the ground equipment is evident to the pilot within 2 s.


Microwave Landing System (MLS) (Ref. 1, Chapter 13)

While the ILS described above has served well worldwide for over 30 years, it has been thought that future requirements will necessitate more channels, more flexible (curved) approach paths, and greater freedom from site effects. These can be obtained at microwave frequencies, where greater antenna directivity and a wider frequency spectrum are available. Since a range of only 20 n.mi or so is needed, line-of-sight limitations pose no problem. Angular guidance is obtained from fan-shaped beams that scan the airspace, using 200 radio frequencies between 5.00 and 5.25 GHz. The angle perceived in the aircraft is proportional to the time it takes the beam to pass through the aircraft position, first in one direction and then in the other. The scanning rate is 50 µs per degree. The system is thus known as a time-referenced scanning beam (TRSB) system. Use of the high microwave frequencies and the nature of the ground-based antenna patterns, as well as the possible inclusion of a high-precision version of DME, can provide accuracy higher than that of ILS at some sites, together with a capability for curved approaches. However, in view of the great potential for using GPS for approach and landing, the U.S. FAA has decided to curtail development of the MLS. The civil aviation community in Europe, however, has shown continued interest in using MLS, in view of the unique requirements there, and the U.S. military services have had continued interest in microwave landing systems. The official basic specified accuracy performance of MLS is the same as that of ILS.9
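TRSB angle decoding can be sketched as follows; the measurement is the interval between the “to” and “fro” passages of the beam, and the zero-angle interval t0 and the sign convention used here are assumptions for illustration only:

SCAN_RATE = 50.0     # beam travel, us per degree (value quoted above)

def trsb_angle_deg(t_to_us, t_fro_us, t0_us):
    # The to-fro interval shortens (or lengthens) by twice the scan
    # time per degree as the aircraft moves off the zero-angle line.
    return (t0_us - (t_fro_us - t_to_us)) / (2.0 * SCAN_RATE)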

JTIDS-RelNav (Ref. 1, Chapter 6)

The Joint Tactical Information Distribution System (JTIDS) is a decentralized, military, spread-spectrum data communication and navigation system using wide-bandwidth phase coding, frequency hopping, and time-division multiple access. It operates in the 960- to 1215-MHz band. It includes a relative navigation (RelNav) function that permits all members of a JTIDS network, such as aircraft and ships, to determine their positions accurately in both absolute and relative grid coordinates. Its operation is based on highly precise TOA measurements of signals received from cooperating units, and the system includes a means for independent precise time synchronization of each unit with system time. Two modes of operation are provided: one-way synchronous ranging and pseudoranging.

Position Location Reporting System (PLRS) (Ref. 1, Chapter 6)

The Position Location Reporting System (PLRS) is a centralized, military, spread-spectrum position location and navigation system for military aircraft, land vehicles, and personnel. It uses wide-bandwidth phase coding, frequency hopping, and time-division multiple access, and it also incorporates a data communications capability. It operates in the 420- to 450-MHz frequency band. It provides military commanders with information on the location of all of their elements, and provides each unit with its own position and with relative guidance information, in the absolute Military Grid Reference System (MGRS). Its operation is based on multilateration using highly precise TOA measurements of signals exchanged between units. Multiple relays are used to combat line-of-sight problems. The system includes two modes of operation, namely, two-way (round-trip) ranging and one-way synchronous ranging.

IFF

To distinguish friend from foe, radars employ an interrogator-transponder system operating at a set of frequencies different from those of the basic radar. The “friend” is assumed to be transponder-equipped and the “foe” is not. The interrogator is pulsed at about the same time as the radar, and the transponder produces coded replies shortly after the direct radar reply from the aircraft skin. In theory, even if the foe used the same transponder equipment, he would not know the code of the day.


This “identification of friend or foe” system became known as IFF. Interrogation takes place at 1030 MHz and replies at 1090 MHz. Typical pulse powers are 500 W, with a 1-µs pulse length for interrogation and a 0.5-µs length for reply.

SSR (Ref. 1, Chapter 14)

Secondary surveillance radar (SSR) is an outgrowth of the military identification-friend-or-foe (IFF) system, using the same frequencies. It is the principal means by which air traffic controllers identify and track civil and military aircraft worldwide. Secondary surveillance radars are the primary components of the FAA's Air Traffic Control Radar Beacon System (ATCRBS). The SSR (beacon) units are frequently colocated with, and mounted on top of, conventional primary radars. The beacon ground stations interrogate airborne transponders at 1030 MHz and receive replies at 1090 MHz, measuring distance to an aircraft by two-way (round-trip) ranging and bearing by means of the SSR's narrow antenna beam. Paired pulses are used for interrogation, and a third pulse between the two is radiated omnidirectionally to reduce triggering by side lobes; the airborne transponder replies only when the directional pulses are stronger than the omnidirectional pulse. The reply comprises a train of up to 14 pulses, lasting 21 µs, which are currently combined into 4096 codes. These can be used to identify the aircraft or to communicate its altitude to the ground controller. The major problem of SSRs is the interference (garbling) that occurs when two or more aircraft are at about the same azimuth and distance from the interrogator. To alleviate this effect, the U.S. FAA has developed a new mode of interrogation coding (Mode S), which allows each aircraft to be addressed by a discrete code and thus to “speak only when spoken to.” This system is compatible with the present SSRs, to allow an orderly transition. In the current system, the replies are pulse-coded with identity (Mode A) and altitude (Mode C). Mode S provides higher angular accuracy through monopulse techniques, discrete addressing of each aircraft, more efficient code modulation, and a much higher-capacity data-link capability.

ATCRBS

The ATCRBS is the ICAO-standard ground-based civil aircraft surveillance system, based on the use of primary radars and secondary radars (SSRs) as sensors, as well as on extensive data-processing, display, and communication systems. It is used by the air traffic control authorities to track aircraft.

TCAS

The Traffic Alert and Collision Avoidance System (TCAS) is based on SSR technology. In operation, a TCAS-equipped aircraft interrogates all aircraft in its vicinity; the interrogated aircraft respond by means of their SSR transponders, and the responses are received by the interrogating aircraft, which can thus determine the relative altitudes and positions of the two aircraft. Computation of the relative rate of change of range (closing velocity) provides an indication of whether the two aircraft pose a collision threat. Thus, unmodified SSR transponders are used at the standard SSR frequencies (1030 and 1090 MHz). Three versions of TCAS have been or are under development: TCAS I, II, and III. TCAS I is intended for general aviation; it tracks proximate aircraft in range and altitude and displays traffic advisories (TAs) for intruders, thereby aiding the flight crews in visually acquiring threat aircraft. The most widely implemented version is TCAS II, currently used by the commercial airlines. In this version, not only are TAs displayed but, if a threat is calculated, a resolution advisory (RA) is displayed, showing a vertical escape maneuver (e.g., “climb”) for avoiding collision. The two TCAS aircraft must exchange escape-maneuver intentions in order to assure that the maneuvers are complementary; the SSR Mode S data link can be used for that purpose. The U.S. Congress has passed a law making TCAS II mandatory on most airline-type aircraft. In 1996, improved versions of TCAS were under development, notably those directed at a capability for horizontal escape maneuvers. Techniques for the broadcast by each aircraft of its on-board-derived position via data link are also under investigation, under the names ADS-B and TCAS IV.
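The closing-velocity test above is commonly expressed as a time-to-closest-approach (“tau”) computation; the threshold values in actual TCAS logic vary with altitude band and are not shown here:

def tau_seconds(range_nmi, closing_kt):
    # Projected time to closest approach; an advisory is issued when
    # tau falls below the applicable TA or RA threshold.
    return float("inf") if closing_kt <= 0 else range_nmi / closing_kt * 3600.0

print(tau_seconds(5.0, 450.0))   # 40 s to go at 5 n.mi and 450-kt closure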


Transit (Ref. 6)

This satellite-based system was originally developed for use by the U.S. Navy. However, after its release for civil use, the marine industry and land mobile users adopted it for its worldwide service and high accuracy. Each satellite operates in a polar orbit at 600-n.mi altitude and radiates two CW frequencies, near 150 and 400 MHz. As a satellite passes over an observer on the surface of the earth, these frequencies undergo a Doppler shift, and the user receiver records the Doppler frequency history of the signal. Since the satellite orbit is accurately known as a function of time, the time of zero Doppler shift and the slope of the Doppler curve at that time are sufficient to determine the user's position on the earth. The satellite positions are continually determined by a ground-based tracking network and are broadcast by the satellites to the users, and the user computes its position from the Doppler history. Six to seven satellites in polar orbit provide global coverage. While a single radiated frequency would lead to a position accuracy of about 500 m 2 DRMS, two frequencies allow errors due to ionospheric propagation effects to be greatly reduced; the resulting accuracy with a two-frequency receiver is 25 m 2 DRMS, and most receivers use both transmitted frequencies. The system is not suitable for aircraft because of its low update rate: with six satellites in orbit, the satellites pass a given point, on the average, 90 min apart. The system was still in operation in 1996, but the 1992 U.S. Federal Radionavigation Plan (Ref. 9) lists Transit as a candidate for phase-out in the late 1990s, based on the operational status of GPS.

Global Positioning System (GPS) (Ref. 1, Chapter 5)

This satellite-based radio system was developed by the U.S. Air Force to provide worldwide coverage and high-accuracy three-dimensional position, velocity, and time, while permitting completely passive (receive-only) operation by all types of military users. The system has now found wide acceptance by both military and civil users, e.g., military aircraft, ships, land vehicles, and foot soldiers, and a large variety of civil users, such as commercial and general-aviation aircraft, commercial ships and pleasure boats, automobiles and trucks, and operators of surveying systems. Two services are available: the precise positioning service (PPS) for authorized (military) users provides a horizontal position accuracy of 21 m 2 DRMS, a two-sigma vertical accuracy of 29 m, and a one-sigma time accuracy of 100 ns; the standard positioning service (SPS) for all other users provides a 2-DRMS (95 percent probability) horizontal position accuracy of 100 m, a two-sigma vertical accuracy of 140 m, and a one-sigma time accuracy of 170 ns. The orbital configuration of the satellites is designed to provide a GDOP normally near 2.3, and intended always to be lower than 6. The GPS consists of 24 satellites, including three operational spares. The satellites are in 12-h orbits, with four satellites in each of six orbital planes inclined at 55°, all at an orbital altitude of 10,900 n.mi. The satellites transmit highly synchronized, pseudonoise-coded, wide-bandwidth signals, which include data on their ephemerides and clock errors. A ground-based master control station (MCS) and five monitor stations track the satellites, periodically determine satellite ephemerides and clock errors, and uplink these to the satellites via three uplink stations. The overall system configuration of GPS is shown in Fig. 23.3.7. The system operates on two frequencies, 1575.42 MHz (L1) and 1227.6 MHz (L2), to permit compensation for ionospheric propagation delays. The satellites transmit two codes, the 1.023-Mbps C/A code and the 10.23-Mbps P code; the latter is encrypted into a Y code for military users. Each satellite has a unique code. System data (e.g., satellite ephemeris) are modulo-2 added to both codes at 50 b/s. GPS is a pseudoranging system (see Fig. 23.3.3). The user receiver determines at least four pseudoranges by TOA measurements with respect to its own clock time, and can also determine four pseudorange rates, or delta pseudoranges, via Doppler measurements with respect to its own clock frequency. From these measurements, the user receiver computes its own three-dimensional position coordinates and its clock time offset, as well as (in some receivers) its three-dimensional velocity coordinates and its clock frequency offset. The basic functional block diagram of a generic GPS receiver is shown in Fig. 23.3.8. GPS receiver processing includes both code- and carrier-tracking functions, which aid each other in two ways. The carrier (Doppler) tracking function sends Doppler velocity estimates to the code-tracking function so that the code-tracking loop bandwidth can be made very narrow; the code-tracking function sends the prompt (on-time, versus early or late) estimate of the tracked code to the carrier-tracking function, so that the code can be removed to allow tracking of the Doppler frequency.
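The pseudorange position computation described above can be sketched as an iterated linearized least-squares estimator (using NumPy; the earth-center starting guess and the fixed iteration count are illustrative simplifications):

import numpy as np

def gps_fix(sat_pos, pr, iters=8):
    # sat_pos: (n, 3) ECEF satellite positions, m; pr: n measured
    # pseudoranges, m (n >= 4). Returns user ECEF position (m) and
    # clock offset expressed in meters (c * dT_u).
    x, b = np.zeros(3), 0.0
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x, axis=1)       # geometric ranges
        u = (sat_pos - x) / rho[:, None]                # direction cosines to satellites
        H = np.hstack([u, -np.ones((len(pr), 1))])      # rows h_i = (1xi, 1yi, 1zi, -1)
        z = (rho + b) - pr                              # predicted minus measured
        d, *_ = np.linalg.lstsq(H, z, rcond=None)
        x, b = x + d[:3], b + d[3]
    return x, b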


FIGURE 23.3.7 GPS system configuration.

Since the civil aviation community requires very high integrity, i.e., warning of a failure, a number of system augmentations have been under development for this purpose. One is the receiver autonomous integrity monitoring (RAIM) function within the user receiver, which automatically detects and isolates a satellite failure. Another is the ground integrity broadcast (GIB), which consists of a network of ground-based monitoring stations that monitor all satellite signals and transmit certain error data to a central computer; the central computer computes integrity data and sends these to a satellite earth station, which then transmits the appropriate integrity information via geostationary satellites to all GPS user receivers equipped to receive these signals. In the United States, this system has been named the Wide Area Augmentation System (WAAS) and includes additional functions. Some applications, for example aircraft landing and airport surveillance, require higher accuracies than those available from the basic GPS service. For these, the differential GPS (DGPS) concept has been implemented (Fig. 23.3.9). This concept employs a GPS reference station whose position is precisely known. The reference station compares the predicted pseudoranges (from its known position) with the actually measured pseudoranges, computes the differences (corrections) for all satellites in its view, and broadcasts these over a separate data link to the vehicles in the general vicinity, say within a 100-n.mi (160-km) radius. The user receiver then applies these corrections to its own pseudorange measurements for the computation of its position.

FIGURE 23.3.8 Generic GPS receiver block diagram.


FIGURE 23.3.9 Differential GPS concept.

By means of this process, ionospheric and tropospheric propagation errors and satellite position errors common to the user and the reference station can be eliminated. With the DGPS technique, user position accuracies of 2 to 5 m can be obtained. A further refinement of this technique is the use of pseudosatellites (pseudolites) located on the ground and transmitting GPS-like signals to the user. This technique tends to reduce the GDOP of the system, notably in the vertical dimension. For certain very-high-accuracy applications, such as surveying and Category II/III landing, the carrier phase of the GPS signal, rather than only the code phase (pseudorange), is measured to determine user position. To do this successfully, the carrier cycle ambiguity (of 19 cm) must be resolved. Using GPS carrier-phase measurements on a postprocessing basis, centimeter accuracies have been obtained for surveying applications. In the late 1990s, GPS was expected to be in widespread use throughout the world by a very large variety of users; a U.S. Department of Commerce report predicted 1 million commercial users by the year 2000.
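The differential correction step itself is a per-satellite subtraction applied before the position solution is formed (a sketch; the dictionary-keyed interface and the sign convention of the broadcast correction are illustrative assumptions):

def apply_dgps(pr_measured, corrections):
    # corrections[sv] = reference-station predicted-minus-measured
    # pseudorange for satellite sv, received over the data link.
    return {sv: pr + corrections[sv] for sv, pr in pr_measured.items()}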

Global Navigation Orbiting Satellite System (GLONASS) (Ref. 1, Chapter 5)

GLONASS is a worldwide satellite radio navigation system developed by the Russian government; its concept is very similar to that of the U.S. GPS. It uses 24 satellites in 25,500-km-radius orbits (approximately 19,100-km altitude), with eight satellites in each of three orbital planes, at an inclination of 64.8° and an 8-day ground-track repeat. This inclination is somewhat higher than that of GPS, providing somewhat better polar coverage. Each GLONASS satellite transmits two codes, one wide-band and the other narrow-band, in two frequency bands. Unlike GPS, all satellites use the same two codes, but each satellite operates on slightly different frequencies. The higher frequency band currently extends from 1602 to 1615.5 MHz, and the lower band from 1246.4 to 1256.5 MHz. The higher GLONASS frequency assignments are likely to be changed in the future to avoid interference with certain communication-satellite frequencies. It is also planned to reuse the same frequency on satellites in antipodal positions (i.e., halfway around the globe) in order to conserve the frequency spectrum. The two GLONASS code clock frequencies are 0.511 and 5.11 MHz. The accuracy performance of the standard (nonmilitary) GLONASS service is a horizontal position accuracy of 100 m, a vertical accuracy of 150 m, a velocity accuracy of 15 cm/s, and a time accuracy of 1 µs. Integrated receivers that can process both GPS and GLONASS signals are under development, providing the use of a total of 48 satellites and increasing system availability.
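Because GLONASS separates satellites by frequency rather than by code, the channel carriers follow a simple rule (a sketch assuming the published 0.5625- and 0.4375-MHz FDMA channel spacings):

def glonass_freqs_mhz(k):
    # Frequency number k gives the paired upper- and lower-band carriers.
    return 1602.0 + 0.5625 * k, 1246.0 + 0.4375 * k

print([glonass_freqs_mhz(k)[0] for k in (1, 12, 24)])  # spans ~1602.6 to 1615.5 MHz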

SUMMARY OF ELECTRONIC NAVIGATION SYSTEM CHARACTERISTICS

Table 23.3.1 presents a summary of the technical and performance characteristics of the major electronic navigation and position-location systems currently in operational use worldwide. In the case of the accuracy values,


TABLE 23.3.1 Summary of Electronic Navigation System Characteristics. The table lists, for each system, its type, operating frequency (MHz), and absolute accuracy: Omega/VLF (0.010 to 0.014 MHz; 3.7 to 7.4 km), Loran C (0.100 MHz; 460 m), Decca (0.070 to 0.130 MHz), Beacons (0.200 to 1.6 MHz), VOR (108 to 118 MHz), DME (960 to 1215 MHz), Tacan (960 to 1215 MHz), ILS (108 to 112 and 329 to 335 MHz), MLS (5000 to 5250 MHz, with DME at 960 to 1220 MHz), JTIDS-RelNav (960 to 1215 MHz), PLRS (420 to 450 MHz), SSR/IFF (1030 and 1090 MHz), TCAS (1030 and 1090 MHz), Transit (150 and 400 MHz), GPS (1575 and 1227 MHz), GLONASS (1602 to 1616 and 1246 to 1257 MHz), radar altimeter (4200 to 4400 MHz), mapping radar (various), map matching (various), Doppler radar (13,325 MHz), and the self-contained inertial and celestial systems.

UNDERWATER SOUND SYSTEMS

TABLE 23.4.6 Target Strength of Simple Forms (Source: Urick, Ref. 3, Par. 9.5). The table gives the target strength, 10 log t, of simple reflecting forms (any convex surface; large and small spheres; infinitely long thick and thin cylinders; plates, disks, cones, and corner reflectors), with the defining symbols (a = radius of disk, L = length of edge of reflector, y = half angle of cone, S = total surface area of object) and the validity conditions (e.g., ka >> 1, r > a) for each form.

TABLE 24.1.8 Temperature-sensing techniques: typical range, °C (e.g., 175, Ta + 25, −200 to +1000), and status code (P, D, or D/F).

may lead to humidity sensing being part of future engine control systems and passenger compartment comfort control systems.

Temperature Sensors

Several different sensing techniques are used in production vehicles and during vehicle development to provide the needed temperature measurements. Table 24.1.8 lists these techniques and the temperature range typical of each approach. Table 24.1.9 lists common types of thermocouples and their operating characteristics.

Humidity Sensors

Techniques to measure humidity are listed in Table 24.1.10. Most of these are actually laboratory instruments. Three sensing techniques with potential for future vehicle use are capacitive, resistive, and oxidized porous silicon.

TABLE 24.1.9 Common Thermocouples and Application Factors

ISA code   Positive conductor   Negative conductor   Temperature range, °C*   Standard error limit, °C*   Seebeck coefficient, µV/°C
E*         Chromel              Constantan           0 to +316                ±2                          62
J*         Iron                 Constantan           0 to +277                ±2                          51
T*         Copper               Constantan           −59 to +93               ±1                          40
K*         Chromel              Alumel               0 to +277                ±2                          40
N*         Nicrosil             Nisil                0 to +277                ±2                          38
S*         Platinum†            Platinum             0 to +538                ±3                          7
R*         Platinum‡            Platinum             0 to +538                ±3                          7

*Other temperature ranges and error limits are available.
†10 percent rhodium. ‡13 percent rhodium.
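As a first-order illustration of how a thermocouple EMF maps to temperature using the Seebeck coefficients in Table 24.1.9 (a sketch only; practical systems use the standard polynomial tables and cold-junction compensation rather than this linear approximation):

def thermocouple_temp_c(emf_uv, seebeck_uv_per_c, cold_junction_c=0.0):
    # Linearized conversion: EMF divided by the Seebeck coefficient,
    # referenced to the cold-junction temperature.
    return cold_junction_c + emf_uv / seebeck_uv_per_c

print(thermocouple_temp_c(2040.0, 51.0))   # type J: ~40 degC above a 0 degC reference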


TABLE 24.1.10 Techniques for Measuring Humidity/Moisture

Principle                            Type of measurement
Gravimetric hygrometer               Instrument
Pressure-humidity generator          Instrument
Wet bulb/dry bulb (psychrometer)     Instrument
Hair element                         Instrument
Electric conductivity                Instrument
Dew cell                             Instrument
Chilled mirror                       Instrument
Karl Fischer titration               Instrument
Electrolytic                         Instrument
Lithium chloride                     Instrument
Capacitance hygropolymer             Production sensor
Bulk polymer (resistance)            Production sensor
Thin-film polymer (capacitance)      Production sensor
Gold/aluminum oxide                  Production sensor
Oxidized porous silicon (OPS)        Experimental sensor

EXHAUST GAS SENSORS*

*Weidenmann, H. M., Hotzel, G., Neumann, H., Riegel, J., Stanglmeier, F., and Weyl, H., “Exhaust Gas Sensors,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 6.1–6.25.

From the 1994 model year onward, on-board devices capable of monitoring the operation of all emission-relevant vehicle components have been required in the United States. Pollutant emission sensors measuring levels of CO, NOx, and HC downstream of the catalytic converter represent an ideal means of monitoring the performance of both the converter and the oxygen sensor. There also exist concepts that employ a second oxygen sensor behind the catalytic converter to detect aging of the converter and/or the lambda sensor; this concept is currently used in almost all On-Board Diagnosis II applications worldwide.

Lambda Sensors

Lambda = 1 Sensor: Nernst Type (ZrO2). The lambda sensor operates as a solid-electrolyte galvanic oxygen-concentration cell (Fig. 24.1.15). A ceramic element consisting of zirconium oxide and yttrium oxide is employed as a gas-impermeable solid electrolyte designed to separate the exhaust gas from the reference atmosphere.

Lambda = 1 Sensor: Semiconductor Type. Oxidic semiconductors such as TiO2 and SrTiO3 rapidly achieve equilibrium with the oxygen partial pressure of the surrounding gas phase at relatively low temperatures. Changes in the partial pressure of the adjacent oxygen produce variations in the oxygen-vacancy concentration of the material, thereby modifying its volume conductivity.

Lean A/F Sensor: Nernst Type. It is always possible to employ a potentiometric lambda = 1 sensor as a lean A/F sensor by using the flat part of the Nernst voltage curve to derive the values at lambda > 1.

Lean A/F Sensor: Limiting-Current Type. An external electrical voltage is applied to two electrodes on a heated ZrO2 electrolyte to pump O2 ions from the cathode to the anode. When a diffusion barrier impedes the flow of O2 molecules from the exhaust gas to the cathode,


FIGURE 24.1.15 Diagram illustrating design and operation of a lambda = 1 sensor.

the result is current saturation beyond a certain pumping-voltage threshold. The resulting limiting current is roughly proportional to the oxygen concentration of the exhaust gas.

Wide-Range A/F Sensor: Single-Cell. When the anode of a limiting-current sensor is exposed to reference air instead of to the exhaust gas, the total voltage at the probe is the sum of the effective pumping voltage and a superimposed Nernst voltage. In operation, holding the total voltage to, for example, 500 mV produces a positive pumping voltage in lean exhaust gases; the diffusion barrier limits the rate at which O2 is pumped from the cathode to the anode. At lambda = 1, the pumping voltage and, with it, the pumping current drop toward zero.

Wide-Range A/F Sensor: Dual-Cell. Skillful combination of a limiting-current sensor with a Nernst concentration cell on a single substrate produces a dual-cell, wide-range A/F sensor. An electronic circuit regulates the current applied to the pumping cell to maintain a constant gas composition in the measurement gap. If the exhaust gas is lean, the pumping cell drives oxygen out of the measurement gap; if the exhaust gas is rich, the flow direction is reversed and oxygen from the surrounding exhaust gas is pumped into the measurement gap.
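The output of the Nernst-type cells above follows the Nernst equation for an oxygen concentration cell; a numeric sketch (the 873-K element temperature and the partial-pressure values are illustrative):

import math

R = 8.314       # gas constant, J/(mol K)
F = 96485.0     # Faraday constant, C/mol

def nernst_voltage(p_o2_ref, p_o2_exh, temp_k):
    # Four electrons are transferred per O2 molecule, hence the 4F.
    return (R * temp_k) / (4.0 * F) * math.log(p_o2_ref / p_o2_exh)

# Air reference (0.21 bar) against rich exhaust (~1e-17 bar O2):
print(nernst_voltage(0.21, 1e-17, 873.0))   # ~0.7 V, the familiar rich-side plateau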

Other Exhaust Gas Components

Sensors capable of monitoring the levels of the regulated toxic exhaust substances (CO, NOx, and HC) would be desirable as elements of the on-board diagnosis systems specified by the California Air Resources Board and EURO III/IV, especially for gasoline direct-injection systems. To meet the more stringent emissions standards, selective exhaust gas sensors will have to be developed.

Mixed-Potential Sensors. If reduced catalytic activity prevents gas equilibrium from being achieved at the electrode of a galvanic ZrO2 cell, competing reactions can occur. These, in turn, prevent a state of reduction/oxidation equilibrium in the oxygen and lead to the formation of a mixed potential. Mixed-potential exhaust gas sensors are available for NOx, HC, CO, and NH3 to study new diagnostic concepts.


Dual Pumping Cell Gas Sensors. These sensors are based on a planar zirconia design similar to that of the wide-range A/F sensor and contain two chambers with gas-selective electrodes. The exhaust gas penetrates the first chamber through a first diffusion barrier, where a selective O2 electrode removes oxygen from the exhaust gas. In the second chamber, NOx is decomposed into N2 and O2, and the O2 is pumped out of the chamber. The associated pumping current is a measure of the NOx concentration in the exhaust gas.

Semiconductor Gas Sensors. On the surface of nonstoichiometric metal oxides such as SnO2, TiO2, InO3, and Fe2O3, oxygen is adsorbed and dissociated in air at high temperatures and is bonded to the crystal lattice. This leads to the formation of a thin depletion layer at the crystallite surface, with a resulting barrier in the potential curve. This phenomenon reduces surface conductivity and raises the intercrystalline resistance at the boundaries between crystallites, which is the major factor determining the total resistance of the polycrystalline metal oxide. Oxidation gases such as CO, H2, and CxHy, which react with the surface oxygen, increase the density of charge carriers in the boundary layer and reduce the potential barrier. Reduction gases such as NOx and SOx raise the potential barrier and, with it, the surface/intercrystalline resistance.

Catalytic Gas Sensors. The catalytic gas sensor is essentially a temperature sensor with a catalytically active surface. An exothermic reaction at the catalytically active surface causes the temperature of the sensor to rise; the increase in temperature is proportional to the concentration of an oxidizing gas in an excess-oxygen atmosphere.

SPEED AND ACCELERATION SENSORS*

Speed sensing can be divided into rotational and linear applications. Rotational speed sensing has two major application areas: engine-speed monitoring to enhance engine control and performance, and antilock braking and traction control systems for improved road handling and safety. Linear speed sensing can be used for ground-speed monitoring for vehicle control, obstacle detection, and crash avoidance. Acceleration sensors are used in air bag deployment, ride control, antilock brake, traction, and inertial navigation systems.

Speed-Sensing Devices

Variable-Reluctance Devices. These devices are essentially small ac generators with an output voltage proportional to speed, which limits them in applications where zero-speed sensing is required. Their output voltage is linear with frequency, and an A/D converter is needed to generate a digital signal compatible with the master control unit.

Hall-Effect Devices. A Hall-effect device can be used for zero-speed sensing (it can give an output when there is no rotation). Hall devices give a frequency output that is proportional to speed, making them compatible with microcontrollers.

Magnetoresistive Devices. The magnetoresistive effect is the property of a current-carrying ferromagnetic material to change its resistivity in the presence of an external magnetic field. The resistivity rises to a maximum when the directions of current and magnetic field coincide, and falls to a minimum when they are perpendicular to each other. These devices give an output frequency proportional to speed, making for ease of interfacing with an MCU.

Ultrasonic Devices. Ultrasonic devices can be used to measure distance and ground speed. They can also be used as proximity detectors.

*Dunn, W. C., “Speed and Acceleration Sensors,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 7.1–7.29.


To give direction and beam shape, the signals are transmitted and received via specially configured horns and orifices. To measure speed, the variation of distance with time can be measured and the velocity calculated. A more common method is to use the Doppler effect: a shift in the transmitted frequency, as seen by the receiver, caused by motion of the target (or, in this case, motion of the transmitter and receiver).

Optical and Radio Frequency Devices. Optical devices are still used for rotational speed sensing. They normally consist of light-emitting diodes (LEDs) with optical sensors. An optical sensor detects light from an LED through a series of slits cut into a rotating disc, so that the output from the sensor is a pulse train whose frequency is equal to the rpm of the disc multiplied by the number of slits. Radio frequency devices use gallium arsenide Gunn devices to obtain the power and high frequency (about 100 GHz) required in the transmitter.
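For the ultrasonic Doppler method of ground-speed measurement described above, the conversion from frequency shift to speed is direct (a sketch; the transducer frequency, beam angle, and sound speed are assumed values):

import math

def ground_speed_mps(f_shift_hz, f0_hz, beam_angle_deg, c_sound=343.0):
    # Two-way path gives the factor of 2; beam_angle_deg is measured
    # between the beam axis and the direction of travel.
    return f_shift_hz * c_sound / (2.0 * f0_hz * math.cos(math.radians(beam_angle_deg)))

print(ground_speed_mps(1165.0, 40_000.0, 60.0))   # ~10 m/s from a 40-kHz transducer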

Automotive Applications for Speed Sensing

There are several applications for speed sensing. First, it is necessary to monitor engine speed. This information is used for transmission control, engine control, cruise control, and possibly for a tachometer. Wheel-speed sensing is required in transmissions, cruise control, speedometers, antilock brake systems, traction control, variable-ratio power-steering assist, four-wheel steering, and possibly in inertial navigation and air bag deployment applications. Linear speed sensing is used to measure ground speed and could also be used in antilock brake systems, traction control, and inertial navigation.

Acceleration Sensing Devices

Acceleration sensors vary widely in their construction and operation. In applications such as crash sensors for air bag deployment, mechanical devices (simple mechanical switches) have been developed and are in use. Solid-state analog accelerometers have also been designed for air bag applications.

Mechanical Sensing Devices. Mechanical switches are simple make-break devices. Figure 24.1.16 shows the cross section of a Breed-type switch or sensor.

Piezoelectric Sensing Devices. Piezoelectric devices (Fig. 24.1.17) consist of a layer of piezoelectric material (such as quartz) sandwiched between a mounting plate and a seismic mass. Electric connections are made to both sides of the piezoelectric material.

Piezoresistive Sensing Devices. The property of some materials to change their resistivity when exposed to stress is called the piezoresistive effect. Piezoresistive sensing can be used with bulk micromachined accelerometers (Fig. 24.1.18).

FIGURE 24.1.16 Cross section of a mechanical sensor.


FIGURE 24.1.17 Cross section of a piezoelectric accelerometer.

Capacitive Sensing Devices. Differential capacitive sensing has a number of attractive features compared to other methods of sensing: easily implemented self-test, temperature insensitivity, and smaller size. Capacitive sensing has the advantages of dc and low-frequency operation and well-controlled damping. Open-loop signal-conditioning circuits amplify and convert the capacitance changes into a voltage; such a CMOS circuit using switched-capacitor techniques is shown in Fig. 24.1.19. A closed-loop system (Fig. 24.1.20) can be configured to give an analog or digital output.
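The open-loop readout described above reduces to one relation: the normalized difference of the two sense capacitances is roughly proportional to proof-mass deflection and hence to acceleration. A minimal hypothetical sketch follows; the capacitance values and scale factor are invented, not taken from the handbook.

/* Sketch of a differential capacitive accelerometer readout.
   Values and scale factor are invented for illustration. */
#include <stdio.h>

/* (C1 - C2)/(C1 + C2) is roughly proportional to acceleration. */
double accel_from_caps(double c1_pf, double c2_pf, double g_per_unit_ratio)
{
    double ratio = (c1_pf - c2_pf) / (c1_pf + c2_pf);
    return g_per_unit_ratio * ratio;    /* acceleration in g */
}

int main(void)
{
    /* nominal 1-pF plates; assume a ratio of 0.01 corresponds to 50 g */
    double a = accel_from_caps(1.01, 0.99, 50.0 / 0.01);
    printf("a = %.1f g\n", a);          /* prints 50.0 g */
    return 0;
}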

Automotive Applications for Accelerometers

Accelerometers have a wide variety of uses in the automobile. The initial application is as a crash sensor for air bag deployment. An extension of this application is the use of accelerometers for the detection of side impact or rollover.

FIGURE 24.1.18 Bulk micromachined accelerometer.


FIGURE 24.1.19 Capacitive sensing integrator circuit.

Low g sensors are being developed for ride control, antilock braking, traction, and inertial navigation requirements.

New Sensing Devices

New cost-effective sensors are continually being developed. For rotational speed sensing, a number of new devices are being investigated to detect magnetic fields: flux-gate, Wiegand-effect, magnetic transistor, and magnetic diode devices. For linear speed sensing, ultrasonics, infrared, lasers, and microwaves (radar) can be used to detect objects behind vehicles and in blind areas. Recent developments in solid-state technology have made possible very small cost-effective devices to sense angular rotation.

FIGURE 24.1.20 Block diagram of analog closed-loop control.


FIGURE 24.1.21 Solid-state gyroscope.

The implementation of one such gyroscopic device is shown in Fig. 24.1.21.

ENGINE KNOCK SENSORS*

Knock is a phenomenon characterized by undesirable structural vibration and noise generation and is peculiar to spark-ignited engines. The terms ping (light, barely observable knock) and predetonation (knock caused by ignition of the charge slightly before the full ignition of the flame front by the spark plug) are also commonly used in the industry. An attempt to measure the cause of the phenomenon leads one to the difficult problem of observing pressure waves in the cylinder. In fact, over the years these difficulties led the industry to devise an experimental comparison technique that measured the octane rating of the fuel, not of the engine.

Technologies for Sensing Knock

A number of different technologies have been selected for measuring knock in real time on board, and sensors have been developed for this purpose. These sensors measure the magnitude of a consequential parameter driven by the knock, rather than the knock phenomenon itself.

*Wolber, W. G., “Engine Knock Sensors,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 8.1–8.10.


FIGURE 24.1.22 An exploded view of a jerk sensor.

The overall effectiveness of the control is determined not only by the intrinsic performance and stability of the sensor and control, but also by how robust the chain of causality is between the knock phenomenon and the parameter measured. Knock can be controlled either by retarding spark timing or by opening a manifold boost wastegate valve.

Jerk Sensor. The first turbocharged engine knock control used a jerk sensor, a productionized version of the kind of sensor used in the laboratory (Fig. 24.1.22). When the sensor is fully assembled, the spider spring is preloaded, and all parts of the sensor except the coil cover are in compression. The vibrations picked up and transmitted from the engine block through the mounting stud appear in the nickel alloy rods. The waves present in the rods linearly modulate the magnetic reluctance of the magnetic circuit. The many-turn coil wound around the magnetostrictive rods generates a voltage proportional to the rate of change of the magnetic reluctance of the rods. Since the vibrations picked up are already due to accelerations from the knock reverberations transmitted through the engine block, the voltage from the coil represents the third time derivative of displacement, or jerk.

Accelerometer Sensors. The jerk sensor has too many parts to be a low-cost solution to measuring knock on board. A more economical approach is to use the second time derivative of displacement—acceleration. Piezoelectric, piezoceramic, and silicon accelerometers can be used.

Other Sensors. An instantaneous cylinder pressure sensor permits the extraction of the pressure reverberation signal, which is the direct cause of knock, but it has not been implemented in on-board knock control systems for several reasons:

• Either the same cylinder must always be the one to experience knock first and most severely, or one must have a sensor on every cylinder.
• The cylinder pressure wave is complex, and knock is only one of many signature elements present. Deriving a unique knock signal requires considerable signal processing in real time.
• While cylinder pressure sensors suitable for test purposes exist, a durable, low-cost, mass-producible on-board pressure sensor is not yet available.

Piezoceramic Accelerometers. These accelerometers (Fig. 24.1.23) lend themselves to low-cost mass production. As a result, they have become essentially the knock sensor of choice for the automobile industry.


FIGURE 24.1.23 A piezoceramic accelerometer knock sensor.

ENGINE TORQUE SENSORS*

Torque can be defined as the moment produced by the engine crankshaft tending to cause the output driveline to turn and thereby deliver power to the load.

Direct Torque Sensors

Magnetic Vector Sensors. A strain-gaged torsional Hooke’s law sensor has been used in off-board torque measurements. A more practical approach to an on-board torque sensor is a noncontacting design called a magnetic vector sensor. It operates on the principle that the magnetic domains in a ferromagnetic shaft delivering torque are randomly distributed when no torque is being delivered, but that each domain, on average, is slightly rotated in the tangential direction when torque is delivered by the shaft and twists it. If an ac-driven coil is placed near the shaft and surrounded by four sensing coils arranged mechanically and electrically as a bridge, the amplitude of the bridge offset is proportional to the tangential component of the magnetic vector, and therefore to twist and torque.

Optical Twist Measurement. Research has been reported on a sensor that changes the duty factor of light pulses sensed after passing through sets of slits at both ends of a shaft. The principle of its operation is shown in Fig. 24.1.24.

*Wolber, W. G., “Engine Torque Sensors,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 9.1–9.14.


FIGURE 24.1.24 Optical torque meter. (Courtesy of The Bendix Corp.)
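Whether the twist is read optically, magnetically, or (as below) capacitively, torque follows from the torsional form of Hooke’s law, T = (G J / L) θ. A minimal sketch with assumed shaft properties; the diameter, shear modulus, and gauge length are example values, not handbook data.

/* Sketch: torque inferred from measured twist over a gauge length L,
   using T = (G*J/L)*theta for a solid round shaft. Shaft properties
   are example values, not handbook data. */
#include <stdio.h>

double torque_from_twist(double theta_rad, double G_pa, double dia_m, double L_m)
{
    const double pi = 3.14159265358979;
    double J = pi * dia_m * dia_m * dia_m * dia_m / 32.0;  /* polar moment */
    return G_pa * J / L_m * theta_rad;
}

int main(void)
{
    /* 25-mm steel shaft (G ~ 80 GPa), 0.2-m gauge length, 0.5-deg twist */
    double theta = 0.5 * 3.14159265358979 / 180.0;
    printf("T = %.0f N*m\n", torque_from_twist(theta, 80e9, 0.025, 0.2));
    return 0;
}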

Capacitive Twist Sensor. An electrode pattern can be made using interdigitated electrodes spaced one or two degrees apart on two facing discs. One of the discs is stationary; the other rotates with the crankshaft. Two such pairs of electrodes can be operated with phase-detection measurement to provide a virtually instantaneous signal proportional to the twist of the shaft. The rotating halves of the electrode pairs are attached to the ends of the Hooke’s law torsional spring.

Inferred Torque Measurement

Instantaneous Cylinder Pressure Sensors. Much work continues on development of a mass-producible on-board cylinder pressure sensor. The signals from cylinder pressure sensors need considerable real-time data processing to produce inferred “torque” signals.

Digital Period Analysis (DPA). When an engine is run at low speed and heavy load, the instantaneous angular velocity of its output shaft on the engine side of the flywheel varies at the fundamental frequency of the cylinders, since the compression stroke of each cylinder absorbs torque and the power stroke adds a larger amount. The signal-to-noise ratio of the measurement of instantaneous angular velocity (or rather of its reciprocal, instantaneous period) degrades with increasing engine speed and lighter load, but it is a useful way to infer torque-like measures of engine performance.
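A rough sketch of the DPA computation: instantaneous angular velocity is the tooth pitch divided by the measured tooth-to-tooth period. The tooth count and timestamps below are invented for illustration.

/* Sketch of digital period analysis (DPA): infer instantaneous angular
   velocity from the time between flywheel teeth. Hypothetical values. */
#include <stdio.h>

#define TEETH 60                        /* assumed teeth per revolution */

/* Angular velocity (rad/s) from the measured period of one tooth pitch. */
double omega_from_tooth_period(double period_s)
{
    const double pitch_rad = 2.0 * 3.14159265358979 / TEETH;
    return pitch_rad / period_s;
}

int main(void)
{
    /* Simulated tooth timestamps (s): slowing during compression,
       speeding up again during the power stroke. */
    double t[] = {0.0, 0.00100, 0.00205, 0.00315, 0.00410, 0.00500};
    for (int i = 1; i < 6; i++) {
        double omega = omega_from_tooth_period(t[i] - t[i - 1]);
        printf("tooth %d: omega = %.1f rad/s\n", i, omega);
    }
    return 0;
}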

ACTUATORS*

Actuators in automobiles respond to position commands from the electronic control unit to regulate energy, mass, and volume flows. Basic actuator elements are shown in Fig. 24.1.25. Either the control unit or the actuator itself will feature an integral electronic output amplifier.

FIGURE 24.1.25 Basic actuator elements.

*Müller, K., “Actuators,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 10.1–10.35.


FIGURE 24.1.26 Flat-armature solenoid featuring field excitation (a) via coil, (b) via permanent magnet.

Types of Electromechanical Actuators

Magnetic Actuators. These actuators depend for their operation on the forces exerted on ferromagnetic parts in a coil-generated magnetic field when current passes through the coil. With a flat-armature solenoid (Fig. 24.1.26), a particular solenoid force is specified for each technical application. The pole face area, the magnetic circuit, and the coil are then determined for this force.

Torque Motors. The torque motor consists of a stator and an armature—both made of soft magnetic material—and a permanent magnet. The pivoting armature can be equipped with either one or two coils.

Electromagnetic Step Motors. Step motors are drive elements in which a special design operates in conjunction with pulse-shaped control signals to carry out rotary or linear stepped movements. The step motor is capable of transforming digital control signals directly into discontinuous rotary motion. In principle, the step motor is essentially a combination of dc solenoids. Depending on the configuration of the magnetic circuit, a distinction is made between three types of step motors: the variable-reluctance step motor (neutral magnetic circuit), heteropolar units (polarized magnetic circuit), and hybrid devices.

Moving Coils. The moving coil is an electrodynamic device (force is applied to a current-carrying conductor in a magnetic field). A spring-mounted coil is located in the ring gap of a magnetic circuit featuring a permanent magnet. When current flows through the coil, a force is exerted on it. The direction of this force is determined by the flow direction of the current itself.

Electrical Actuators

Piezoelectric Actuators. When mechanical compression and tension are brought to bear on a piezoelectric body, they produce an asymmetrical displacement in the crystal structure and in the charge centers of the affected crystal ions. The result is charge separation. An electric voltage proportional to the mechanical pressure can be measured at the metallic electrodes. If, conversely, an electric voltage is applied to the electrodes of this same body, it will respond with a change in shape; the volume remains constant. This piezoelectric effect can be exploited to produce actuators.

Electrostatic Actuators. Microactuator technology makes it possible to use small electrostatic field forces in mechanical drive devices. Such actuators combine high switching speeds with much smaller energy loss than that found in electromagnetic actuators. The disadvantages are the force-travel limitations and the high operating voltages.


Electrorheological Fluids. The electrorheological effect is based on polarization processes in minute particles suspended in a fluid medium. These particles modify the fluid’s viscosity according to their orientation in the electric field. This effect is exploited in controlled transfer and damping elements.

Thermal Actuators

Temperature-Sensitive Bimetallic Elements. The temperature-sensitive bimetallic element is composed of at least two bonded components of differing thermal-expansion coefficients. When heat is applied, the components expand at different rates, causing the bimetallic element to bend. When electrically generated heat is applied, these devices become actuators.

Memory Alloys. Memory alloys are metallic materials that exhibit a substantial degree of “shape memory.” If the element is heated beyond the transformation temperature (from martensitic to austenitic), it returns to its former shape. If the component is then reshaped after cooling, the entire process can be repeated.

Automotive Actuators

Actuators for Braking Intervention. Braking pressure is normally regulated via 2/2 solenoid valves, that is, valves with two ports and two switch positions. When no current is applied, the inlet valve remains open and the outlet valve is closed, allowing unrestricted wheel braking.

Electronic Throttle Control (ETC) with Throttle-Aperture Adjustment. Either of two standard methods can be employed for throttle regulation. ETC systems feature an actuator (servo motor) mounted directly at the throttle valve. On systems incorporating a traction-control actuator, the servo motor is installed in the throttle cable. Another approach to engine intervention is embodied in a design in which the linkage spring is integrated within the throttle body.

Fuel Injection for Spark-Ignition Engines. Electronically controlled fuel injection systems meter fuel with the assistance of electromagnetic injectors. The injector’s opening time determines the discharge of the correct amount of fuel.

Fuel Injection for Diesel Engines. Distributor-type fuel injection pumps contain rotary solenoid actuators for injection quantity, and two-position valves for engine operation and shutoff.

Passenger-Safety Actuators. Pyrotechnical actuators are used for passenger-restraint systems such as the air bag and the automatic seat belt tensioner. When an accident occurs, the actuators inflate the air bag (or tension the seat belt) at precisely the right instant.

Electronic Transmission Control. Continuously operating actuators are used to modulate pressure, while switching actuators function as supply and discharge valves for shift-point control. Types of actuators used include on/off solenoid valves, variable-pressure switches, and pulse-width-modulated solenoid valves.

Headlight Vertical Aim Control. Headlight-adjustment actuators are dc or step motors producing a rotary motion that gear-drive units then convert to linear movement.

Future Actuators

The motivation to develop new actuators is created by the potential advantages of new manufacturing and driving techniques in combination with new materials. Representative new fields of innovation include micromechanical valves and positive-engagement friction drives (ultrasonic motors).


CHAPTER 24.2

AUTOMOTIVE ON-BOARD SYSTEMS

Ronald K. Jurgen

INTRODUCTION

In the previous chapter in this section, the significance of sensors at the input of electronic automotive systems and actuators at the output of such systems was stressed. Equally important, of course, are the automotive microcontrollers that accept the sensor signals, process them, and then send command signals to the actuators. The combination of sensors, microcontrollers, and actuators is what makes possible the myriad of automotive control systems now in use in production cars. This chapter begins with a brief discussion of microcontrollers and then proceeds with descriptions of most major automotive control systems.

MICROCONTROLLERS*

A microcontroller can be found at the heart of almost any automotive electronic control module or electronic control unit (ECU) in production today. Automotive systems such as antilock braking control (ABS), engine control, navigation, and vehicle dynamics all incorporate at least one microcontroller within their ECU to perform necessary control functions. A microcontroller can essentially be thought of as a single-chip computer system and is often referred to as a single-chip microcomputer. It detects and processes input signals, and responds by asserting output signals to the rest of the ECU. Fabricated on a highly integrated, single piece of silicon are all of the features necessary to perform embedded control functions.

Microcontrollers are fabricated by many manufacturers and are offered in just about any imaginable mix of memory, input/output (I/O), and peripheral sets. The user customizes the operation of the microcontroller by programming it with his or her own unique program. The program configures the microcontroller to detect external events, manipulate the collected data, and respond with appropriate output. The user’s program is commonly referred to as code and typically resides on-chip in either read-only memory (ROM) or erasable programmable read-only memory (EPROM). In some cases where an excessive amount of code space is required, memory may exist off-chip on a separate piece of silicon. After power-up, a microcontroller executes the user’s code and performs the desired embedded control function.

*Boehmer, D. S., “Automotive Microcontrollers,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 11.3–11.55.


FIGURE 24.2.1 Microcontroller block diagram.

Microcontrollers differ from microprocessors in several ways. A microcontroller can be thought of as a complete microcomputer on a chip that integrates a central processing unit (CPU) with memory and various peripherals such as analog-to-digital (A/D) converters, serial communication units, high-speed input and output units, timer/counter units, and standard low-speed I/O ports. Microcontrollers are designed to be embedded within event-driven control applications and generally have all necessary peripherals integrated onto the same piece of silicon. Microprocessors, on the other hand, typically require external peripheral devices to perform their intended function and are not suited to single-chip designs. Microprocessors basically consist of a CPU with register arrays and interrupt handlers. Peripherals such as A/D converters are rarely integrated onto microprocessor silicon. Microprocessors are designed to process large quantities of data and have the capability to handle large amounts of external memory.

Choosing a microcontroller for an application is a process that takes careful investigation and thought. Memory size, frequency, bus size, I/O requirements, and temperature range are all basic requirements that must be considered. The microcontroller family must possess the performance capability necessary to accomplish the intended task, and it should provide a memory, I/O, and frequency growth path that allows easy upgradability to meet market demands. Additionally, the microcontroller must meet the application’s thermal requirements in order to guarantee functionality over the intended operating temperature range. A typical block diagram of a microcontroller is shown in Fig. 24.2.1.

ENGINE CONTROL*

An electronic engine control system consists of sensing devices that continuously measure the operating conditions of the engine, an ECU that evaluates the sensor inputs using data tables and calculations and determines the output to the actuating devices, and actuating devices that are commanded by the ECU to perform an action in response to the sensor inputs.

*Hirschlieb, G. C., Schiller, G., and Stottler, S., “Engine Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 12.1–12.36.


The motive for using an electronic engine control system is to provide the needed accuracy and adaptability in order to minimize exhaust emissions and fuel consumption, provide optimum driveability for all operating conditions, minimize evaporative emissions, and provide system diagnosis when malfunctions occur.

In order for the control system to meet these objectives, considerable development time is required for each engine and vehicle application. A substantial amount of development must occur with an engine installed on an engine dynamometer under controlled conditions. Information gathered is used to develop the ECU data tables. A considerable amount of development effort is also required with the engine installed in the vehicle. Final determination of the data tables occurs during vehicle testing.

Spark Ignition (SI) Engines

Fuel Delivery Systems. Fuel management in the spark ignition engine consists of metering the fuel, formation of the air/fuel mixture, transportation of the air/fuel mixture, and distribution of the air/fuel mixture. The driver operates the throttle valve, which determines the quantity of air inducted by the engine. The fuel delivery system must provide the proper quantity of fuel to create a combustible mixture in the engine cylinder. In general, two fuel delivery system configurations exist: single-point and multipoint (Fig. 24.2.2).

For single-point systems such as carburetors or single-point fuel injection (Fig. 24.2.3), the fuel is metered in the vicinity of the throttle valve. Mixture formation occurs in the intake manifold. Some of the fuel droplets evaporate to form fuel vapor (desirable) while others condense to form a film on the intake manifold walls (undesirable). Mixture transport and distribution is a function of intake manifold design. Uniform distribution under all operating conditions is difficult to achieve in a single-point system.

For multipoint systems, the fuel is injected near the intake valve. Mixture formation is supplemented by the evaporation of the fuel on the back of the hot intake valve. Mixture transport and distribution occurs only in the vicinity of the intake valve.

FIGURE 24.2.2 Air-fuel mixture preparation: right, single-point fuel injection; left, multipoint fuel injection with fuel (1), air (2), throttle valve (3), intake manifold (4), injector(s) (5), and engine (6).


FIGURE 24.2.3 Single-point injection unit: pressure regulator (1), injector (2), fuel return (3), stepper motor for idle speed control (4), to intake manifold (5), throttle valve (6), and fuel inlet (7).

The influence of the intake manifold design on uniform mixture distribution is minimized. Since mixture transport and distribution are not an issue, the intake manifold design can be optimized for air flow.

Ignition Systems. The general configuration of an ignition system consists of the following components: energy storage device, ignition timing mechanism, ignition triggering mechanism, spark distribution system, and spark plugs and high-tension wires. Table 24.2.1 summarizes the various ignition systems used on SI engines.

Compression Ignition Engines

Electronic engine controls are now being used on compression ignition (diesel) engines. These controls offer greater precision and control of fuel injection quantity and timing, engine speed, exhaust gas recirculation (EGR), turbocharger boost pressure, and auxiliary starting devices. The following inputs are used to provide the ECU with information on current engine operating conditions: engine speed; accelerator position; engine coolant, fuel, and inlet air temperatures; turbocharger boost pressure; vehicle speed; control rack or control collar position (for control of fuel quantity); and atmospheric pressure.

TABLE 24.2.1 Overview of Various Ignition Systems

                                              Ignition designation
Ignition function           Coil system   Transistorized   Capacitor        Electronic system   Electronic
                                          coil system      discharge        with distributor    distributorless
                                                           system                               system
Ignition triggering         Mechanical    Electronic       Electronic       Electronic          Electronic
Ignition timing             Mechanical    Mechanical       Electronic       Electronic          Electronic
High-voltage generation     Inductive     Inductive        Capacitive       Inductive           Inductive
Spark distribution to       Mechanical    Mechanical       Mechanical       Mechanical          Electronic
  appropriate cylinder


FIGURE 24.2.4 Electronic engine control system for an in-line injection pump: control rack (1), actuator (2), camshaft (3), engine speed sensor (4), ECU (5), input/output: redundant fuel shutoff (a), boost pressure (b), vehicle speed (c), temperature—water, air, fuel (d), intervention in injection fuel quantity (e), speed (f), control rack position (g), solenoid position (h), fuel consumption and engine speed display (i), system diagnosis information (k), accelerator position (l), preset speed (m), and clutch, brakes, engine brake (n).

Figure 24.2.4 is a schematic of an electronic engine control system on an in-line diesel fuel injection pump application.

Engine Control Modes

Engine Crank and Start. During engine cranking, the goal is to get the engine started with the minimal amount of delay. To accomplish this, fuel must be delivered that meets the requirements for starting for any combination of engine coolant and ambient temperatures. For a cold engine, an enriched mixture (extra fuel) is required due to poor fuel vaporization and “wall wetting,” which decreases the amount of usable fuel. Wall wetting is the condensation of some of the vaporized fuel on the cold metal surfaces in the intake port and combustion chamber. It is critical that the fuel not wet the spark plugs, which would reduce the effectiveness of the spark plugs and prevent them from firing. Should plug wetting occur, it may be impossible to start the engine.

Engine Warm-Up. During the warm-up phase, there are three conflicting objectives: keep the engine operating smoothly (i.e., no stalls or driveability problems), increase exhaust temperature to quickly achieve operational temperature for the catalyst and lambda sensor so that closed-loop fuel control can begin operating, and keep exhaust emissions and fuel consumption to a minimum. The best method for achieving these objectives is very dependent on the specific engine application.

Transient Compensation. During transitions such as acceleration or deceleration, the objective of the engine control system is to provide a smooth transition from one engine operating condition to another (i.e., no hesitations, stalls, bumps, or other objectionable driveability concerns), and to keep exhaust emissions and fuel consumption to a minimum.


Full Load. Under steady-state full-load conditions, such as climbing a grade, it is desirable to control the air/fuel mixture and ignition timing to obtain maximum power and also to limit engine and exhaust temperatures.

Idle Speed Control. The objective of the engine control system during idle is to provide a balance between the engine torque produced and the changing engine loads, thus achieving a constant idle speed even with various load changes due to accessories (i.e., air conditioning, power steering, and electrical loads) being turned on and off and during engagement of the automatic transmission. In addition, the idle speed control must be able to compensate for long-term changes in engine load, such as the reduction in engine friction that occurs with engine break-in. The idle speed control must also provide the lowest idle speed that allows smooth running, to achieve the lowest exhaust emissions and fuel consumption (up to 30 percent of a vehicle’s fuel consumption in city driving occurs during idling). To control the idle speed, the ECU uses inputs from the throttle position sensor, air conditioning, automatic transmission, power steering, charging system, engine RPM, and vehicle speed.
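The idle balance described above can be approximated by a simple PI law that trims an idle-air command from the speed error. The sketch below uses assumed gains, limits, and a one-line stand-in for the engine, none of which come from the handbook.

/* Minimal PI idle-speed loop sketch; gains, limits, and the crude plant
   model are illustrative assumptions, not the handbook's design. */
#include <stdio.h>

typedef struct { double kp, ki, integral; } pi_t;

/* One control step: returns an idle-air actuator command in percent. */
double idle_pi_step(pi_t *c, double target_rpm, double actual_rpm, double dt)
{
    double err = target_rpm - actual_rpm;
    c->integral += err * dt;
    double u = c->kp * err + c->ki * c->integral;
    if (u < 0.0)   u = 0.0;             /* clamp actuator command */
    if (u > 100.0) u = 100.0;
    return u;
}

int main(void)
{
    pi_t ctl = {0.2, 0.05, 0.0};
    double rpm = 650.0;                 /* sagging after an A/C load step */
    for (int k = 0; k < 5; k++) {
        double u = idle_pi_step(&ctl, 750.0, rpm, 0.02);
        rpm += 0.8 * u * 0.02 * 60.0;   /* crude plant: more air -> rpm rises */
        printf("u = %5.1f %%  rpm = %6.1f\n", u, rpm);
    }
    return 0;
}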

TRANSMISSION CONTROL*

Since the introduction of electronic transmission control units in the early 1980s, the acceptance of the automatic transmission (AT) has risen sharply. The market for ATs is divided into stepped and continuously variable transmissions (CVTs). Both types give the driver many advantages. In stepped transmissions, shift smoothness can be optimized by the reduction of engine torque during the gear shift, combined with the correctly matched oil pressure for the friction elements (clutches, brake bands). The reduction of shift shocks to a very low or even unnoticeable level has allowed the design of five-speed ATs, in which a slightly higher number of gear shifts occur. With the CVT, one of the biggest obstacles to the potential reduction in fuel consumption from operating the engine at its optimal point is the power loss of the transmission’s oil pump. Only with electronic control is it possible to achieve the required efficiency by matching the oil mass flow and oil pressure for the pulleys to the actual working conditions. To guarantee an overall economic solution for an electronically controlled transmission, either stepped or CVT, the availability of precision electrohydraulic actuators is imperative.

System Components

Transmission. The greatest share of electronically controlled transmissions currently on the market consists of four- or five-speed units with a torque converter lockup clutch, commanded by the control unit. In a conventional, purely hydraulic AT, the gear shifts are carried out by mechanical and hydraulic components. These are controlled by a centrifugal governor that detects the vehicle speed, and by a wire cable connected to the throttle plate lever. With electronic shift point control, on the other hand, an electronic control unit detects and controls the relevant components.

Present electronically controlled ATs usually have an electronically commanded torque converter clutch, which can lock up the torque converter between the engine output and the transmission input. The torque converter clutch is activated under certain driving conditions by a solenoid controlled by the electronic TCU. Locking up the torque converter eliminates the slip of the converter, and the efficiency of the transmission system is increased. This results in an even lower fuel consumption for cars equipped with ATs.

*Neuffer, K., and Engelsdorf, K., “Transmission Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 13.1–13.23.


FIGURE 24.2.5 Overview of hardware parts.

Electronic Control Unit. Another important component in electronic transmission control is the ECU, which is designed according to the requirements of the transmission and the vehicle environment. The ECU can be divided into two main parts: the hardware and the corresponding software.

The hardware consists of the housing, the plug, the carrier for the electronic devices, and the devices themselves. The housing, according to the requirements, is available as an unsealed design for applications inside the passenger compartment or within the luggage compartment. Sealed variants are also available for mounting inside the engine compartment or at the bulkhead. An overview of hardware parts is shown in Fig. 24.2.5.

The software within the electronic transmission control system is gaining importance because of the increasing number of functions, which, in turn, requires increasing software volume. The software for the control unit can be divided into two parts: the program and the data. The program structure is defined by the functions. The data are specific to the relevant program parts and have to be fixed during the calibration stage. The most difficult software requirements result from the real-time conditions imposed by the transmission design. This is also the main criterion for the selection of the microcontroller (Fig. 24.2.6).


FIGURE 24.2.6 Software structure overview.

Actuators. Electrohydraulic actuators are important components of electronic transmission control systems. Continuously operating actuators are used to modulate pressure, while switching actuators function as supply and discharge valves for shift-point control.

System Functions

Functions can be designated as system functions if the individual components of the total electronic transmission control system cooperate efficiently to provide a desired behavior of the transmission and the vehicle.

Basic Functions. The basic functions of the transmission control are the shift point control, the lockup control, engine torque control during shifting, related safety functions, and diagnostic functions for vehicle service. The pressure control in transmission systems that allow electrical adjustment of the pressure during and outside shifting can also be considered a basic function. Figure 24.2.7 shows the necessary inputs and outputs as well as the block diagram of an electronic TCU suitable for the basic functions.

Shift Point Control. The basic shift point control uses shift maps that are defined as data in the unit memory. These shift maps are selectable over a wide range. The shift points are limited, on the one hand, by the highest admissible engine speed for each application and, on the other hand, by the lowest engine speed that is practical for driving comfort and noise emission. The inputs of the shift point determination are the throttle position, the accelerator pedal position, and the vehicle speed (determined by the transmission output speed).
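A shift map of the kind described can be stored as per-gear vehicle-speed thresholds indexed by throttle position. The sketch below interpolates such a map and adds a fixed hysteresis to suppress hunting; all calibration numbers are invented for illustration.

/* Sketch of a shift-map lookup: upshift vehicle-speed thresholds per gear
   as a function of throttle position. Calibration values are invented. */
#include <stdio.h>

#define GEARS 4

/* Upshift thresholds (km/h) at 0, 50, and 100 percent throttle, gears 1..3. */
static const double upshift[GEARS - 1][3] = {
    { 15.0,  25.0,  45.0 },   /* 1 -> 2 */
    { 30.0,  50.0,  85.0 },   /* 2 -> 3 */
    { 50.0,  80.0, 130.0 },   /* 3 -> 4 */
};

/* Linear interpolation over the three throttle breakpoints. */
static double threshold(int gear, double throttle_pct)
{
    const double *row = upshift[gear - 1];
    if (throttle_pct <= 50.0)
        return row[0] + (row[1] - row[0]) * throttle_pct / 50.0;
    return row[1] + (row[2] - row[1]) * (throttle_pct - 50.0) / 50.0;
}

/* Returns the commanded gear; an 8-km/h hysteresis suppresses hunting. */
int select_gear(int gear, double speed_kmh, double throttle_pct)
{
    if (gear < GEARS && speed_kmh > threshold(gear, throttle_pct))
        return gear + 1;
    if (gear > 1 && speed_kmh < threshold(gear - 1, throttle_pct) - 8.0)
        return gear - 1;
    return gear;
}

int main(void)
{
    printf("%d\n", select_gear(2, 60.0, 40.0));  /* upshift to 3 at light throttle */
    printf("%d\n", select_gear(3, 60.0, 90.0));  /* kickdown to 2 at heavy throttle */
    return 0;
}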


FIGURE 24.2.7 Structure of a basic transmission electronic control unit.

Lockup Control/Torque Converter Clutch. The torque converter clutch connects both functional components of the hydraulic converter, the pump and the turbine. The lockup of the clutch reduces the power losses coming from the torque converter slip. To increase the efficiency of the lockup, it is necessary to close the clutch as often as possible, but the activation of the lockup is a compromise between low fuel consumption and high driving comfort.

Engine Torque Control During Shifting. The engine torque control requires an interface to an electronic engine management system. The target of the engine torque control, torque reduction during shifting, is to support the synchronization of the transmission and to prevent shift shocks.

Pressure Control. The timing and absolute values of the pressure, which is responsible for the torque translation of the friction elements, are, aside from the engine torque reduction, the most important influences on shift comfort.


The electronic TCU offers additional possibilities for better function than a conventional hydraulic system. The pressure values during and outside shifting can be calculated by different algorithms or can be determined by characteristic maps. The inputs for a pressure calculation are engine torque, transmission input speed, turbine torque, throttle position, and so on.

Safety and Diagnostic Functions. The functions usually known as diagnostic functions of the electronic TCU can be divided into real safety functions, which prevent critical driving conditions, and diagnostic functions, which increase the availability of the car and improve failure detection for servicing.

Improvement of Shift Control

Shift Point Control. This basic function can be improved significantly by adding a software function, the so-called adaptive shift point control. This function requires only signals that are already available in an electronic TCU with basic functions. The adaptive shift point control is able to prevent an often-criticized attribute, the tendency toward shift hunting, especially when hill climbing and under heavy load conditions.

Lockup Control. Some additional functions can considerably improve the shift comfort of the lockup. In a first step, it is possible to replace the on/off control of the lockup actuator by a pulse control during opening and closing. This can be achieved with conventional hardware by a software modification alone. In a further step, the on/off solenoid is replaced by a pressure regulator or a PWM solenoid. By coordinating intelligent control strategies and corresponding output stages within the electronic TCU, a considerable improvement in the shift behavior of the lockup results.

Engine Torque Reduction During Gear Shifting. With an improved interface to the engine management system, it is possible to extend the engine torque reduction function. It is necessary to use a PWM signal with related fixed values or a bus interface. The engine torque reduction is controlled directly by the TCU.

Pressure Control. The pressure control can be improved in a similar way as the shift point control, with an adaptive software strategy. The required inputs for the adaptive pressure control are calculated from available signals in the transmission control. The main reasons for implementing adaptive pressure control are the variations in the attributes of transmission components such as clutch surfaces and oil quality, as well as the changing engine output torque over the lifetime of the car.

Adaptation to Driver’s Behavior and Traffic Situations

In certain driving conditions, some disadvantages of the conventional AT can be prevented by using self-learning strategies. The intention of the self-learning functions is to provide the shift characteristic appropriate to the driver under all driving conditions. Additionally, the behavior of the car under special conditions can be improved by suitable functions. The core of the adaptive strategies is driver’s-style detection, which is accomplished by monitoring the accelerator pedal movements.

Future Developments

Independent of the type of automatic transmission, the shift or ratio control of the future will be part of a coordinated powertrain management, which, in turn, will be included in an overall control architecture encompassing all electronic systems of a vehicle. In the coming years, development work on transmission control will concentrate on improving production costs, reliability, and size and weight, which are relevant to the product’s success in the market. A fundamental step toward these targets is higher integration of the components of the control system (i.e., combining mechanical, hydraulic, and electronic elements in mechatronic modules).


FIGURE 24.2.8 Cruise control system.

CRUISE CONTROL*

A vehicle speed control system can range from a simple throttle-latching device to a sophisticated digital controller that constantly maintains a set speed under varying driving conditions. The next generation of electronic speed control systems will probably still use a separate module (black box), the same as present-day systems, but will share data from the engine, ABS, and transmission control systems. Futuristic cruise control systems that include radar sensors to measure the rate of closure to other vehicles and adjust the speed to maintain a constant distance are possible but need significant cost reductions for widespread private vehicle use.

The objective of an automatic vehicle cruise control is to sustain a steady speed under varying road conditions, thus allowing the vehicle operator to relax from constant foot throttle manipulation. In some cases, the cruise control system may actually improve the vehicle’s fuel efficiency by limiting throttle excursions to small steps. By using the power and speed of a microcontroller device and fuzzy logic software design, an excellent cruise control system can be designed.

The cruise control system is a closed-loop speed control, as shown in Fig. 24.2.8. The key input signals are the driver’s speed setpoint and the vehicle’s actual speed. Other important inputs are the faster-accel/slower-coast driver adjustments, resume, on/off, brake switch, and engine control messages. The key output signals are the throttle control servo actuator values. Additional output signals include cruise ON and service indicators, plus messages to the engine and/or transmission control system and possibly data for diagnostics. An ideal cruise system would include the following specifications:

• Speed performance: ±0.5 mi/h control at less than 5 percent grade, and ±1 mi/h control (or vehicle limit) over 5 percent grade.
• Reliability: Circuit designed to withstand overvoltage transients and reverse voltages, with power dissipation of components kept to a minimum.
• Application options: By changing EEPROM via a simple serial data interface or over the multiplexing network, the cruise software can be upgraded and optimized for specific vehicle types. These provisions allow for various sensors, servos, and speed ranges.

*Valentine, R., “Cruise Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 14.1–14.10.


• Driver adaptability: The response time of the cruise control can be adjusted to match the driver’s preferences within the constraints of the vehicle’s performance.

• Favorable price-to-performance ratio: The use of integrated actuator drivers and a high-functionality MCU reduces component counts, increases reliability, and decreases the cruise control module’s footprint.

The MCU for cruise control applications requires high functionality. It would include the following: a precise internal timebase for the speed measurement calculations, A/D inputs, PWM outputs, timer input capture, timer output compares, a serial data port (MUX), an internal watchdog, EEPROM, and low-power CMOS technology.

Insofar as cruise control software is concerned, the cruise error calculation algorithm can be designed around traditional math models such as PI control or around fuzzy logic. Fuzzy logic allows somewhat easier implementation of the speed error calculation because its design syntax uses simple linguistics, for example: if speed difference is negative and small, then increase throttle slightly. The output is then adjusted to slightly increase the throttle. The throttle position update rate is determined by another fuzzy program that looks at the driver’s cruise performance request (slow, medium, or fast reaction), the application type (small, medium, or large engine size), and other preset cruise system parameters.
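The quoted rule can be made concrete with triangular membership functions and a weighted rule output. Everything below, memberships and gains alike, is an invented toy model rather than the handbook's algorithm.

/* Toy fuzzy-style cruise rule evaluation, illustrating "if speed error is
   negative and small, then increase throttle slightly." All memberships
   and gains are invented for illustration. */
#include <stdio.h>

/* Triangular membership centered at c with half-width w. */
static double tri(double x, double c, double w)
{
    double d = (x < c ? c - x : x - c);
    return d >= w ? 0.0 : 1.0 - d / w;
}

/* err = actual - set speed (mi/h); returns throttle change in percent. */
double throttle_delta(double err)
{
    /* rule strengths */
    double neg_small = tri(err, -2.0, 2.0);   /* slightly too slow */
    double neg_large = tri(err, -6.0, 4.0);   /* much too slow     */
    double pos_small = tri(err,  2.0, 2.0);   /* slightly too fast */
    double pos_large = tri(err,  6.0, 4.0);   /* much too fast     */
    /* weighted sum of rule outputs (centroid-style defuzzification) */
    return neg_small * 1.0 + neg_large * 4.0
         - pos_small * 1.0 - pos_large * 4.0;
}

int main(void)
{
    for (double e = -6.0; e <= 6.0; e += 2.0)
        printf("err %+4.1f -> delta %+5.2f %%\n", e, throttle_delta(e));
    return 0;
}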

ADAPTIVE CRUISE CONTROL*

Adaptive cruise control (ACC) introduces a function that relieves the driver of a significant part of the task of driving in a comfortable manner. ACC differs from other vehicle control functions especially in that the function is performed by several ECUs. While conventional control systems consist of a sensor and actuator environment around a central ECU, ACC adds functions to existing systems. A truly new component is the sensor for measuring the distance, relative speed, and lateral position of potential target vehicles, using laser optics or millimeter waves. This component often contains the logic for controlling vehicle movement. The latter is effected by commands to the ECUs for engine and brake control electronics. Existing components are also used or adapted for operator use and display. The example in Fig. 24.2.9 shows all basic components of an ACC system as a broad overview.

FIGURE 24.2.9 Basic components for typical adaptive cruise control.

*Winner, H., “Adaptive Cruise Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 30.1–30.30.


FIGURE 24.2.10 Levels of ACC signal processing and control.

The significant new function of ACC, compared to cruise control, consists of following a preceding vehicle that is being driven slower than the desired speed set by the driver of the ACC vehicle. If the preceding vehicle is being driven at a constant speed, the ACC vehicle follows at the same speed at an approximately constant distance. In general, this distance depends on the speed. If the sensor’s field of view in sharp curves is not sufficient to maintain detection of the target vehicle while following it at an adequate distance, then resumption of set-speed control can be delayed. An additional reason for reducing the acceleration, and possibly even the current speed, can be to achieve a comfortable reduction in lateral acceleration while in the curve.

The ACC system, with its ACC sensor for monitoring the environment and its access to vehicle dynamics variables and driver information systems, provides a good basis for simple collision warning functions. Currently known ACC designs target use on freeways and other high-speed roads. Extension to urban traffic and to automatic traffic congestion following is desirable, but has not yet been satisfactorily achieved with the technology currently available. An especially severe limitation of the ACC function is the discrimination of stationary targets. The result is that ACC considers only objects for which a minimum speed has been measured (typical values are 20 km/h or 20 percent of the speed of the ACC vehicle). This limitation is necessary not only because of the sensor’s limited capabilities, but also because of the control definition.

Signal Processing and Control

Even though it can be assumed that there are as many controller variations as there are ACC development teams, it is still possible to define a common level structure (Fig. 24.2.10) that is valid for most variations.

ACC Sensor

The most elementary task of the ACC sensor is to measure the distance to preceding vehicles in a range that extends from about 2 to 150 m. A maximum allowable error of about 1 m or 5 percent is generally adequate. The relative speed toward the target vehicle is of special importance for the ACC control cycle, since this variable is much more strongly weighted in the controller. It should be possible to detect all relevant speeds of possible target objects. Since several targets can be present in the sensor range, multiple-target capability is very important. This means especially the capability of distinguishing between relevant objects in the ACC vehicle’s lane and irrelevant objects (e.g., in the next lane). This can be achieved by a high capability of distinction in at least one measurement variable (distance, relative speed, or lateral position).

Especially in Japan, infrared laser light sources with wavelengths around λ = 800 nm are used as ACC sensors. They are often called Lidar (light detection and ranging) or sometimes laser-radar. Their basic element (Fig. 24.2.11) is a laser diode, which is stimulated by a pulse controller to emit short light pulses. Another type of ACC sensor is a millimeter-wave radar sensor operating in the frequency band from 76 to 77 GHz. The use of the Doppler shift for direct determination of the relative speed is a prominent feature of millimeter-wave technology.


FIGURE 24.2.11 Lidar block diagram.

It allows a high separation capability of objects with this variable and delivers the primary control variable for ACC at high quality.

ACC Controller

The basic structure of an ACC controller is shown in Fig. 24.2.12. The first step is to select the relevant preceding vehicle, which is done by comparing the object data with the geometry of the predicted course. Once the target vehicle is selected (several vehicles may be in the range of the predicted course), the required acceleration is calculated based on the distance and the relative speed.
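That last step can be sketched with an assumed constant-time-gap spacing policy: the required acceleration is a weighted sum of the gap error and the relative speed, clipped to comfort limits. The gains, time gap, and limits below are assumptions, not the handbook's values.

/* Sketch of an ACC following law: required acceleration from measured
   distance and relative speed, using a constant-time-gap policy.
   All constants are illustrative assumptions. */
#include <stdio.h>

double acc_required_accel(double dist_m, double rel_speed_mps, double own_speed_mps)
{
    const double time_gap_s = 1.5;                 /* desired headway       */
    const double d_min = 2.0;                      /* standstill margin, m  */
    double d_des = d_min + time_gap_s * own_speed_mps;
    double a = 0.25 * (dist_m - d_des)             /* close the gap error   */
             + 0.50 * rel_speed_mps;               /* match target speed    */
    if (a >  1.5) a =  1.5;                        /* comfort limits, m/s^2 */
    if (a < -3.0) a = -3.0;
    return a;
}

int main(void)
{
    /* 40 m behind a target closing at -3 m/s while driving 30 m/s */
    printf("a = %.2f m/s^2\n", acc_required_accel(40.0, -3.0, 30.0));
    return 0;
}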

BRAKING CONTROL*

The braking force generated at each wheel of a vehicle during a braking maneuver is a function of the normal force on the wheel and the coefficient of friction between the tire and the road. The coefficient of friction is not a constant; it depends on several factors, the most prominent being the type of road surface and the relative longitudinal slip between the tire and the road.

FIGURE 24.2.12 Basic structure of the ACC controller.

*Cage, J. L., “Braking Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 15.1–15.17.


FIGURE 24.2.13 Disk brake schematic.

FIGURE 24.2.14 Drum brake schematic.

Another characteristic of automotive tires important in braking is lateral force versus slip. Lateral force is the force keeping a tire from sliding in a direction normal to the direction of the vehicle. The lateral coefficient of friction drops off quickly once a wheel begins to slip longitudinally, as can happen during braking. Excessive wheel slip at the rear wheels of a vehicle, and the resulting loss of lateral friction force, will contribute to instability as the rear of the vehicle tends to slide sideways with relatively small lateral forces on the vehicle. Excessive wheel slip, and the resulting loss of lateral friction force, at the front wheels will contribute to loss of steerability.

Brake System Components

Figure 24.2.13 shows a schematic of a disk brake. In this type of brake, force is applied equally to both sides of a rotor, and braking action is achieved through the frictional action of inboard and outboard brake pads against the rotor. Figure 24.2.14 depicts a schematic diagram of a drum brake. In drum brakes, force is applied to a pair of brake shoes in a variety of configurations. In addition to the brakes, other brake system components include a booster and master cylinder, and a proportioning valve.

Antilock Systems

Although antilock concepts have been known for decades, widespread use of antilock systems (also called antiskid and ABS) began in the 1980s, when systems built around digital microprocessors/microcontrollers replaced the earlier analog units. A conventional antilock system consists of a hydraulic modulator and hydraulic power source (which may or may not be integrated with the system master cylinder and booster), wheel-speed sensors, and an ECU. The fundamental function of an antilock system is to prevent wheel lock by sensing impending lock and acting through the hydraulic modulator to reduce the brake pressure in the wheel sufficiently to bring the wheel speed back to the slip range necessary for near-optimum braking performance. Antilock components include wheel-speed sensors, hydraulic modulators, an electric motor/pump, and an ECU (Fig. 24.2.15).
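As an illustration of the slip-control idea described above, the sketch below derives longitudinal slip from wheel-speed data and issues commands to a hypothetical hydraulic modulator. The slip thresholds are invented assumptions; near-optimum braking typically occurs in a band of moderate slip, and the real control logic is far more elaborate.

```python
# Simplified antilock decision logic (illustrative thresholds only).

def wheel_slip(vehicle_speed_mps, wheel_speed_mps):
    """Longitudinal slip, 0.0 (free rolling) to 1.0 (locked wheel)."""
    if vehicle_speed_mps <= 0.1:          # avoid division by zero at rest
        return 0.0
    return (vehicle_speed_mps - wheel_speed_mps) / vehicle_speed_mps

def abs_command(vehicle_speed_mps, wheel_speed_mps,
                low_slip=0.10, high_slip=0.30):
    """Return a pressure command for the hydraulic modulator."""
    s = wheel_slip(vehicle_speed_mps, wheel_speed_mps)
    if s > high_slip:
        return "reduce"    # impending lock: dump brake pressure
    if s < low_slip:
        return "increase"  # wheel rolling freely: reapply pressure
    return "hold"          # inside the near-optimum slip band

# Example: vehicle at 20 m/s, one wheel slowed to 12 m/s (40% slip).
print(abs_command(20.0, 12.0))   # -> "reduce"
```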


FIGURE 24.2.15 Electronic control unit block diagram.

Future Vehicle Braking Systems

A number of trends are developing relative to future vehicle braking systems. Some of the prominent ones are as follows:

• The expanding use of multiplexing
• Vehicle dynamics control during nonbraking as well as braking maneuvers
• Collision-avoidance systems
• Regenerative braking in electric vehicles
• Brake-assist systems
• Electrohydraulic brake by wire
• Electric actuators

TRACTION CONTROL*

Traction control systems (ASRs), designed to prevent the drive wheels from spinning in response to application of excess throttle, have been on the market since 1987. Vehicles with powerful engines are particularly susceptible to drive-wheel slip under acceleration from standstill and/or on low-traction road surfaces. The results include attenuated steering response on front-wheel-drive (FWD) vehicles, diminished vehicle stability on rear-wheel-drive (RWD) cars, and both on four-wheel-drive (4WD) cars, depending on their concept. One technique for optimizing traction is to apply braking force to the spinning wheel. A second option is the application of fixed, variable, or controlled differential-slip limitation mechanisms. These provide fixed coupling to ensure equal slippage rates at the drive wheels, thereby allowing them to develop maximum accelerative force.

System Arrangements

The reaction time demanded between an intervention and its effect at the driven wheels depends on the drive concept. Differing philosophies on the optimization of driving stability, traction, and comfort (and also the cost of the setting element) have led to different ASR system arrangements. Table 24.2.2 shows different types of engine interventions and their setting elements. Figure 24.2.16 shows an ASR system overview.

*Sauter, T., “Traction Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 16.1–16.19.


TABLE 24.2.2 Different Types of Engine Intervention

Type of intervention          Setting element
Airflow control               Electronic throttle control (ETC)
Fuel-injection suppression    Ignition and fuel-injection unit (MOTRONIC)
Spark retard                  Ignition and fuel-injection unit (MOTRONIC)

Control Algorithm

The wheel velocities are determined from the speed sensors by the ABS/ASR control unit. If the drive wheels spin because the engine torque is too high, the ASR determines a transferable torque, corresponding to the friction coefficient, which is sent over a data bus to the engine control unit. Meanwhile the spinning wheels are decelerated by the ASR hydraulic unit. By the reduction of engine torque, which can be effected either by closing the throttle valve or by injection suppression and spark retard, wheel slip is limited to such a degree that lateral forces can be transferred again and the vehicle is stabilized. On vehicles with automatic transmissions, additional safety can be obtained through the transmission control, for example, by early shifting into a higher gear or by preventing downshifting when cornering. Figure 24.2.17 demonstrates the interaction of the different ECUs and setting elements of a traction control system.
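A minimal sketch of the torque-reduction step described above follows. The target slip and the proportional reduction gain are invented assumptions; a production ASR coordinates this request with brake intervention over the data bus.

```python
# Simplified ASR engine-torque request.  Target slip and reduction gain
# are invented assumptions; the real system also brakes the spinning
# wheel through the ASR hydraulic unit.

def drive_slip(driven_mps, vehicle_mps):
    """Drive slip of the driven wheels relative to vehicle speed."""
    return (driven_mps - vehicle_mps) / max(vehicle_mps, 0.5)

def asr_torque_request(driven_mps, vehicle_mps, driver_torque_nm,
                       target_slip=0.08, gain_nm_per_slip=1000.0):
    """Torque (N*m) sent over the data bus to the engine control unit."""
    excess = drive_slip(driven_mps, vehicle_mps) - target_slip
    if excess <= 0.0:
        return driver_torque_nm        # traction OK: pass demand through
    return max(0.0, driver_torque_nm - gain_nm_per_slip * excess)

# Driven wheels at 12 m/s while the vehicle moves at 10 m/s (20% slip):
print(asr_torque_request(12.0, 10.0, 250.0))   # -> 130.0
```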

STABILITY CONTROL*

Driving a car at the physical limit of adhesion between the tires and the road is an extremely difficult task. Most drivers with average skills cannot handle those situations and will lose control of the vehicle. Several solutions for the control of vehicle maneuverability in physical-limit situations have been published.

FIGURE 24.2.16 ASR system overview.

*Van Zanten, A., Erhardt, R., Landesfeind, K., and Pfaff, G., “Stability Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 17.1–17.33.


FIGURE 24.2.17 ASR control loop.

Physical Concept

Since the nominal trajectory desired by the driver is unknown, the driver's inputs are used instead to derive nominal state variables that describe the desired vehicle motion. These inputs are the steering wheel angle, the engine drive torque as derived from the accelerator pedal position, and the brake pressure. To determine which state variables describe the desired motion best, a special situation is considered (Fig. 24.2.18). In this illustration a vehicle is shown negotiating a turn after a step input in the steering angle. The lower curve shows the path the vehicle will follow if its lateral acceleration (in units of g) is smaller than the coefficient of friction of the road for the given tires. In this case the vehicle follows the nominal motion. If the road is slippery, with a coefficient of friction smaller than the nominal lateral acceleration, the vehicle will not follow the nominal value, and the radius of the turn will become larger than that of the nominal motion.

One of the basic state variables that describe the lateral motion of the vehicle is its yaw rate. It therefore seems reasonable to design a control system that makes the yaw rate of the vehicle equal to the yaw rate of the nominal motion (yaw rate control). If this control is used on the slippery road, the lateral acceleration and the yaw rate will not correspond to each other as they do during the nominal motion, and the slip angle of the vehicle increases rapidly, as shown by the upper curve in Fig. 24.2.18. Therefore both the yaw rate and the slip angle of the vehicle must be limited to values that correspond to the coefficient of friction of the road. For this reason the Bosch vehicle dynamics control (VDC) system takes both the yaw rate and the vehicle slip angle as the nominal state variables and thus as the controlled variables. The result is shown by the curve in the middle of Fig. 24.2.18. This approach requires the installation of a yaw rate sensor and a lateral acceleration sensor.

FIGURE 24.2.18 VDC versus yaw rate control.
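The text does not state how the nominal yaw rate is computed from the driver's inputs. A common choice, assumed here for illustration, is the steady-state yaw rate of the linear single-track (bicycle) model, capped so that the implied lateral acceleration does not exceed what the road friction supports; the wheelbase and characteristic speed below are invented values.

```python
# Nominal yaw rate from the driver's steering input, using the
# steady-state linear single-track (bicycle) model.  Model choice,
# wheelbase, and characteristic speed are illustrative assumptions.
import math

def nominal_yaw_rate(speed_mps, road_wheel_angle_rad,
                     wheelbase_m=2.7, char_speed_mps=22.0):
    """Yaw rate (rad/s) implied by the driver's inputs on a dry road."""
    v = speed_mps
    return v * road_wheel_angle_rad / (
        wheelbase_m * (1.0 + (v / char_speed_mps) ** 2))

def limit_by_friction(yaw_rate, speed_mps, mu=0.4, g=9.81):
    """Cap the nominal yaw rate so lateral acceleration <= mu * g."""
    if speed_mps < 0.5:
        return yaw_rate
    cap = mu * g / speed_mps            # since a_y = v * yaw_rate
    return max(-cap, min(cap, yaw_rate))

# Step input of 3 degrees at the road wheels, 25 m/s, slippery road:
raw = nominal_yaw_rate(25.0, math.radians(3.0))
print(raw, limit_by_friction(raw, 25.0))
```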


FIGURE 24.2.19 Brake slip controller.

Control Concept

From the driver's inputs, measured by a steering-wheel angle sensor, a throttle-position sensor, and a brake-pressure sensor, the nominal behavior as described by the nominal values of the controlled variables must be determined. From the signal values of the wheel-speed sensors, the yaw-rate sensor, and the lateral-acceleration sensor, the actual behavior of the vehicle as described by the actual values of its controlled variables is determined. The difference between the nominal and the actual behavior is then used to form the actuating signals of the VDC.

The Slip Controller. The slip controller consists of two parts, the brake slip controller (Fig. 24.2.19) and the drive slip controller (Fig. 24.2.20). Depending on which slip controller is used, different nominal variables are passed by the VDC to the slip controller. The drive slip controller is used only for the slip control of the driven wheels during driving; otherwise the brake slip controller is used.

FIGURE 24.2.20 Drive slip controller.


FIGURE 24.2.21 Block diagram of a typical ECU for VDC.

Electronic Control Units

There are two different types of ECUs. One is attached to the hydraulic unit and the other is separated from it. Figure 24.2.21 shows a typical separated ECU.

SUSPENSION CONTROL*

The function of a suspension system in an automobile is to improve ride comfort and stability. An important consideration in suspension design is how to obtain both improved ride comfort and stability, since the two are normally in conflict. Advances in electronic control technology, applied to the automobile, can resolve this conflict.

Shock Absorber Control System

There are three main parts to a damping control system: a damping control device (actuator), sensors, and software (the control strategy). Optimum damping forces should be set for various running conditions in order to improve ride comfort and handling stability. One damping control system configuration is shown in Fig. 24.2.22. It uses five sensors to detect running conditions: a vehicle speed sensor, a steering angle sensor, an acceleration and deceleration sensor, a brake sensor, and an ultrasonic sensor to detect road conditions. Control signals are sent to adjust the damping force of the variable shock absorbers to optimum values.

Hydropneumatic Suspension Control System

A hermetically sealed quantity of gas is used in the hydropneumatic suspension control system. The gas and hydraulic oil are separated by a rubber diaphragm (Fig. 24.2.23). The mechanical springs are replaced by gas. The shock absorber damping mechanism is achieved by an orifice fitted with valves.

*Yohsuke, A., “Suspension Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 18.1–18.19.


FIGURE 24.2.22 Principal components of a damping control system.

Electronic Leveling Control System

An electronic leveling control system (Fig. 24.2.24) permits a low spring rate to achieve good ride comfort independent of load conditions, an increase in vehicle body height on rough road surfaces, and a changing spring rate and damping force in accordance with driving conditions and road surfaces.

Active Suspension

An active suspension (Fig. 24.2.25) is one in which energy is supplied constantly to the suspension and the force generated by that energy is continuously controlled. The suspension incorporates various types of sensors and a unit for processing their signals; the forces generated are a function of the sensor outputs.
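The handbook names no specific control law for generating these forces. One widely used policy for damper-based systems, assumed here purely for illustration, is skyhook damping (the Karnopp policy); the coefficients below are invented values.

```python
# Skyhook damping policy (Karnopp) for a variable shock absorber.
# The control law is an assumption for illustration; the handbook text
# names no specific law for the suspension systems it describes.

def damper_setting(body_vel_mps, rel_vel_mps,
                   c_sky=3000.0, c_min=300.0, c_max=4000.0):
    """Damping coefficient (N*s/m) commanded at one wheel.

    body_vel_mps -- absolute vertical body velocity (up positive)
    rel_vel_mps  -- body-to-wheel relative velocity (stroke rate)
    """
    if body_vel_mps * rel_vel_mps > 0.0:
        # The damper force (-c * rel_vel) can then approximate the ideal
        # skyhook force (-c_sky * body_vel): choose the equivalent c.
        c = c_sky * body_vel_mps / rel_vel_mps
        return max(c_min, min(c_max, c))
    return c_min   # skyhook force unachievable: switch the damper soft

print(damper_setting(0.2, 0.15))   # firm: body and stroke move together
print(damper_setting(0.2, -0.5))   # soft: firm damping would push the body
```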

FIGURE 24.2.23 (a) Hydropneumatic suspension system; (b) Controllable hydropneumatic suspension system.


FIGURE 24.2.24 Principal components of an air-suspension system.

FIGURE 24.2.25 Hydraulic and control system for an active suspension.


TABLE 24.2.3 Functions Required for Electronically Controlled Power Steering

Reduction of the driver's burden when turning the steering wheel, and improvement in steering feel:
• Reduction in steering effort
• Smoothness of steering operation
• Feedback of appropriate steering reaction forces
• Reduction of kickback
• Improvement in convergence
• Creation of other new functions

Power saving

Failsafe:
• Maintaining of the manual steering function in the event of any malfunction

STEERING CONTROL*

Electronically Controlled Power Steering

Electronically controlled power steering improves steering feel and power-saving effectiveness and increases steering performance. It does so with control mechanisms that reduce the steering effort. An electronic control system, for example, may be added to the hydraulic booster, or the whole system may be composed of electronic and electric components. The intent of electronic controls, initially, was to reduce the steering effort when driving at low speeds and to supply feedback for the appropriate steering reaction force when driving at high speeds. To achieve these goals, devices such as vehicle speed sensors were used to detect vehicle speed in order to make smooth and continuous changes in the steering assist rate under conditions ranging from steering maneuvers at zero speed to those at high speeds. However, as vehicles became equipped with electrohydraulic systems and fully electronic and electric systems, the emphasis for these systems grew to include reduced power requirements and higher performance. The main functions required for electronically controlled power steering are listed in Table 24.2.3, and the various types of electronically controlled power systems are given in Table 24.2.4.

Electric power steering is a fully electric system, which reduces the amount of steering effort by directly applying the output from an electric motor to the steering system. This system consists of vehicle speed sensors, a steering sensor (torque, angular velocity), an ECU, a drive unit, and a motor (Fig. 24.2.26). Hybrid systems use a flow-control method in which the hydraulic power-steering pump is driven by an electric motor; the steering effort is controlled by controlling the rotating speed of the pump (discharge flow).
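As a rough illustration of the speed-sensitive assist behavior described above, the sketch below maps measured steering torque and vehicle speed to a motor assist torque. The gain schedule, deadband, and limits are invented for illustration and are not taken from any particular production system.

```python
# Speed-sensitive electric power steering assist (illustrative sketch;
# the gain schedule below is an assumption, not data from the handbook).

def assist_torque(driver_torque_nm, vehicle_speed_mps,
                  max_gain=2.5, min_gain=0.3, fade_speed_mps=30.0,
                  deadband_nm=0.5, max_assist_nm=40.0):
    """Motor assist torque (N*m) from steering torque and vehicle speed."""
    # Assist gain falls smoothly from max_gain at standstill toward
    # min_gain at high speed, giving a firmer on-center feel.
    frac = min(vehicle_speed_mps / fade_speed_mps, 1.0)
    gain = max_gain - (max_gain - min_gain) * frac
    # Small torques get no assist (deadband preserves road feedback).
    t = driver_torque_nm
    if abs(t) < deadband_nm:
        return 0.0
    sign = 1.0 if t > 0 else -1.0
    assist = gain * (abs(t) - deadband_nm) * sign
    return max(-max_assist_nm, min(max_assist_nm, assist))

print(assist_torque(4.0, 0.0))    # parking: strong assist
print(assist_torque(4.0, 30.0))   # highway: light assist
```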

Four-Wheel Steering Systems (4WS) For vehicles with extremely long wheel bases and vehicles that need to be operated in narrow places, the concept of a four-wheel steering system is attractive. In such systems, the rear wheels are turned in the opposite direction to the steering direction of the front wheels in order to make the turning radius as small as possible and to improve the handling ability. Four-wheel steering systems that are currently being implemented in vehicles are classified according to their functions and mechanisms. The aims and characteristics of each system are briefly explained in Tables 24.2.5 and 24.2.6.

*Sato, Makoto, “Steering Control,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 19.1–19.33.


TABLE 24.2.4 Classification of Electronically Controlled Power Steering Systems

Electronically controlled hydraulic system (solenoid actuator):
• Flow control: flow supplied to the power cylinder
• Cylinder bypass control: effective actuation pressure given to the cylinder
• Valve-characteristics control: pressure generated at the control valve
• Hydraulic reaction force control: pressure acting on the hydraulic reaction force mechanism

Hybrid system (motor actuator):
• Flow control: flow supplied to the power cylinder

Full electric system (motor actuator):
• Current control: motor torque
• Voltage control: motor power

Each variant is further characterized by the sensors it uses (vehicle speed, steering torque, angular velocity) and by its major effects (steering force responsive to vehicle speed, power saving).

FIGURE 24.2.26 Structure of an electric power steering system (rack assist-type ball screw drive).


TABLE 24.2.5 Four-Wheel Steering Systems: Functions and Aims

Function: only a small range of rear steer angle is controlled electronically.
Aim: improvement of steering response and vehicle stability at medium to high speed.

Function: not only a small range of rear steering angle at medium to high speed but also a large range at low speed is controlled electronically.
Aim: in addition to the above, making the minimum turning radius small.

OBJECT DETECTION, COLLISION WARNING, COLLISION AVOIDANCE*

Although object detection and collision-avoidance systems may still be regarded as being in their infancy, their perceived value in enhancing safety and reducing accidents is high. Two categories of systems are currently available or under development: passive collision-warning systems and active collision-avoidance systems. A passive system will detect a hazard and alert the driver to the risk, whereas an active system will detect the hazard and then take preventive action to avoid a collision. Both types require object detection; the only difference between them is in how a collision-diverting event is actuated following object detection, by the driver or automatically.

Development engineers are proceeding cautiously with active-collision work. Any system that takes control of the brakes (and, in the future, steering) from the driver is a potential source of litigation, particularly in North America. Standards and federal regulations that cover such systems will emerge during the next few years.

Active and Passive Systems

A passive collision-warning system (Fig. 24.2.27) includes a visual and/or audible warning signaled to the driver, but there is no intervention to avoid a collision. An active collision-warning system (Fig. 24.2.28) interacts with the powertrain, braking, and even the steering systems with the objective of sensing objects that present a collision risk to the host vehicle, then taking preventive measures to avoid an accident. The most essential element in both passive and active systems is the object detection system. A key difference is that the active system requires more accurate object recognition, so as to prevent collision-avoidance maneuvers against objects such as road signs.

Vehicular Systems

There are three types of warning systems: front, rear, and side. Frontal systems, both active and passive, operate on the same principles of object detection. There are different techniques used for obstacle detection.

TABLE 24.2.6 Four-Wheel Steering Systems: Mechanisms and Features

Full mechanical system: simple mechanism
Electronic-hydraulic system: high degree of control freedom (compact actuator)
Electronic-mechanical-hydraulic system: high degree of control freedom (mechanism is not simple)
Full electric system (electronic-electric system): high degree of control freedom with a simple mechanism

*Bannatyne, R., “Object Detection, Collision Warning, Collision Avoidance,” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 29.3–29.22.


FIGURE 24.2.27 Passive collision-warning system.


FIGURE 24.2.28 Active collision-warning system.

The main approaches are a scanning laser radar sensor, a frequency-modulated continuous-wave (FMCW) radar, or a video camera used in conjunction with an algorithm that detects hazardous objects. This detection system is usually mounted at the front of the host vehicle to detect objects in the vehicle's forward path. Other techniques may involve a combination of different sensors, including backup sensors. To be a hazardous obstacle, the target must lie in the corridor, or intended path, of the host vehicle (Fig. 24.2.29).

Rear warning systems can use shorter-range, often nonscanning sensors to provide close-range sensing for parking-assist capability, or scanning radar for more advanced sensing capability. Some systems use a combination of both types of sensors. Side-warning systems use radar sensors to detect objects in the traditional blind spots that are often responsible for causing accidents. The sensors are mounted in the rear quarter area of the vehicle and detect objects in adjacent lanes.

An active collision-avoidance system interacts with other vehicle systems, as well as providing a warning (usually via the audio system and perhaps a head-up display). The most critical system with which it interacts is the braking system. A simplified algorithm that senses hazards and adjusts brake pressure accordingly is shown in Fig. 24.2.30.

Future Systems

A fully equipped, high-end system, such as that shown in Fig. 24.2.31, would support an advanced application such as the automated highway. The collision-avoidance electronic control unit is at the center of the diagram. The frontal vehicular detection system features two sensors: a 77-GHz FMCW radar and a camera system. Data from both of these sources are fused to map out a reliable picture of the potential hazards.

FIGURE 24.2.29 Trajectory corridor.


FIGURE 24.2.30 Object detection system with braking system interaction.

FIGURE 24.2.31 Complete “high-end” collision-avoidance system.


NAVIGATION AIDS AND DRIVER INFORMATION SYSTEMS*

Navigation aids and driver information systems provide various combinations of communicated, stored, and derived information to aid drivers in effectively planning and executing trips. Navigation aids and driver information systems are a major subset of advanced traveler information systems, which, in turn, are a subset of intelligent transportation systems (ITS) as discussed in Chap. 24.3.

Automobile Navigation Technologies

Positioning technologies are fundamental requirements of both vehicle navigation systems and vehicle tracking systems. Almost all current vehicular navigation systems include a global positioning system (GPS) receiver, and most also include dead reckoning with map matching. Map matching as well as route guidance must be supported by digital road maps. Another important supporting technology is mobile data communications for traffic and other traveler information.

Radiopositioning. Radiopositioning is based on processing special signals from one or more radio transmitters at known locations to determine the position of the receiving equipment. Although a number of radiopositioning technologies are potentially applicable, satellite-based GPS is by far the most popular. GPS, which is operated by the U.S. Department of Defense, includes 24 satellites spaced in orbits such that a receiver can determine its position by simultaneously analyzing the travel time of signals from at least four satellites.

Proximity beacons provide another form of radiopositioning, which is used in some vehicle navigation systems, particularly those that also use proximity beacons for communications purposes. Proximity beacons are installed at key intersections and other strategic roadside locations and communicate their location and/or other information to receivers in passing vehicles via very short-range radio, microwave, or infrared signals. The reception of a proximity beacon signal means that the receiving vehicle is within 50 m or so of the beacon, and it provides an occasional basis for confirming vehicle position.

Dead Reckoning. Dead reckoning, the process of calculating location by integrating measured increments of distance and direction of travel relative to a known location, is used in virtually all vehicle navigation systems. Dead reckoning gives a vehicle's coordinates relative to earlier coordinates. Distance measurements are usually made with an electronic version of the odometer. Electronic odometers provide discrete signals from a rotating shaft or wheel, and a conversion factor is applied to obtain the incremental distance associated with each signal. Vehicle heading may be measured directly with a magnetic compass, or indirectly by keeping track of heading relative to an initial heading by accumulating incremental changes in heading.

Digital Road Maps. The two basic approaches to digitizing maps are matrix encoding and vector encoding. A matrix-encoded map is essentially a digitized image in which each image element, or pixel, as determined by an X-Y grid with arbitrary spacing, is defined by digital data giving characteristics such as shade or color. The vector-encoding approach applies mathematical modeling concepts to represent geometrical features such as roadways and boundaries in abstract form with a minimum of data. By considering each road or street as a series of straight lines and each intersection as a node, a map may be viewed as a set of interrelated nodes, lines, and enclosed areas.

Map Matching. Map matching is a type of artificial intelligence process used in virtually all vehicle navigation and route guidance systems that recognize a vehicle's location by matching the pattern of its apparent path (as approximated by dead reckoning and/or radiopositioning) with the road patterns of digital maps stored in computer memory. Most map-matching software may be classified as either semideterministic or probabilistic.

*French, R. L., and Krakiwsky, E. J., “Navigation Aids and Driver Information Systems,” in Automotive Electronics Handbook, Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 31.1–31.15.

FIGURE 24.2.32 Typical components and subsystems of vehicle navigation system.
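The dead-reckoning update described earlier is simple enough to sketch concretely. The code below accumulates odometer pulses and a heading reading into east/north coordinates; the calibration factor and the sensor conventions are invented assumptions.

```python
# Dead-reckoning position update (minimal sketch of the process
# described in the text; calibration values are illustrative).
import math

def dead_reckon(x_m, y_m, heading_deg, pulses, metres_per_pulse=0.4):
    """Advance an (east, north) position by one odometer reading.

    pulses           -- odometer pulses since the last update
    metres_per_pulse -- conversion factor for the shaft/wheel sensor
    heading_deg      -- heading from a compass or accumulated heading
                        changes, measured clockwise from north
    """
    d = pulses * metres_per_pulse
    x_m += d * math.sin(math.radians(heading_deg))   # east component
    y_m += d * math.cos(math.radians(heading_deg))   # north component
    return x_m, y_m

# Drive 25 pulses (10 m) heading 090 degrees (due east) from the origin:
print(dead_reckon(0.0, 0.0, 90.0, 25))   # -> (10.0, ~0.0)
```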

Navigation Systems

Figure 24.2.32 is a block diagram showing the major elements of a typical automobile navigation system. Distance and heading (or heading change) sensors are almost invariably included for dead-reckoning calculations, which, in combination with map matching, form the basic platform for keeping track of vehicle location. However, dead reckoning with map matching has the drawback of occasionally failing due to dead-reckoning anomalies, extensive travel off mapped roads, ferry crossings, and so forth. The "location sensor" indicated by dashed lines in Fig. 24.2.32 is an optional means of providing absolute location to avoid occasional manual reinitialization when dead reckoning with map matching fails. Although proximity beacons serve to update vehicle location in a few systems, almost all state-of-the-art systems use GPS receivers instead.


CHAPTER 24.3

INTELLIGENT TRANSPORTATION SYSTEMS*
Ronald K. Jurgen

INTRODUCTION

Intelligent transportation systems (ITS) is an umbrella term that covers the application of a wide variety of communication, positioning, sensing, control, and other information-related technologies to improve the efficiency, safety, and environmental aspects of surface transportation. Major categories of ITS include advanced traffic management systems, advanced public transportation systems, advanced traveler information systems, advanced vehicle safety systems, and commercial vehicle operations.

The U.S. National ITS Architecture was developed under a three-year program funded by the U.S. Department of Transportation (DOT) that included a parallel consensus-building effort with industry and local governments. The architecture provides a consistent basis for establishing standards and assuring ITS interoperability throughout the country. To this end, ITS installations by state and local governments using federal funding are required by legislation to be consistent with the National Architecture.

Table 24.3.1 gives a comprehensive list of ITS user services that are candidates for an emerging ISO standard based in part on program information from many countries around the world. Although the level of emphasis for individual user services may vary greatly from country to country, generally similar ITS user services are defined in the United States, Europe, and Asia. Table 24.3.2 gives a brief statement of the function of each of the 29 user services originally defined by the National ITS Program Plan developed jointly by DOT and ITS America.

SYSTEM ARCHITECTURE

The system architecture identifies necessary subsystems, defines the functions to be performed by each subsystem, and identifies the data that must flow among them, thus providing a consistent basis for system design. In particular, it defines the input, output, and functional requirements of several hundred individual processes required for implementing ITS. More important, the architecture defines the intricate interconnections required among the myriad subsystems in order to synergistically integrate ITS services ranging from advanced traffic management, which has already been implemented extensively, to future automated highway systems.

*French, R. L., and Chen, K., “Intelligent Transportation Systems (ITS),” in Automotive Electronics Handbook, 2nd ed., Ronald K. Jurgen (ed.), McGraw-Hill, 1999, pp. 32.1–32.11.


TABLE 24.3.1 ITS User Services

Traveler information (ATIS): 1. Pretrip information; 2. On-trip driver information; 3. On-trip public transport information; 4. Personal information services; 5. Route guidance and navigation

Traffic management (ATMS): 6. Transportation planning support; 7. Traffic control; 8. Incident management; 9. Demand management; 10. Policing/enforcing traffic regulations; 11. Infrastructure maintenance management

Vehicle (AVCS): 12. Vision enhancement; 13. Automated vehicle operation; 14. Longitudinal collision avoidance; 15. Lateral collision avoidance; 16. Safety readiness; 17. Precrash restraint deployment

Commercial vehicle (CVO): 18. Commercial vehicle preclearance; 19. Commercial vehicle administrative processes; 20. Automated roadside safety inspection; 21. Commercial vehicle onboard safety monitoring; 22. Commercial vehicle fleet management

Public transport (APTS): 23. Public transport management; 24. Demand-responsive transport management; 25. Shared transport management

Emergency (EM): 26. Emergency notification and personal security; 27. Emergency vehicle management; 28. Hazardous materials and incident notification

Electronic payment: 29. Electronic financial transactions

Safety: 30. Public travel security; 31. Safety enhancement for vulnerable road users; 32. Intelligent junctions

It is important to understand, however, that the architecture is neither a design nor a set of physical specifications. Instead, the architecture provides a basis for developing standards and protocols required for exchanging data among subsystems.

Logical Architecture

The logical architecture presents a functional interpretation of the ITS user services, divorced from likely implementations and physical interface requirements. In particular, it defines all the functions (called process specifications) necessary to perform the user services and indicates the data flows required among these functions.

Physical Architecture

The physical architecture identifies the physical subsystems, and the architecture flows between subsystems, that implement the processes and support the data flows of the logical architecture. It assigns processes from the logical architecture to each of the subsystems and defines the data flows between the subsystems based on the data exchange implied by the process specifications. The physical architecture consists of the four top-level categories of subsystems shown in Fig. 24.3.1.


TABLE 24.3.2 Functions of ITS User Services

En route driver information: provides driver advisories and in-vehicle signing for convenience and safety.
Route guidance: provides travelers with simple instructions on how to best reach their destinations.
Traveler information services: provides a business directory of “yellow pages” containing service information.
Traffic control: manages the movement of traffic on streets and highways.
Incident management: helps public and private organizations identify incidents quickly and implement a response to minimize their effects on traffic.
Emissions testing and mitigation: provides information for monitoring air quality and developing air quality improvement strategies.
Demand management and operations: supports policies and regulations designed to mitigate the environmental and social impacts of traffic congestion.
Pretrip travel information: provides information for selecting the best transportation mode, departure time, and route.
Ride matching and reservation: makes ride sharing easier and more convenient.
Public transportation: automates operations, planning, and management functions of public transit systems.
En route transit information: provides information to travelers using public transportation after they begin their trips.
Personalized public transit: provides flexibly routed transit vehicles to offer more convenient customer service.
Public travel security: creates a secure environment for public transportation patrons and operators.
Electronic payment services: allows travelers to pay for transportation services electronically.
Commercial vehicle electronic clearance: facilitates domestic and international border clearance, minimizing stops.
Automated roadside safety inspections: facilitates roadside inspections.
Onboard safety monitoring: senses the safety status of a commercial vehicle, cargo, and driver.
Commercial vehicle administration: provides electronic purchasing of credentials and automated mileage and fuel reporting and auditing.
Hazardous material incident response: provides immediate description of hazardous materials to emergency responders.
Freight mobility: provides communication between drivers, dispatchers, and intermodal transportation providers.
Emergency notification: provides immediate notification of an incident and an immediate request for assistance.
Emergency vehicle management: reduces the time needed for emergency vehicles to respond to an incident.
Longitudinal collision avoidance: helps prevent head-on, rear-end, or backing collisions between vehicles, or between vehicles and objects or pedestrians.
Lateral collision avoidance: helps prevent collisions when vehicles leave their lane of travel.
Intersection collision avoidance: helps prevent collisions at intersections.
Vision enhancement for crash avoidance: improves the driver's ability to see the roadway and objects that are on or along the roadway.
Safety readiness: provides warnings about the conditions of the driver, the vehicle, and the roadway.
Precrash restraint deployment: anticipates an imminent collision and activates passenger safety systems before the collision occurs, or much earlier in the crash event than would otherwise be feasible.
Automated highway systems: provide a fully automated, hands-off operating environment.

FUTURE DIRECTIONS

A new DOT initiative in 1998 is the Intelligent Vehicle Initiative (IVI). The objective of the IVI is to improve significantly the safety and efficiency of motor vehicle operations by reducing the probability of crashes. To accomplish this, the IVI will accelerate the development, availability, and use of driving-assistance and control-intervention systems to reduce deaths, injuries, property damage, and the societal loss that results from motor vehicle crashes. These systems include provisions for warning drivers, recommending control actions, intervening with driver control, and introducing temporary or partial automated control of the vehicle in hazardous situations.

The safety improvement category includes nine candidate IVI services: rear-end collision avoidance, road departure collision avoidance, lane-change and merge collision avoidance, intersection collision avoidance, railroad crossing collision avoidance, vision enhancement, location-specific alert and warning, automatic collision notification, and smart restraints and occupant protection systems. The safety-impacting category includes four services: navigation/routing, real-time traffic and traveler information, driver comfort and convenience, and vehicle stability warning and assistance.

FIGURE 24.3.1 National ITS architecture for the United States.


SECTION 25

INSTRUMENTATION AND TEST SYSTEMS

The increasing importance of instrumentation in all phases of design, development, manufacture, deployment, and maintenance of electronic systems has prompted us to add this entirely new section to this edition of the handbook. It complements Sec. 15, Measurement Circuits, but deals with the instruments themselves and how they are used.

The first two chapters are devoted to instruments for measuring basic parameters: current, voltage, frequency, and time. Chapter 25.3 covers signal sources, and Chap. 25.4 logic analyzers. Arguably the most powerful instrument, the oscilloscope, is treated in some detail in Chap. 25.5. During its 60-year history, it has been constantly improved in performance and sophistication. Chapter 25.6 deals with reconfigurable instruments; one example is a portable ground-support test system for military avionics, built from VXI modules in a dual-rack structure. Finally, the embedding of sensor- and computer-based instrumentation into complex systems can enable the implementation of self-repair concepts.

The editors wish to thank Stephen E. Grossman, a consultant to Agilent Technologies, for his invaluable assistance in organizing this section. C.A.

In This Section:

CHAPTER 25.1 INSTRUMENTS FOR MEASURING FREQUENCY AND TIME    25.3
  INTRODUCTION    25.3
  FREQUENCY COUNTERS    25.3
  UNIVERSAL COUNTERS    25.3
  CW MICROWAVE COUNTERS    25.5
  FREQUENCY AND TIME-INTERVAL ANALYZERS    25.5
  SPECIFICATIONS AND SIGNIFICANCE    25.5

CHAPTER 25.2 INSTRUMENTS FOR MEASURING CURRENT, VOLTAGE, AND RESISTANCE    25.8
  INTRODUCTION    25.8
  AC VOLTAGE MEASUREMENT TECHNIQUES    25.12
  CURRENT MEASUREMENT TECHNIQUES    25.13
  RESISTANCE MEASUREMENT TECHNIQUES    25.14

CHAPTER 25.3 SIGNAL SOURCES    25.17
  INTRODUCTION    25.17
  KINDS OF SIGNAL WAVEFORMS    25.17
  HOW PERIODIC SIGNALS ARE GENERATED    25.17
  TYPES OF SIGNAL GENERATORS    25.25


CHAPTER 25.4 LOGIC AND PROTOCOL ANALYZERS    25.33
  LOGIC ANALYZERS    25.33
  PROTOCOL ANALYZERS    25.41

CHAPTER 25.5 OSCILLOSCOPES    25.53
  INTRODUCTION    25.53
  GENERAL OSCILLOSCOPE CONCEPTS    25.53
  THE ANALOG OSCILLOSCOPE    25.56
  THE DIGITAL OSCILLOSCOPE    25.59
  THE FUTURE OF OSCILLOSCOPES    25.65

CHAPTER 25.6 STANDARDS-BASED MODULAR INSTRUMENTS    25.66
  INTRODUCTION    25.66
  ELEMENTS OF MODULAR INSTRUMENTS    25.67
  THE SYSTEM BACKPLANE    25.68
  FORM FACTORS    25.68
  VMEBUS (VME STANDS FOR VERSAMODULE EUROCARD)    25.69
  VXI (VMEBUS EXTENSIONS FOR INSTRUMENTATION)    25.70
  PERSONAL COMPUTER PLUG-INS (PCPIS)    25.74
  COMPACTPCI    25.75

CHAPTER 25.7 EMBEDDED COMPUTERS IN ELECTRONIC INSTRUMENTS    25.77
  INTRODUCTION    25.77
  BENEFITS OF EMBEDDED COMPUTERS IN INSTRUMENTS    25.79
  INSTRUMENT HARDWARE    25.80
  PHYSICAL FORM OF THE EMBEDDED COMPUTER    25.80
  EMBEDDED COMPUTER SYSTEM SOFTWARE    25.81
  USER INTERFACES    25.81
  EXTERNAL INTERFACES    25.81
  SOFTWARE PROTOCOL STANDARDS    25.83
  USING EMBEDDED COMPUTERS    25.83


CHAPTER 25.1

INSTRUMENTS FOR MEASURING FREQUENCY AND TIME*
Rex Chappell

INTRODUCTION

For measurements of frequency, time interval, phase, event counting, and many other related signal parameters, the ideal instrument to use is an electronic counter or its cousin, the frequency and time-interval analyzer. These instruments offer high precision and analysis for research and development applications, high throughput for manufacturing applications, and low cost and portability for service applications. Electronic counters come in a variety of forms (Fig. 25.1.1).

FREQUENCY COUNTERS

The earliest electronic counters were used to count such things as atomic events. Before counters were invented, frequency measurement was accomplished with a frequency meter, a tuned device with low accuracy. Frequency counters were among the first instruments to digitally measure a signal parameter, yielding a more precise measurement. Today, they are still among the most precise instruments.

Characterizing transmitters and receivers is the most common application for a frequency counter. The transmitter's frequency must be verified and calibrated to comply with government regulations. The frequency counter can measure the output frequency as well as key internal points, such as the local oscillator, to be sure that the radio meets specifications. Other applications for the frequency counter can be found in the computer world, where high-performance clocks are used in data communications, microprocessors, and displays. Lower-performance applications include measuring electromechanical events and switching power-supply frequencies.

*Adapted from Chapter 19, “Electronic Instrument Handbook,” 3rd ed., Clyde Coombs Jr. (ed.), McGraw-Hill, 1999.

FIGURE 25.1.1 Frequency counters have become the most commonly employed instrument for measuring frequencies. The counter shown here is the Agilent Technologies 53131A with 10-digit resolution, 20-mV sensitivity, and frequency coverage up to 225 MHz.

UNIVERSAL COUNTERS

Electronic counters can offer more than just a frequency measurement. When a counter offers a few simple additional functions, such as period (the reciprocal of frequency), the instrument is sometimes known as a multifunction counter. When two-channel functions, such as time interval, are provided, the instrument is usually called a universal counter. This name reflects the instrument's wide variety of applications. Several measurement functions provided by universal counters are shown in Fig. 25.1.2.

Time interval measures the elapsed time between a start signal and a stop signal. The start signal is usually fed into one channel (A), while the stop signal is fed into a second channel (B). The function is often called time interval A to B. The resolution of the measurement is usually 100 ns or better, depending on the time interpolators employed. Applications range from the measurement of propagation delays in logic circuits to the measurement of the speed of golf balls.

Variations of time interval that are particularly useful for digital circuit measurements are pulse width, rise time, and fall time. And, if the trigger points are known for rise and fall time, a calculation of the slew rate (volts per second) can be displayed. In all of these measurements the trigger levels are set automatically by the counter.
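The displayed slew rate is simply the voltage difference between the two trigger levels divided by the measured transition time. A one-line sketch, assuming the usual 10 and 90 percent trigger levels:

```python
# Slew rate from a rise-time measurement between known trigger levels.

def slew_rate_v_per_s(v_low_trigger, v_high_trigger, rise_time_s):
    """Volts per second between the two trigger levels."""
    return (v_high_trigger - v_low_trigger) / rise_time_s

# A 3.3-V logic edge measured from its 10% to its 90% level in 2 ns:
print(slew_rate_v_per_s(0.33, 2.97, 2e-9))   # -> 1.32e9 V/s
```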

FIGURE 25.1.2 Several measurements that a universal counter can make. The A and B refer to the channels of the arriving signals. Frequency counters often have just one input channel, but universal counters need two for performing timing and signal comparison measurements.


Time interval average is a function that yields more resolution and can be used to filter out jitter in signals.

Totalize is the simple counting of events. It is useful for counting electronic or physical events, or where a digitized answer is needed for an automated test application. The counter accumulates and displays event counts while the gate is open. In some instruments, this function is called count.

Frequency ratio A/B compares the ratio between two frequencies. It can be used to test the performance of a frequency doubler or a prescaler (frequency divider).

Phase A relative to B compares the phase delay between two signals with similar frequencies. The answer is usually displayed in degrees. Some instruments display only positive phase, but it is more convenient to allow a display range of ±180°, or 0° to 360°.
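Internally, the phase function is a time-interval measurement scaled by the signal period. A sketch of that computation, including wrapping the result into the ±180° display range:

```python
# Phase of channel A relative to channel B from counter readings.

def phase_deg(delay_a_to_b_s, period_s):
    """Phase in degrees, wrapped into the -180 to +180 display range."""
    phase = 360.0 * (delay_a_to_b_s / period_s)
    return (phase + 180.0) % 360.0 - 180.0

# Two 1-kHz signals (1-ms period) with a measured delay of 0.7 ms:
print(phase_deg(0.7e-3, 1.0e-3))   # 252 degrees, displayed as -108
```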

CW MICROWAVE COUNTERS

Continuous wave (CW) microwave counters are employed for frequency measurements in the microwave bands. Applications include calibrating the local oscillator and the transmitting frequencies in a microwave communication link. These counters can measure 20 GHz and higher. They offer relatively good resolution, down to 1 Hz, in a short measurement time. This makes them popular for manufacturing test applications. Their low cost and high accuracy, relative to a spectrum analyzer, make them popular for service applications. Some microwave counters also include the ability to measure power. This is a typical measurement parameter in service applications.

FREQUENCY AND TIME-INTERVAL ANALYZERS

Modern applications, particularly those driven by digital architectures, often have more complex measurement requirements than those satisfied by a counter. These measurements are often of the same basic parameters of frequency, time, and phase, but with the added element of being variable over time. For example, digital radios often modulate either frequency or phase. In frequency shift keying (FSK), one frequency represents a logic “0” and another frequency represents a “1.” It is sometimes necessary to view frequency shifts in a frequency-versus-time graph. A class of instruments called frequency and time-interval analyzers (FTIA) can be used for this measurement. An FTIA provides a display that is similar to the output of a frequency discriminator displayed on an oscilloscope, but with much greater precision and accuracy (Fig. 25.1.3). This type of instrument is also referred to as a modulation domain analyzer.

Other applications to which an FTIA can be applied include voltage-controlled oscillator (VCO) analysis, phase lock loop (PLL) analysis, and the tracking of frequency-agile or hopping systems.

A variation of the frequency and time-interval analyzer simply measures time. These are called time-interval analyzers (TIA), and they tend to be focused on data and clock jitter applications. They are generally a subset of an FTIA, although some may offer special arming and analysis features tuned to particular applications, such as testing the read/write channel timing in hard-disk drives.
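Conceptually, the frequency-versus-time record is built by timestamping successive signal cycles and converting each interval to a frequency. A minimal sketch follows; the timestamps are invented to mimic an FSK burst of the kind shown in Fig. 25.1.3.

```python
# Frequency-vs-time record from cycle timestamps, as an FTIA displays.
# The timestamps below are invented to mimic a two-tone FSK burst.

def freq_vs_time(timestamps_s, cycles_per_sample=1):
    """Return (time, frequency) pairs from successive edge timestamps."""
    return [(t0, cycles_per_sample / (t1 - t0))
            for t0, t1 in zip(timestamps_s, timestamps_s[1:])]

ts = [0.0, 10e-6, 20e-6, 27.5e-6, 35e-6]   # ~100 kHz then ~133 kHz
for t, f in freq_vs_time(ts):
    print(f"t = {t * 1e6:5.1f} us   f = {f / 1e3:7.1f} kHz")
```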

SPECIFICATIONS AND SIGNIFICANCE

Product data sheets and manuals often provide fairly detailed specifications. This section covers how to interpret some of the more important specifications.

Universal Counter Specifications

Many of the specifications found in a universal counter data sheet will also be found in other types of counters and analyzers.


FIGURE 25.1.3 The display of a frequency-shift-keyed digital radio signal on a frequency and time-interval analyzer. Although it resembles an oscilloscope display, it is actually showing frequency vs. time, not voltage vs. time. The two frequencies, representing 0 and 1, are about 33 kHz apart. Each bit is about 15 µs long.

Sensitivity and Bandwidth. Sensitivity refers to how small a signal the instrument can measure. It is usually given in mV rms and peak-to-peak (p-p) values; in RF and microwave counters it is given in dBm. Because of noise and the broadband nature of a counter's front end, the sensitivity is rarely better than 10 mV rms or 30 mV p-p. For frequency measurements, sensitivity can be important if measuring a signal off the air with an antenna. Under most situations, however, there is sufficient signal strength to measure, and sensitivity is not an issue.

The counter's front-end bandwidth is not always the same as the counter's ability to measure frequency. If the bandwidth is lower than the top frequency range, the sensitivity at the higher frequencies will be reduced. This is not necessarily a problem if there is enough signal strength at the frequencies being measured. For timing measurements, however, bandwidth can affect accuracy. An input channel with a low front-end bandwidth can restrict the minimum pulse width of the signal it can measure, and, for very precise measurements, it can distort the signal. So, if precise timing measurements are being made, it is best to get a counter with as high a bandwidth as possible.

Resolution. Counters are commonly used because they can provide reasonably good resolution in a quick, affordable way. This makes resolution an important specification to understand. As will be seen, frequency measurement resolution is one of the specifications most dependent on the instrument's architecture.

The older basic counter architecture is still used in very low-cost counters and in some DVMs, oscilloscopes, and spectrum analyzers that include a frequency or time function. This architecture has the advantages of a simple low-cost design and rapid measurement, but it has lower resolution and reduced measurement flexibility. For example, frequency resolution is limited, particularly for low frequencies, where resolution finer than 1 Hz is desired (assuming measurement times greater than 1 s are impractical).

The basic counter was invented well before digital logic was able to economically perform an arithmetic division. Division is needed because a general way of calculating frequency is to divide the number of periods (called events) counted by the time it took to count them (events/time = frequency). The basic counter accomplishes the division by restricting the denominator (time) to decade values. After it became practical to perform a full division, the instrument no longer had to be restricted to decade gate times. More importantly, the time base could be synchronized to the input signal and not the other way around.


This means that the resolution of the measurement can be tied to the time base (usually time-interpolated) and not to the input frequency. This particularly improves the measurement of low frequencies (see Fig. 25.1.4). This class of instrument is called a reciprocal counter.
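The arithmetic difference between the two architectures can be made concrete with a short sketch; the input frequency below is illustrative, and the 1-ns interpolator figure matches Fig. 25.1.4.

```python
# Basic vs. reciprocal counter arithmetic (illustrative values).

def basic_counter(f_in_hz, gate_s=1.0):
    """Fixed decade gate: counts whole input cycles, so the resolution
    is one count of the input frequency."""
    events = int(f_in_hz * gate_s)
    return events / gate_s

def reciprocal_counter(f_in_hz, gate_s=1.0, interpolator_s=1e-9):
    """Gate synchronized to the input: counts whole input periods and
    divides by the interpolated time-base reading."""
    events = max(1, round(f_in_hz * gate_s))
    true_time_s = events / f_in_hz
    measured_time_s = round(true_time_s / interpolator_s) * interpolator_s
    return events / measured_time_s

print(basic_counter(50.0000037))        # -> 50.0       (1-Hz resolution)
print(reciprocal_counter(50.0000037))   # -> ~50.0000037 (about 9 digits)
```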

FIGURE 25.1.4 Frequency resolution of a reciprocal counter with 1-ns interpolators vs. a basic counter, given a 1-s gate time. The reciprocal counter has the advantage of a constant number of digits of display, no matter what the input frequency.

Microwave Counter Specifications

A few specifications are unique to microwave counters, and some of the most common are covered here.

Acquisition Time. Because of the way most microwave counters operate, using a harmonic sampler technique or a YIG preselector, a time lapse is required to determine where in the spectrum the signal is. This is called acquisition time and is usually in the 50 to 250 ms range. YIG-based counters generally exhibit longer acquisition times. If there is already a general idea of where the signal is, some counters work in a manual acquisition mode. The counter then skips the acquisition phase of the measurement and makes the measurement after positioning the LO to the manually specified area, taking less time (< 20 ms).

FM Tolerance. This is largely determined by the IF bandwidth of the counter or the bandwidth of the preselector. If the input signal's FM deviation is such that the signal occasionally falls outside of these bandwidths, the measurement will be in error. Also, because the IF or preselector is seldom exactly centered on the signal, a margin is accounted for that narrows the FM tolerance of the instrument. Counters that use prescaler technology do not have a problem with FM tolerance, or with acquisition time for that matter. The trade-off is some loss of resolution compared with a down-converter-type counter, but this may not be a problem if advanced interpolators are used. Prescaling to 12.4 GHz is available in some counters today.

Power. Some microwave counters have the ability to measure the signal's power. This is useful for checking whether a transmitter is within specifications or for finding faults in cabling or connectors. Two main methods are used: true rms and detectors. The former produces much better accuracy, down to 0.1 dB; the latter is less expensive but has an accuracy of around 1.5 dB.
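The dBm figures quoted for power and sensitivity relate to rms voltage in a 50-Ω system as follows; a small worked sketch:

```python
# Converting an rms voltage into dBm in a 50-ohm system.
import math

def dbm_from_vrms(v_rms, r_ohms=50.0):
    p_watts = v_rms ** 2 / r_ohms
    return 10.0 * math.log10(p_watts / 1e-3)   # dB relative to 1 mW

print(dbm_from_vrms(0.2236))   # ~0 dBm (223.6 mV rms across 50 ohms)
print(dbm_from_vrms(0.010))    # 10 mV rms -> about -27 dBm
```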

Frequency and Time Interval Analyzer Specifications Although many specifications are similar to those found in a counter, two are unique and need some explanation. Sample Rate. Also called measurement rate, this is how rapidly the instrument can make measurements. The speed needed depends on the phenomenon being measured. For example, to measure the 5-MHz jitter bandwidth of a clock circuit, the instrument must sample at 10 MHz (or faster). Sample rates in the 1-MHz range are good for moderate data-rate communications, VCO testing, and electromechanical measurements. Sample rates of 10 MHz (and higher) are better for high data-rate communications, data storage, and radar applications. Memory Depth. Memory depth for storing the digitized data is the other important instrument parameter. Some applications only need a short amount of memory to take a look at transient signals, such as the step response of a VCO. Other applications need a very deep memory, where probably the most extreme is the surveillance of unknown signals.
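As a back-of-the-envelope aid, the sample-rate and memory-depth trade can be written out directly; the function names and example numbers below are ours, chosen to echo the 5-MHz jitter example above.

def required_sample_rate(signal_bw_hz):
    # Sampling theorem: sample at no less than twice the bandwidth of interest.
    return 2 * signal_bw_hz

def required_memory_depth(sample_rate_hz, capture_seconds):
    # Stored samples = rate x observation window.
    return int(sample_rate_hz * capture_seconds)

print(required_sample_rate(5e6))           # 10 MHz for a 5-MHz jitter bandwidth
print(required_memory_depth(10e6, 0.25))   # 2,500,000 samples for a 250-ms capture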

CHAPTER 25.2

INSTRUMENTS FOR MEASURING CURRENT, VOLTAGE, AND RESISTANCE Scott Stever

INTRODUCTION Voltage and current, both ac and dc, and resistance are quantities commonly measured by electronic instruments. In the simplest case, each measurement type is performed by an individual instrument—a voltmeter measures voltage, an ammeter measures current, and an ohmmeter measures resistance. These instruments have many elements in common. The classical electromechanical meters are the easiest-to-use instruments for performing these measurements. A multimeter combines these instruments, and sometimes others, into a single, general-purpose multifunction instrument.

Categories of Meters There are two primary types of meters—general purpose and specialty. General-purpose meters measure several types of electrical parameters such as voltage, resistance, and current. A digital multimeter (DMM) is an example of a general-purpose meter (Fig. 25.2.1). Specialty meters are generally optimized to measure a single parameter very well, emphasizing measurement accuracy, bandwidth, or sensitivity. Listed in Table 25.2.1 are various types of meters and their measuring capabilities. The general-purpose multimeter is a flexible, cost-effective solution suitable for most common measurements. Although DMMs can achieve performance rivaling the range and sensitivity of specialty meters while delivering superior flexibility and value, the presence of many display digits on a digital meter does not automatically mean that the meter has high accuracy. Meters often display significantly more digits of resolution than their accuracy specifications support. This can be very misleading to the uninformed user.

General Instrument Block Diagram The function of a meter is to convert an analog signal into a human- or machine-readable equivalent. Analog signals may be quantities such as a voltage or current, ac or dc, or a resistance. Shown in Fig. 25.2.2 is a block diagram of a typical signal-conditioning process used with meters.


FIGURE 25.2.1 Multimeters are available with varying degrees of resolution and accuracy. Shown here are the Agilent Technologies 34401A, 34420A, and 3458A Digital Multimeters. They provide resolutions of 6½, 7½, and 8½ digits, respectively, along with corresponding increases in accuracy on both their voltage and resistance measurement ranges.

Signal Conditioning, Ranging, and Amplification The input signal must first pass through some type of signal conditioner that typically comprises switching, ranging, and amplification circuits, as shown in Fig. 25.2.2. If the input signal to be measured is a dc voltage, the signal conditioner may be composed of an attenuator for the higher voltage ranges and a dc amplifier for the lower ranges, whereas if the signal is an ac voltage, a converter is employed to change the ac signal to its equivalent dc value. Resistance measurements are performed by supplying a known dc current to an unknown resistance, thereby converting the unknown resistance value to an easily measurable dc voltage. In nearly all cases, the input signal switching and ranging circuits, along with the amplifier circuits, convert the unknown quantity to a dc voltage that falls within the measuring range of the analog-to-digital converter (ADC).

TABLE 25.2.1 Meters—Types and Features

Type of meter      Multifunction   Measuring range                          Frequency range   Speed, max readings/second   Best accuracy   Digits

General purpose
Handheld DMM       Y               10 µV–1000 V; 1 nA–10 A; 10 mΩ–50 MΩ     20 Hz–20 kHz      2                            0.1%            3½–4½
Bench DMM          Y               10 µV–1000 V; 1 nA–10 A; 10 mΩ–50 MΩ     20 Hz–100 kHz     10                           0.01%           3½–4½
System DMM         Y               10 nV–1000 V; 1 pA–1 A; 10 µΩ–1 GΩ       1 Hz–10 MHz       50–100,000                   0.0001%         4½–8½

Specialty
ac Voltmeter       N               100 µV–300 V                             20 Hz–20 MHz      1–10                         0.1%            3½–4½
Nanovoltmeter      N               1 nV–100 V                                                 1–100                        0.005%          3½–7½
Picoammeter        N               10 fA–10 mA                                                1–100                        0.05%           3½–5½
Electrometer       pA, high Ω
Microohmmeter      N               1 Ω–100 MΩ                                                 1–100                        0.05%           3½–4½
High resistance    N               >10 TΩ                                                     1–10                         0.05%           3½–4½


FIGURE 25.2.2 Generalized block diagram of most modern meters.

Analog-to-Digital Conversion The role of the ADC is to transform a prescaled dc voltage into digits. For example, the ADC for a 6½-digit resolution (21-bit) instrument is capable of producing over 2.4 million unique reading values. You can think of this as a bar chart with 2.4 million vertical bars, with each bar increasing in size from the previous bar by an identical amount. Converting the essentially infinite resolution of the analog input signal to a single bar in our chart is the sole function of the ADC. In the process, the continuum of analog input values is partitioned, or quantized, into 2.4 million discrete values in our example. The ADC used in a meter governs some of its most basic characteristics. These include its measurement resolution, its speed, and in some cases, its ability to reject spurious noise. The various methods used for analog-to-digital conversion can be divided into two groups—integrating and nonintegrating. Integrating techniques measure the average input value over a relatively long interval, while nonintegrating techniques sample the instantaneous value of the input—plus noise—during a very short interval. ADCs are designed strictly for dc voltage inputs. They are single-range devices, some exhibiting a 3-V full-scale input, while others have a 12-V full-scale input. For this reason, the input switching and ranging circuits must attenuate higher voltages and amplify lower voltages to enable the meter to provide a selection of ranges.
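The bar-chart picture maps directly onto a few lines of arithmetic. This sketch uses an assumed 10-V full scale together with the 2.4-million-count figure from the text; it is not a model of any particular converter.

FULL_SCALE = 10.0        # volts; assumed ADC range
COUNTS = 2_400_000       # distinct output codes ("bars") for 6 1/2 digits

def adc_reading(v_in):
    lsb = FULL_SCALE / COUNTS      # size of one quantization step (~4.2 uV)
    code = round(v_in / lsb)       # nearest discrete level
    return code * lsb              # value the meter reports

print(adc_reading(5.1234567))      # ~5.123458 V; the residue is quantization error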

Managing the Flow of Information The first three blocks in Fig. 25.2.2 combine to produce a meter’s overall analog performance characteristics—measuring functions, ranges, sensitivity, reading resolution, and reading speed. The two microprocessor blocks manage the flow of information within the instrument, ensuring that the various subsystems are properly configured and that internal computations are performed in a systematic and repeatable manner. Convenience features such as automatic range selection are managed by these microprocessors. Electrical isolation is provided between the earth-referenced outside world and the sensitive measuring circuits. The earth-referenced microprocessor also acts as a communications interpreter, receiving keyboard and programming instructions and managing the outward flow of data to the display or the IEEE-488 computer interface.

DC Voltage Measurement Techniques Signal conditioning and analog-to-digital conversion circuits have the greatest influence on the characteristics of a dc meter. The ADC measures over only a single range of dc voltage and it usually exhibits a relatively low input resistance. To configure a useful dc meter, a front end is required to condition the input before


the analog-to-digital conversion. Signal conditioning increases the input resistance, amplifies small signals, and attenuates large signals to produce a selection of measuring ranges.

Signal Conditioning for DC Measurements Input signal conditioning for dc voltage measurements includes both amplification and attenuation. Shown in Fig. 25.2.3 is a typical configuration for a dc input switching and ranging section of a meter. The input signal is applied directly to the amplifier input through switches K1 and S1 for lower voltage inputs—generally those less than 12 V dc. For higher voltages, the input signal is connected through relay K2 to a precision 100:1 divider network formed by resistors R4 and R5. The low-voltage output of the divider is switched to the amplifier input through switch S2. The gain of amplifier A1 is set to scale the input voltage to the full-scale range of the ADC, generally 0 to ±12 V dc. If the nominal full-scale input to the ADC is 10 V dc, the dc input attenuator and amplifier would be configured to amplify the 100-mV range by 100 times and to amplify the 1-V range by 10 times. The input amplifier would be configured for unity gain (×1) for the 10-V measuring range. For the upper ranges, the input voltage is first divided by 100, and then gain is applied to scale the input back to 10 V for the ADC—inside the meter, for instance, 100 V dc is reduced to 1 V dc for the amplifier, whereas 1000 V dc is divided down to become 10 V dc. For the lower voltage measuring ranges, the meter’s input resistance is essentially that of amplifier A1. The input amplifier usually employs a low bias current—typically less than 50 pA. It is often an FET input stage providing an input resistance greater than 10 GΩ. The meter’s input resistance is determined by the total resistance of the 100:1 divider for the upper voltage ranges. Most meters provide a 10-MΩ input resistance for these ranges.
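The divide-then-amplify bookkeeping above reduces to a small table. The Python sketch below assumes the 10-V full-scale ADC of the example; switch and relay details are omitted, and the range table is our own summary of the text.

# range -> (input divider, amplifier gain), per the example in the text
RANGES = {
    0.1:    (1,   100),   # 100-mV range: x100 gain
    1.0:    (1,   10),    # 1-V range: x10 gain
    10.0:   (1,   1),     # 10-V range: unity gain
    100.0:  (100, 10),    # 100 V / 100 = 1 V, then x10 = 10 V at the ADC
    1000.0: (100, 1),     # 1000 V / 100 = 10 V, unity gain
}

def adc_input(v_in, range_v):
    divider, gain = RANGES[range_v]
    return v_in / divider * gain       # voltage presented to the ADC

print(adc_input(0.087, 0.1))     # 8.7 V at the ADC on the 100-mV range
print(adc_input(523.0, 1000.0))  # 5.23 V at the ADC on the 1000-V range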

FIGURE 25.2.3 Simplified schematic of the input switching, measuring range selection, and amplifier for a dc voltage meter.


Amplifier Offset Elimination (Autozero) The main performance limitation of the dc signal-conditioning section is usually its offset voltage. This affects the meter’s ability to read zero volts with a short applied. Most meters employ some method for automatically zeroing out amplifier offsets. Switch S3 in Fig. 25.2.3 is used to periodically short the amplifier input to ground to measure the amplifier offset voltage. The measured offset is stored and then subtracted from the input signal measurement to remove amplifier offset errors. Switches S1 and S2 are opened simultaneously during the offset measurement to avoid shorting the meter’s input terminals together. In a multifunction instrument, all measurements are eventually converted into a dc voltage, which is measured by an ADC. Other dc signals are often routed to the ADC through a dc voltage measuring front end. Switch S4 in Fig. 25.2.3 could be used to measure the dc output of an ac voltage function or a dc current measuring section.
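In outline, the autozero cycle is just measure, store, and subtract. The class below is a toy model of that sequence (our own construction, not instrument firmware).

class DcFrontEnd:
    def __init__(self, offset_volts):
        self.offset = offset_volts     # amplifier offset error
        self.zero = 0.0                # stored autozero correction

    def _raw(self, v_in):
        return v_in + self.offset      # what the ADC actually digitizes

    def autozero(self):
        # S3 closed: amplifier input shorted to ground, offset measured and stored.
        self.zero = self._raw(0.0)

    def measure(self, v_in):
        return self._raw(v_in) - self.zero

fe = DcFrontEnd(offset_volts=35e-6)
fe.autozero()
print(fe.measure(0.0))                 # reads ~0 V with a short applied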

AC VOLTAGE MEASUREMENT TECHNIQUES The main purpose of an ac front end is to change an incoming ac voltage into a dc voltage that can be measured by the meter’s ADC. The type of ac voltage to dc voltage converter employed in a meter is quite critical. There are vast differences in behavior between rms, average-responding, and peak-responding converters.

Signal Conditioning for AC Measurements The input signal conditioning for ac voltage measurements includes both attenuation and amplification, similar to the dc voltage front end already discussed. Shown in Fig. 25.2.4 are typical input switching and ranging circuits for an ac-voltage instrument. Input coupling capacitor C1 blocks the dc portion of the input signal so

FIGURE 25.2.4 The input switching and ranging sections of a typical ac voltage measurement section—simplified schematic.


that only the ac component is measured by the meter. Ranging is accomplished by combining signal attenuation from first-stage amplifier A1 and gain from second-stage amplifier A2. The first stage implements a high-input-impedance, typically 1-MΩ, switchable compensated attenuator. The value of capacitor C3 is adjusted so that the R2C3 time constant precisely matches the R1C2 time constant, yielding a compensated attenuator in which the division ratio does not vary with frequency. Switch S1 is used to select greater attenuation for the higher input voltage ranges. The second stage provides variable-gain, wide-bandwidth signal amplification to scale the input to the ac converter to the full-scale level. The output of the second stage is connected to the ac converter circuit. Residual dc offset from the attenuator and amplifier stages is blocked by capacitor C5. An ac-voltage front end, similar to the one discussed above, is also used in ac-current-measuring instruments. Shunt resistors are used to convert the ac current into a measurable ac voltage. Current shunts are switched to provide selectable ac current ranges. Amplifier bandwidth and ac converter limitations are the main differences between various ac front ends. As mentioned earlier, the type of ac-to-dc converter circuit has a profound effect on overall measurement accuracy and repeatability. True rms converters are superior to both average-responding and peak-responding ac converters in almost every application.
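The practical difference between converter types is easy to demonstrate numerically. The sketch below compares a true-rms computation with an average-responding one (calibrated with the sine-wave form factor of about 1.111) on a sine and a square wave; the waveforms are synthetic examples.

import math

def true_rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def average_responding(samples):
    # Rectified average scaled by the sine-wave form factor pi/(2*sqrt(2)) ~ 1.111.
    return 1.111 * sum(abs(s) for s in samples) / len(samples)

n = 10000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
square = [1.0 if i < n // 2 else -1.0 for i in range(n)]
print(true_rms(sine), average_responding(sine))      # ~0.707 vs ~0.707: both agree
print(true_rms(square), average_responding(square))  # 1.000 vs ~1.111: ~11% error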

CURRENT MEASUREMENT TECHNIQUES An ammeter measures the current flowing through it, while approximating a short circuit between its terminals. A conventional ammeter is connected in series with the circuit or device being measured so that current flows through both the meter and the test circuit. There are two basic techniques for making current measurements: in-circuit methods and magnetic field sensing methods.

In-Circuit Methods In-circuit current sensing meters employ either a current shunt or a virtual ground amplifier technique, similar to those shown in Fig. 25.2.5a and b. Shunt-type meters are very simple. A resistor RS, shown in Fig. 25.2.5a, is connected across the input terminals so that a voltage drop proportional to the input current is developed. The value of RS is kept as low as possible to minimize the instrument’s burden voltage, or IR drop. This voltage drop is sensed by an internal voltmeter and scaled to the proper current value. Virtual-ground-type meters are generally better suited for measuring smaller current values—usually from 100 mA down to below 1 pA. These meters rely on low-noise, low-bias-current operational amplifiers to convert the input current to a measurable voltage, as illustrated in Fig. 25.2.5b. Negligible input current flows into the negative input terminal of the amplifier. Therefore the input current is forced to flow through the amplifier’s feedback resistor Rf, causing the amplifier output voltage to vary by IRf. The meter burden voltage, which is the voltage drop from input to LO, is maintained near 0 V by the high-gain amplifier forming a virtual ground. Since the amplifier must source or sink the input current, the virtual ground technique is generally limited to low current measurements.
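The two in-circuit techniques reduce to Ohm's-law one-liners; the component values below are arbitrary illustrations.

def shunt_burden_voltage(i_in, r_shunt):
    # Shunt method (Fig. 25.2.5a): the measured drop is also the burden voltage.
    return i_in * r_shunt

def feedback_output_voltage(i_in, r_feedback):
    # Virtual-ground method (Fig. 25.2.5b): the input current flows through Rf,
    # so the amplifier output moves by I*Rf while the input stays near 0 V.
    return -i_in * r_feedback

print(shunt_burden_voltage(1.0, 0.1))        # 1 A across 0.1 ohm -> 0.1-V burden
print(feedback_output_voltage(10e-9, 10e6))  # 10 nA through 10 Mohm -> -0.1 V out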

Magnetic Field Sensing Methods Current measurements employing magnetic field sensing techniques are extremely convenient because measurements can be performed without interrupting the circuit or producing significant loading errors. Since there is no direct contact with the circuit being measured, complete dc isolation is also ensured. These meters use a transducer—usually a current transformer or a solid-state Hall-effect sensor—to convert the magnetic field surrounding a current-carrying conductor into a proportional ac or dc signal. Sensitivity can be very good, since simply placing several loops of the current-carrying conductor through the probe aperture will increase the measured signal level by the same factor as the number of turns.


FIGURE 25.2.5 Two common methods for in-circuit current measurements. (a) Shunt resistor Rs is connected across the input terminals, developing a voltage proportional to the input current. (b) The input current is forced to flow through Rf , while the meter burden voltage is limited to the drop across the fuse and the amplifier offset voltage.

Alternating Current Alternating current measurements are very similar to dc measurements. However, ac measurements employ shunt-type current-to-voltage converters almost exclusively. The output of the current-to-voltage sensor is measured by an ac voltmeter. Signal and ac converter issues discussed in “AC Voltage Measurement Techniques” are relevant to ac measurements as well. The input terminals of in-circuit ac meters are always direct coupled—ac + dc coupled—to the shunt so that the meter maintains dc continuity in the test circuit. The meter’s internal ac voltmeter section can be either ac coupled or ac + dc coupled to the current-to-voltage converter.

RESISTANCE MEASUREMENT TECHNIQUES An ohmmeter measures the dc resistance of a device or circuit connected to its input. Resistance measurements are performed by supplying a known dc current to an unknown resistance, thereby converting the resistance value to an easily measured dc voltage. Most meters use an ohms converter technique similar to the current-source or voltage-ratio types shown in Fig. 25.2.6.

Signal Conditioning for Resistance Measurements The current-source method shown in Fig. 25.2.6a employs a known current source value I that flows through the unknown resistor when it is connected to the meter’s input. This produces a dc voltage proportional to the unknown resistor value: by Ohm’s law, E = IR. Thus dc voltmeter input-ranging and signal-conditioning circuits


FIGURE 25.2.6 Ohms converter circuits used in meters. (a) The current-source ohms converter employs a constant current source, forcing current I through unknown resistance R, developing a voltage to be measured by a dc voltage front end. (b) The voltage-ratio type ohms converter calculates the unknown resistor value R from dc voltage measurements in a voltage divider circuit.

are used to measure the voltage developed across the resistor. The result is scaled to read directly in ohms. Shown in Fig. 25.2.6b is the voltage-ratio-type ohms converter technique. This method uses a known voltage source Vref and a known range resistor Rrange to compute the unknown resistor value. The range resistor and the unknown resistor form a simple voltage divider circuit. The meter measures the dc voltage developed across the unknown resistor. This voltage, along with the values of the internal voltage source and range resistor, is used to calculate the unknown resistor value. In practice, meters have a variety of resistance-measuring ranges. To achieve this, the ohms test current—or range resistor—is varied to scale the resulting dc voltage to a convenient internal level, usually between 0.1 and 10 V dc. This measurement is relatively insensitive to lead resistances Rl in series with the high-impedance input of the dc voltmeter. Voltage drops in the current source leads do not affect the voltmeter measurement; however, they can affect the accuracy of the current source itself.
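Both converters come down to solving Ohm's law or the divider equation for R. A numeric sketch, with made-up source and range values:

def current_source_ohms(i_test, v_measured):
    # Current-source converter (Fig. 25.2.6a): E = I*R, so R = E / I.
    return v_measured / i_test

def voltage_ratio_ohms(v_ref, r_range, v_unknown):
    # Divider converter (Fig. 25.2.6b): Vx = Vref * R / (Rrange + R); solved for R.
    return r_range * v_unknown / (v_ref - v_unknown)

print(current_source_ohms(1e-3, 4.7))        # 4.7 V at 1 mA -> 4700 ohms
print(voltage_ratio_ohms(10.0, 10e3, 3.2))   # -> ~4705.9 ohms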

Two-Wire Sensing The ohms converters discussed above use two-wire sensing. When the same meter terminals are used to measure the voltage dropped across the unknown resistor as are used to supply the test current, a meter is said to use a two-wire ohms technique. With two-wire sensing, the lead resistances Rl shown in Fig. 25.2.7a are indistinguishable from the unknown resistor value, causing potentially large measurement errors for lower-value resistance measurements.


FIGURE 25.2.7 (a) Simplified schematic for two-wire sensing. Lead resistances Rl are inseparable from the unknown resistance measurement. (b) Simplified schematic for four-wire sensing.

The two-wire technique is widely used by all types of ohmmeters because of its simplicity. Often, meters provide a relative or null math function to allow lead resistances to be measured and subtracted from subsequent resistance measurements. This works well unless the lead resistances vary due to temperature or connection integrity. The four-wire sensing technique, or Kelvin sensing, is designed to eliminate lead resistance errors.

Four-Wire Sensing The four-wire sensed-resistance technique is the most accurate way to measure small resistances. Lead resistances and contact resistances are virtually eliminated using this technique. A four-wire converter senses the voltage drop across only the unknown resistor. The voltage drops across the lead resistances are excluded from the measurement, as shown in Fig. 25.2.7b. The four-wire converter employs two separate pairs of connections to the unknown resistor. One connection pair, often referred to as the source leads, supplies the test current that flows through the unknown resistor, similar to the two-wire measurement case. Voltage drops are still developed across the source lead resistances. A second connection pair, referred to as the sense leads, connects directly across the unknown resistor. These leads connect to the input of a dc voltmeter. The dc voltmeter section is designed to exhibit an extremely large input resistance so that virtually no current flows in the sense input leads. This enables the meter to measure only the voltage drop across the unknown resistor, thereby removing the drops in both the source leads and the sense leads from the measurement. Generally, allowable lead resistances are specified by the meter manufacturer, for two main reasons. First, the total voltage drop in the source leads is limited by the design of the meter, usually to a fraction of the measuring range being used. Second, the sense lead resistances will introduce additional measurement noise if they are allowed to become too large. Sense lead resistances below 1 kΩ usually contribute negligible additional error. The four-wire technique is widely used in situations where lead resistances can become quite large and variable. It is used almost exclusively for measuring lower resistor values in any application, especially for measuring values of 10 Ω or less. It is also used in automated test applications where cable lengths can be quite long and numerous connections or switches may exist between the meter and the device under test. In a multichannel system, the four-wire method has the obvious disadvantage of requiring twice as many switches and twice as many wires as the two-wire technique.
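The size of the error the four-wire scheme eliminates is worth seeing in numbers. A sketch, assuming equal lead resistances and an ideal voltmeter:

def two_wire_reading(r_unknown, r_lead):
    # The test current flows through both leads, so their drops are measured too.
    return r_unknown + 2 * r_lead

def four_wire_reading(r_unknown, r_lead):
    # Sense leads carry essentially no current, so only the unknown is seen.
    return r_unknown

print(two_wire_reading(1.0, 0.05))   # 1.10 ohms: a 10 percent error on a 1-ohm part
print(four_wire_reading(1.0, 0.05))  # 1.00 ohms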


CHAPTER 25.3

SIGNAL SOURCES Tomas Fetter

INTRODUCTION The simplest useful definition for a signal is an electrical voltage (or current) that varies with time. To characterize a signal, an intuitive yet accurate concept is to define the signal’s waveform. A waveform is easy to visualize by imagining the picture a pen, moving up and down in proportion to the signal voltage, would draw on a strip of paper being steadily pulled at right angles to the pen’s movement. Shown in Fig. 25.3.1 is a typical periodic waveform and its dimensions. A signal source, or signal generator, is an electronic instrument that generates a signal according to the user’s commands with regard to its waveform (Fig. 25.3.2). Signal sources serve the frequent need in engineering and scientific work for energizing a circuit or system with a signal whose characteristics are known.

KINDS OF SIGNAL WAVEFORMS Most signals fall into one of two broad categories—periodic and nonperiodic. Signal source instruments generate one or the other, and sometimes both. A periodic signal has a waveshape, which is repetitive: the pen, after drawing one period of the signal waveform, is in the same vertical position where it began, and then it repeats exactly the same drawing. A sine wave is the best-known periodic signal. By contrast, a nonperiodic signal has a nonrepetitive waveform. The best-known nonperiodic signal is random noise. The familiar sinusoid, illustrated in Fig. 25.3.3, is the workhorse signal of electricity. The simple mathematical representation of a sine wave can be examined to determine the properties that characterize it:

s(t) = A sin(2πft)

where s(t) = the signal, a function of time
      t = time, s
      A = peak amplitude of the signal, V or A
      f = signal frequency, cycles/second (Hz)
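Evaluating s(t) at a few instants makes the parameters concrete; the amplitude and frequency below are arbitrary example values.

import math

A = 1.5      # peak amplitude, V
f = 50.0     # frequency, Hz

def s(t):
    return A * math.sin(2 * math.pi * f * t)

for t in (0.0, 0.005, 0.010, 0.015, 0.020):   # one 20-ms period at 50 Hz
    print(t, s(t))    # 0, +1.5, 0, -1.5, 0 (to rounding)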

HOW PERIODIC SIGNALS ARE GENERATED Introduction Periodic signals are generated by oscillators. Some signal generators use the waveform produced directly by an oscillator. However, many signal generators combine a number of different techniques to increase the


FIGURE 25.3.1 Waveform of a typical periodic signal.

generator’s performance and capabilities. Key performance attributes include frequency range, frequency accuracy, amplitude accuracy, frequency switching speed, types of waveforms produced, and various modes of signal modulation.

Oscillators Electronic oscillators are the basic building blocks of signal generators. Any oscillator circuit fits into one of these broad categories:

• AC amplifier with filtered feedback
• Threshold decision circuit

FIGURE 25.3.2 Today’s signal generators must meet demanding specifications, including accuracy, spectral purity, and low phase noise. This Agilent Technologies 4422B Signal Generator covers 225 kHz to 4 GHz and can be tailored to meet changing requirements as technologies evolve.


FIGURE 25.3.3 A typical sine wave.

Feedback Oscillators The feedback technique is historically the original, and still the most common, form of oscillator circuit. Shown in Fig. 25.3.4 are the bare essentials needed for the feedback oscillator. The output from the amplifier is applied to a frequency-sensitive filter network. The output of the network is then connected to the input of the amplifier. Under certain conditions, the amplifier output signal, passing through the filter network, emerges as a signal which, if supplied to the amplifier input, produces the output signal. Because of the feedback connection, the circuit is capable of sustaining a particular output signal indefinitely. This is, in fact, an oscillator. The circuit combination of the amplifier and the filter is called a feedback loop. To understand how the combination can oscillate, mentally break open the loop at the input to the amplifier; this is called the open-loop condition. The open loop begins at the amplifier input and ends at the filter output. Here are the particular criteria that the open loop must satisfy in order that the closed loop will generate a sustained signal at some frequency f0:

1. The power gain through the open loop (amplifier power gain times filter power loss) must be unity at f0.
2. The total open-loop phase shift at f0 must be 0° (or 360°, 720°, …).

FIGURE 25.3.4 Oscillator, using an amplifier and a filter to form a feedback loop.

Both criteria are formal statements of what was said previously: the loop must produce just the signal at the input of the amplifier needed to maintain the amplifier output. Criterion 1 specifies the amplitude and criterion 2 the phase of the requisite signal at the input. When the above criteria are met, the poles of the oscillator lie on the jω, or imaginary, axis when plotted on the complex s-plane, which corresponds to a stable-amplitude oscillation. The loop gain actually must be greater than unity for the oscillation to start and build to the final stable amplitude, which corresponds to poles in the right half plane. Once the final amplitude is reached, something must reduce the loop gain to exactly 1. In a gain-controlled oscillator, some form of level detection and a variable gain element are used as part of the oscillator circuit feedback loop to precisely sustain the conditions of oscillation, keeping the poles on the imaginary axis. Self-limiting oscillators, such as the single-transistor Colpitts


oscillator, rely on the nonlinear large-signal gain characteristics of the transistor to produce a stable-amplitude oscillation. As the signal amplitude increases to the point where the transistor goes into saturation or cutoff, the large-signal gain decreases automatically to the level necessary to ensure a loop gain of 1.

Threshold Decision Oscillators Threshold decision oscillators generally consist of an integrator or capacitor, which is driven from a current source switched by level detectors. The integrator signal ramps up to a preset high level. The current source is then reversed, and the integrator integrates down to a preset low level, where the process starts over. The result is a triangle wave. The triangle wave can be filtered or shaped to form a sine wave, or run through a high-gain limiting amplifier to produce a square wave. Threshold decision oscillators are generally used at frequencies below RF.
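A few lines of simulation show the ramp-and-reverse behavior; all component values are invented for illustration.

def triangle_oscillator(i_src=1e-3, cap=1e-6, v_low=-1.0, v_high=1.0,
                        dt=1e-5, n_steps=1000):
    v, direction, out = 0.0, +1, []
    for _ in range(n_steps):
        v += direction * (i_src / cap) * dt   # integrator: dV/dt = I/C
        if v >= v_high:
            direction = -1                    # upper level detector trips
        elif v <= v_low:
            direction = +1                    # lower level detector trips
        out.append(v)
    return out

wave = triangle_oscillator()
print(min(wave), max(wave))   # triangle wave swinging between roughly -1 V and +1 V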

The Colpitts oscillator The Colpitts oscillator (Fig. 25.3.5) and circuits derived from it that operate on the same basic principle are the most commonly used configurations in transistor oscillator design. The inductor L and the capacitors C, C1, and C2 form a parallel resonant circuit. The output voltage is fed back to the input in the proper phase to sustain oscillation via the voltage divider formed by C1 and C2, parts of which may be internal to the transistor itself. Typically bipolar silicon (Si) transistors are used up to 10 or 12 GHz and gallium arsenide (GaAs) FETs are usually selected for coverage above this range, though bipolar Si devices have been used successfully to 20 GHz. Bipolar Si devices generally have been favored for lower phase noise, but advances in GaAs FET design have narrowed the gap, and their superior frequency coverage has made them the primary choice for many designs.

Electrically Tuned Oscillators The use of an electrically tuned capacitor, such as CR in Fig. 25.3.6, enables the frequency of oscillation to be varied and to be phase-locked to a stable reference. A common technique for obtaining a voltage-variable capacitor is to employ a variable-capacitance diode, or varactor. This device consists of a reverse-biased junction diode with a structure optimized to provide a large range of depletion-layer thickness variation with voltage as well as low losses (resistance) to ensure high Q. Major advantages of varactor oscillators are the potential for obtaining high tuning speed and the fact that a reverse-biased varactor diode does not dissipate dc power, as does a magnetically biased oscillator. Typical tuning rates in the microsecond range are realized without great difficulty.

FIGURE 25.3.5 Colpitts oscillator circuit.


FIGURE 25.3.6 Negative-resistance oscillator. CR, LR, and RR represent a resonator. The inductance L is required to achieve a negative resistance in Zin.

YIG-Tuned Oscillators High Q resonant circuits suitable for tuning oscillators over very broad frequency ranges can be realized with polished, single-crystal spheres of yttrium-iron-garnet (YIG). When placed in a dc magnetic field, ferromagnetic resonance is attained at a frequency that is a linear function of the field. The microwave signal is usually coupled into the sphere (typically about 0.5 mm in diameter) via a loop, as shown in Fig. 25.3.7. The equivalent circuit presented to the transistor is a shunt-resonant tank that can be tuned linearly over several octaves in the microwave range. Various rare earth dopings of the YIG material have been added to extend performance to lower frequency ranges in terms of spurious resonances (other modes) and nonlinearities at high power, but most ultra-wideband oscillators have been built above 2 GHz. Frequencies as high as 40 GHz have been achieved using pure YIG, and other materials, such as hexagonal ferrites, have been used to extend frequencies well into the millimeter range.
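The linear field-to-frequency tuning can be put in numbers using the commonly quoted gyromagnetic constant of about 2.8 MHz per oersted; that constant is a textbook value assumed here, not a figure from this handbook.

GAMMA_MHZ_PER_OE = 2.8     # assumed ferromagnetic-resonance tuning constant

def yig_resonance_mhz(field_oe):
    # Resonant frequency is a linear function of the applied dc field.
    return GAMMA_MHZ_PER_OE * field_oe

print(yig_resonance_mhz(1000))   # ~2800 MHz at 1 kOe
print(yig_resonance_mhz(3600))   # ~10 GHz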

Frequency Multiplication Another approach to signal generation involves the use of frequency multiplication to extend lower-frequency sources up into the microwave range. By driving a nonlinear device at sufficient power levels, harmonics of

FIGURE 25.3.7 YIG-tuned oscillator.


the fundamental are generated that can be selectively filtered to provide a lower-cost, less complex alternative to a microwave oscillator. The nonlinear device can be a diode driven through its nonlinear i versus v characteristic, or it can be a varactor diode with a nonlinear capacitance versus voltage. Another type of device consists of a pin structure (p-type and n-type semiconductor materials separated by an intrinsic layer) in which charge is stored during forward conduction as minority carriers. On application of the drive signal in the reverse direction, conductivity remains high until all the charge is suddenly depleted, at which point the current drops to zero in a very short interval. When this current is made to flow through a small drive inductance, a voltage impulse is generated once each drive cycle which is very rich in harmonics. Such step-recovery diodes are efficient as higher-order multipliers.
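Harmonic multiplication is arithmetic plus a filter; the drive frequency and target below are invented examples.

f_drive = 500e6    # fundamental driving the nonlinear device, Hz
target = 9.0e9     # desired microwave output, Hz

n = round(target / f_drive)                     # harmonic the filter must select
comb = [k * f_drive for k in range(1, n + 2)]   # harmonics the impulse produces
print(n, comb[n - 1])                           # 18th harmonic = 9.0 GHz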

Crystal Oscillator A crystal oscillator can be made by using a quartz crystal resonator as the frequency selective element in a transistor amplifier-based oscillator. The equivalent circuit of a crystal is a series resonant LC circuit, so the oscillator must be designed around a series resonator. A quartz crystal is a very high Q resonator that delivers two major benefits. First, the accuracy and stability of the frequency of the resonance of the crystal makes the crystal oscillator a very good frequency reference for a signal generator. Second, the very high Q, or narrow bandwidth, of the resonator allows very little of the broadband noise of the transistor, and other components, to modulate the oscillator signal. Noise that is present on an oscillator and affects the phase stability is called phase noise. Low phase noise is important if a signal is to be modulated onto a carrier signal because the phase noise will limit the dynamic range of the modulated signal.

Frequency Synthesis The methods by which frequencies can be generated using addition, subtraction, multiplication, and division of frequencies derived from a single reference standard are called frequency synthesis techniques. The accuracy of each of the frequencies generated becomes equal to the accuracy of the reference. Normally, an oscillator that covers a wide range of frequencies cannot, by itself, be tuned to an absolute frequency accurately. And an oscillator that is very accurate in absolute frequency, such as a crystal oscillator, cannot be tuned over a wide range of frequencies, owing to its high Q. Even if it could be tuned, the crystal oscillator would then lose its absolute frequency accuracy because some element in addition to the crystal would be determining the frequency of oscillation. However, if that wide-ranging oscillator is locked to the accurate fixed-frequency reference oscillator through synthesis techniques, then the absolute frequency accuracy of the reference oscillator can be extended to the full frequency range of the wide-tuning oscillator. Three classifications are commonly referred to: indirect synthesis, direct synthesis, and direct digital synthesis (DDS).

Direct Synthesis By assembling an assortment of frequency dividers, multipliers, mixers, and bandpass filters, an output m/n times the reference can be generated. There are many possible ways to do this, and the configuration actually used is chosen primarily to avoid strong spurious signals, which are low-level, nonharmonically related sinusoids. The principal advantage of the direct synthesis technique is the speed with which the output frequency may be changed. Shown in Fig. 25.3.8 is one way to produce a 13-MHz output from a 10-MHz reference. The inputs to the mixer are 10 and 3 MHz, the mixer produces sum and difference frequency outputs, and the bandpass filter on the output selects the 13-MHz sum. Notice that another bandpass filter could have been used to select the 7-MHz difference, if that were wanted. This switching speed is possible because no control loops with relatively slow time constants are directly involved in the signal generation; switching speeds in the microsecond range are easily achievable.
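Numerically, the mixing step of Fig. 25.3.8 looks like this; how the 3-MHz input is derived from the reference is not specified in the text, so the divide-and-multiply chain shown is an assumption.

f_ref = 10e6
f_aux = f_ref / 10 * 3                       # assumed: divide by 10, multiply by 3
products = (f_ref + f_aux, f_ref - f_aux)    # mixer output: sum and difference
print(products)                              # (13 MHz, 7 MHz)
f_out = max(products)                        # bandpass filter keeps the 13-MHz sum
print(f_out)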


FIGURE 25.3.8 Direct frequency synthesis.

Indirect Synthesis This name derives from the use of an oscillator other than the reference to generate the output. The technique places the oscillator in a phase-locked loop (PLL). The phase-locked loop starts with a tunable oscillator, or voltage-controlled oscillator (VCO). The output of the oscillator is divided down in frequency by a frequency divider with a divide number m. The divided-down signal is compared with a reference oscillator signal in a device called a phase detector, which produces a voltage output proportional to the difference in phase between the two input signals. The output voltage of the phase detector, or error signal, is fed into an integrator/loop filter, which produces the voltage necessary to tune the tunable oscillator so as to minimize the error voltage, that is, the phase difference between the reference oscillator and the divided-down tunable oscillator. If the phase difference between the two signals is minimized, then the two signals must also be at the same frequency; but the tunable oscillator is actually running at m times the reference oscillator frequency because of the divider. When this occurs, the two oscillators are said to be phase-locked together. If the loop filter is a true integrator, then the steady-state frequency error must be zero. The finest frequency resolution of a phase-locked loop is called the step size of the synthesizer. For a simple phase-locked loop with a single divider in the feedback path between the tunable oscillator and the phase detector, the output frequency is m times the frequency at the phase detector, and the step size is the frequency of the reference signal. For higher frequency resolution, or smaller step size, more sophisticated phase-locked loop topologies are required. Shown in Fig. 25.3.9 is a slightly more complicated phase-locked loop synthesizer in which the reference oscillator is also divided down, by a divide number n. The tunable oscillator is then locked to m/n times the reference frequency. If the dividers are programmable, then a wide range of


FIGURE 25.3.9 Indirect frequency synthesis.

output frequencies that are m/n times the reference can be generated by this technique. As the divide number n is increased, the resolution of the phase-locked loop increases because the step size becomes smaller. As n is increased to reduce the step size of the loop, m must be increased to maintain the same range of output frequencies. A practical limitation to resolution is reached because of the noise performance of the phase-locked loop. Any additive noise generated in the phase detector sees a gain of m to the output of the tunable oscillator within the bandwidth of the phase-locked loop. So effectively, as the resolution of the loop increases, so does the phase noise. This limitation is overcome either by summing multiple synthesis loops together so as to reduce the maximum divide number, and thus the noise gain of each, while preserving the resolution and frequency range, or by using sophisticated frequency interpolation techniques known as fractional frequency dividers, or fractional-n. When the tunable oscillator is divided by a divider that has both an integer and a fractional part, the resolution can be extended without increasing the noise gain simply by increasing the number of digits of fractional resolution. Fractional-n dividers work by switching between two adjacent whole divide numbers at a rate that time-averages to the correct fraction between those two numbers. The output frequency averages to exactly the correct frequency, but it also has a large frequency modulation due to the divider changing back and forth between two divide numbers. Several techniques can be used to remove this FM component and leave a spectrally pure output signal. A key advantage of indirect synthesis is the simplicity of the design required to produce a wide range of output frequencies locked to a fixed reference frequency by simply changing the divider numbers.
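The bookkeeping for both the integer-n loop and the fractional-n trick fits in a few lines; the divider values are arbitrary examples, and the accumulator below is one common way to realize the time-averaged fraction.

f_ref = 10e6

def pll_output(f_ref, m, n=1):
    # Locked loop of Fig. 25.3.9: f_out = (m/n) * f_ref; step size = f_ref / n.
    return f_ref * m / n

print(pll_output(f_ref, m=837, n=100))   # 83.7 MHz, in 100-kHz steps

# Fractional-n: dither between divide-by-83 and divide-by-84 so the average is 83.7.
frac, acc, divides = 0.7, 0.0, []
for _ in range(10):
    acc += frac
    if acc >= 1.0:
        acc -= 1.0
        divides.append(84)   # divide by N+1 this reference cycle
    else:
        divides.append(83)   # divide by N
print(sum(divides) / len(divides))       # averages to exactly 83.7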

Arbitrary Waveform Synthesizers In this technique, the complete period of some desired waveshape is defined as a sequence of numbers representing sample values of the waveform, uniformly spaced in time. These numbers are stored in digital memory and then, paced by the reference, repetitively read out in order. The sequence of numbers is then converted into a sequence of voltage levels using a digital-to-analog converter (DAC). The output


of the DAC is then filtered to produce an accurate replica of the numerically defined signal. Any arbitrary waveform can be produced with this technique with frequency components from DC up to half the sample frequency.
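In miniature, the technique is a lookup table paced by a clock. The table length and clock rate below are arbitrary; the f_out = clock/length relationship is the point.

import math

CLOCK_HZ = 1_000_000                 # assumed DAC sample clock
N = 64                               # samples stored per period

table = [math.sin(2 * math.pi * k / N) for k in range(N)]   # one numeric period

print(CLOCK_HZ / N)                  # output frequency: 15625 Hz

def dac_stream(n_samples):
    # Values handed to the DAC, read out cyclically and repetitively.
    for i in range(n_samples):
        yield table[i % N]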

TYPES OF SIGNAL GENERATORS Introduction The various types of signal generators are categorized by frequency range and functional capability. Many of the historical boundaries between generator types have changed over the years as the technology for generating electrical signals has changed. For example, the digitally based arbitrary waveform generator has mostly replaced the traditional function generator for producing complex, sinusoidal, and nonsinusoidal baseband waveforms. In addition to generating a signal at a specific frequency, a signal generator accurately controls the amplitude of the signal, sometimes over a wide range of amplitudes through use of precision level control circuitry and precision step attenuators. Absolute amplitudes accurate to within several dB over the full frequency range of a frequency generator are possible.

Audio Oscillators At audio frequencies (~1 Hz to 100 kHz), there is more emphasis on purity of waveform than at RF, so it is natural to use an oscillator circuit that generates a sine wave as its characteristic waveform. This is, in effect, an amplifier with filtered feedback. For audio frequencies, however, the elements of an LC resonant circuit for the filter become expensive and large, with iron- or ferrite-core construction necessary to realize the large values of inductance required. Another handicap is that such cored inductors exhibit nonlinear characteristics that increase harmonic output when they are used in a resonant filter circuit. Using resistors and capacitors, it is possible to establish a voltage transfer function that resembles that of a resonant circuit in both its phase and amplitude response. The four-element RC network in the dashed portion of Fig. 25.3.10 is

FIGURE 25.3.10 Wien bridge oscillator.


characterized by an input-output voltage ratio V1/V0 (called a transfer function) that varies with frequency. This transfer function has an amplitude peak of 1/3 at the resonant frequency f0 = 1/(2πRC), and its phase shift passes through zero at this frequency. Very low distortion audio oscillators use some form of level detection and variable gain as part of the oscillator circuit feedback loop to precisely sustain the conditions of oscillation, keeping the poles on the imaginary axis, without adding distortion. Self-limiting oscillators, such as the single-transistor Colpitts, rely on the nonlinear large-signal gain characteristics of the transistor to produce a stable amplitude oscillation at the expense of harmonic distortion performance. Because the gain of the active device in a self-limiting oscillator changes throughout the cycle of the sinusoidal oscillation to sustain an average loop gain of 1, the waveform is distorted, and harmonics of the frequency of oscillation are created. The effects of the distortion can be minimized by taking advantage of the filtering action of the resonator, but a highly linear gain-controlled oscillator can have much lower distortion.
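The 1/3 peak and zero phase shift are easy to verify numerically. The check below computes the Wien network's transfer function from its series and parallel RC arms; the component values are arbitrary.

import cmath, math

R, C = 10e3, 16e-9
f0 = 1 / (2 * math.pi * R * C)       # resonant frequency, ~995 Hz here

def H(f):
    w = 2 * math.pi * f
    zc = 1 / (1j * w * C)
    z_series = R + zc                 # series RC arm
    z_parallel = (R * zc) / (R + zc)  # parallel RC arm
    return z_parallel / (z_series + z_parallel)   # V1/V0

h = H(f0)
print(abs(h), math.degrees(cmath.phase(h)))       # ~0.3333 and ~0.0 degrees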

Function Generators The term function generator describes a class of oscillator-based signal sources in which the emphasis is on versatility. Primarily, this means providing a choice of output waveforms. It also connotes continuous tuning over wide bands with large maximum-to-minimum frequency ratios, sub-Hz to MHz frequencies, flat output amplitude, and sometimes modulation capabilities—frequency sweeping, frequency modulation (FM), and amplitude modulation (AM). With regard to their frequency accuracy and stability, function generators are inferior to sine-wave oscillators, but their performance is quite adequate for many applications. Some frequency synthesizers are also called precision function generators by their manufacturers. What this means is that circuitry has been added to the synthesizer to produce other waveforms in addition to sine waves. Early function generators were based on integrators driven from current sources switched by level detectors to form triangle-wave generators, a form of threshold decision circuit. The triangle waves were then either put through high-gain limiting circuits to produce square waves, or shaped with diode-shaping circuits to produce sine waves. Because the signal was based on the RC time constant of an integrating amplifier, traditional function generators are not synthesized. Modern function generators are based on digital arbitrary waveform generators and are capable of generating an almost infinite variety of waveforms. And because arbitrary waveform generators are based on a single clock frequency, they are technically synthesizers. Function generators, and now mostly arbitrary waveform generators, are often included in RF and microwave signal generators that offer signal modulation capabilities. Because these function generators create the baseband signal containing the information that is then modulated onto the RF or microwave carrier, they are often called baseband generators.

Pulse Generators A pulse generator is an instrument that can provide a voltage or current output whose waveform may be described as a continuous pulse stream. A pulse stream is a signal that departs from an initial level to some other single level for a finite duration, then returns to the original level for a finite duration. This type of waveform is sometimes referred to as a rectangular pulse. Pulse Nomenclature. The common terms that describe an ideal pulse stream are identified in Fig. 25.3.11. However, in order to describe real pulses, some additional terms are needed. These terms are explained in Fig. 25.3.12. If a pulse generator provides more than just a single channel output, the multiple outputs do not necessarily have to be time synchronous. The majority of these instruments have the capability of delaying the channels with respect to each other. This delay may be entered as an absolute value (e.g., 1 ns) or as a phase shift (e.g., 90° = 25 percent of the period). Sometimes, especially in single-channel instruments, these values are referenced to the trigger output. The maximum delay or phase range is limited and in most cases dependent on the actual pulse period. All pulse generators offer variable levels (HIL, LOL, AMP, OFS), variable pulse period (PER, FREQ), and variable pulse width (PWID, NWID, DCYC). Some instruments also offer variable transition times


FIGURE 25.3.11 Ideal pulse nomenclature. HIL = high level (voltage or current); LOL = low level (voltage or current); AMP = HIL − LOL = amplitude (voltage or current); OFS = (HIL + LOL)/2 = offset (voltage or current); A1 = LOL + X·AMP, A2 = HIL − X·AMP, where X is defined as 0.1 (normally) or as 0.2 (for ECL devices). PER = t7 − t2 = pulse period; FREQ = 1/PER = pulse frequency; PWID = t5 − t2 = positive pulse width; NWID = t7 − t5 = negative pulse width; DCYC = PWID/PER = duty cycle; TRISE = t3 − t1 = rise time; TFALL = t6 − t4 = fall time; t1 = rising edge crosses A1; t2 = rising edge crosses OFS; t3 = rising edge crosses A2; t4 = falling edge crosses A2; t5 = falling edge crosses OFS; t6 = falling edge crosses A1; t7 = next rising edge crosses OFS. Remarks: ECL stands for emitter-coupled logic. Rise and fall times are sometimes also referred to as transition times or edge rates.

(TRISE, TFALL). The minimum and maximum values and the resolution with which these parameters can be varied are described in pulse-generator data sheets.
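The nomenclature above reduces to simple arithmetic; the levels and timing below are example numbers.

HIL, LOL = 3.3, 0.0        # high and low levels, V
AMP = HIL - LOL            # amplitude
OFS = (HIL + LOL) / 2      # offset
X = 0.1                    # threshold fraction (0.2 for ECL devices)
A1 = LOL + X * AMP         # lower transition threshold
A2 = HIL - X * AMP         # upper transition threshold

PER, PWID = 10e-9, 4e-9    # 10-ns period, 4-ns positive width
FREQ = 1 / PER             # 100 MHz
NWID = PER - PWID          # 6-ns negative width
DCYC = PWID / PER          # 0.4 duty cycle
print(A1, A2, FREQ, NWID, DCYC)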

Special Pulse Generators This section describes pulse generators with special capabilities and distinguishes them from instruments called data generators. Programmable Data. Some pulse generators can generate not only repetitive pulse but also serial data streams. These instruments have a memory with a certain depth and an address counter in the signal path of

FIGURE 25.3.12 Real pulse generator nomenclature. NP = negative preshoot; PO = positive overshoot; PR = positive ringing; PP = positive preshoot; NO = negative overshoot; NR = negative ringing. Remark: Preshoot, overshoot, and ringing are sometimes also referred to as pulse distortions.


FIGURE 25.3.13 Block diagram of a single-channel pulse generator with data capabilities. 1: internal clock generation circuit; 2: start/stop generation circuit; 3: memory address bus (n lines); 4: width generation circuit (NRZ, RZ with variable duty cycle). Data stream length (selected by the user): m bits (m ≤ 2^n); memory depth: 2^n; reset address: 2^n − m. Remark: When the address counter value reaches 2^n − 1, the counter resets itself to the reset address (calculated and programmed by the microprocessor).

Thus, single-shot or repetitive data streams of a programmable length can be generated. The maximum length of such a data stream is limited by the memory depth. The strobe output provided by these instruments can be used to generate a trigger signal that is synchronous to one specific bit of this data stream, which is useful when this bit—or the corresponding reaction of the Device Under Test (DUT)—has to be observed with an oscilloscope. There are several different data formats, as shown in Fig. 25.3.14. The most popular one is Non-Return-to-Zero (NRZ), because the bandwidth of a data stream with that format and a data rate of, for instance, 100 Mbit/s is only 50 MHz (100 MHz if the format is RZ, R1, or RC). RC is used if the average duty

FIGURE 25.3.14 Different data formats. NRZ: non-return to zero (logic “0” = LOL); RZ: return to zero; R1: return to “1” (logic “1” = HIL); RC: return to complement (“0” → “1”; “1” → “0”).


cycle (see Fig. 25.3.11) of the data stream has to be 50 percent, no matter what the data being transmitted look like. For instance, if the data link that has to be tested cannot transmit a dc signal, an offset (or threshold) of 0 V (or 0 A, respectively) and the data format RC have to be chosen. Sometimes, when selecting RZ, users can even select the duty cycle of the signal. If they choose 25 percent, for instance, a logic “1” will return to “0” after 25 percent of the pulse period. In Fig. 25.3.14, a duty cycle of 50 percent was chosen.

Random Data. Some pulse generators with data capabilities can generate random data streams, which are also referred to as pseudorandom binary sequences or pseudorandom bit sequences (PRBS). These sequences are often generated according to an international standard; a generator sketch appears below.

Difference between Data/Word Generators and Pulse Generators. In contrast to pulse generators, most data generators are modular systems that can provide up to 50 channels or more. Data generators usually exhibit a much greater memory depth and more functionality in terms of sequencing—looping or branching. Some data generators can even provide asynchronous data streams (generated with several independent clock sources). This means that a pulse generator with data capabilities cannot replace a data generator, especially if more than one or two channels are needed. However, having some data capabilities in a pulse generator can be very beneficial.
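Returning to the random-data capability: PRBS patterns are typically produced with a linear-feedback shift register (LFSR). The sketch below generates the 127-bit PRBS-7 sequence from the feedback polynomial x^7 + x^6 + 1, one common standardized choice (e.g., in ITU-T O.150); the exact polynomial and seed a given instrument uses should be taken from its data sheet.

    # Minimal PRBS-7 sketch: a 7-bit LFSR with feedback taps at stages 7
    # and 6 (polynomial x^7 + x^6 + 1); the sequence repeats every 127 bits.
    def prbs7(seed=0x7F, nbits=127):
        state = seed & 0x7F                 # any nonzero 7-bit seed works
        out = []
        for _ in range(nbits):
            newbit = ((state >> 6) ^ (state >> 5)) & 1  # stage 7 XOR stage 6
            out.append(state & 1)           # output bit (LSB of the register)
            state = ((state << 1) | newbit) & 0x7F
        return out

    # The pattern has period 2**7 - 1 = 127 bits:
    assert prbs7(nbits=254)[:127] == prbs7(nbits=254)[127:]
    print(prbs7()[:16])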

Frequency Synthesizers

Starting in the early 1960s, frequency synthesis techniques grew steadily in instrument applications and have become the dominant technology in signal sources. Now almost all signal generators, whether baseband, RF, or microwave, are based on synthesis technology and offer very good frequency resolution, stability, and absolute frequency accuracy. Many also offer the option of locking to an external reference frequency, often 10 MHz. If all the instruments in a system are locked to the same external reference, there will be no relative frequency error among the instruments. If the external reference has very good absolute frequency stability and accuracy, such as that of a rubidium or cesium atomic clock frequency standard, the entire system will have the same stability and absolute frequency accuracy.

Radio-Frequency (RF) Signal Generators

Frequencies from 100 kHz to 1 GHz. This important class of instrument was developed in the late 1920s, when the need was recognized for producing radio receiver test signals with accurate frequencies (1 percent) and amplitudes (1 or 2 dB) over wide ranges. The basic block diagram of these instruments remained nearly unchanged until about 1970, when frequency synthesizer circuits began to supplant free-running oscillators in the waveform generator section of the instruments.

Shown in Fig. 25.3.15 is a simplified form of a typical RF sine-wave oscillator used in a signal generator. An amplifier producing an output current proportional to its input voltage (a transconductance amplifier) drives a tunable filter circuit consisting of a parallel LC resonant circuit with a magnetically coupled output tap. The tap is connected to the amplifier input with the proper polarity to provide positive feedback at the resonant frequency of the filter. To assure oscillation, it is necessary to design the circuit so that the gain around the loop—the amplifier gain times the filter transfer function—remains greater than unity over the desired frequency tuning range. But since the output signal amplitude remains constant only when the loop gain is exactly unity, the design must also include level-sensitive control of the loop gain.
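The startup condition can be checked numerically. A minimal sketch, assuming an idealized model of the circuit just described (a parallel LC tank with effective resonant resistance R and a coupled tap with voltage ratio k; all component values are illustrative):

    import math

    def resonant_frequency(L, C):
        """f0 = 1 / (2*pi*sqrt(L*C)) for a parallel LC tank."""
        return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

    def loop_gain_at_resonance(gm, R, k):
        # transconductance * tank impedance at resonance * tap ratio
        return gm * R * k

    L_tank, C_tank = 1e-6, 253e-12          # 1 uH, 253 pF -> about 10 MHz
    gm, R, k = 2e-3, 5e3, 0.2               # illustrative values
    print(f"f0 = {resonant_frequency(L_tank, C_tank) / 1e6:.2f} MHz")
    gain = loop_gain_at_resonance(gm, R, k)
    print("oscillation builds up" if gain > 1 else "no oscillation", gain)

A leveling loop would then reduce gm until the loop gain settles at exactly unity, which is the level-sensitive control mentioned above.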

Microwave Signal Generators

Frequencies from 1 to 30 GHz are usually designated as being in the microwave range. The lower boundary corresponds approximately to the frequency above which lumped-element modeling is no longer adequate for


FIGURE 25.3.15 Radio-frequency sine-wave oscillator.

most designs. The range above is commonly referred to as the millimeter range because wavelengths are less than 1 cm. It extends up to frequencies where the small wavelengths, compared with practically achievable physical dimensions, require quasi-optical techniques for transmission and for component design.

Previous generations of microwave sources were designed around vacuum tubes such as the klystron and the backward-wave oscillator. These designs were bulky, often required very high voltages and currents, and were subject to drift with environmental variations. More recently, compact solid-state oscillators employing field-effect transistors (FETs) or bipolar transistors and tuned by electrically or magnetically variable resonators have been employed, with additional benefits in ruggedness, reliability, and stability. Frequency synthesis techniques are now used to provide accurate, programmable sources with excellent frequency stability and low phase noise. Several varieties of microwave signal generators, each optimized for certain ranges of applications, are described in this section.

CW Signal Generators

For certain applications, a continuous-wave (CW) signal generator without modulation capability may be all that is required, providing a cost savings over more sophisticated models. Output power is important in typical applications such as those where the signal generator serves as a local oscillator (LO) driving a mixer in an up- or down-converter. The signal level needs to be high enough to saturate the mixer to assure good amplitude stability and low noise. If the phase noise of the converted signal is not to be degraded, the phase noise of the signal generator must be sufficiently low. Other applications for CW sources include exciters in transmitter testing, sources driving amplifiers, and modulators. Level accuracy, low spurious and harmonic signal levels, and good frequency resolution may all be important specifications in these and other applications.

Swept Signal Generators

Frequency-swept signal generators are used for the test and characterization of components and subsystems and for general-purpose applications. They can be used with scalar and vector network analyzers or with power meters or detectors. Sweep can be continuous across a span or in precise discrete steps. Techniques exist for phase locking throughout a continuous sweep, allowing for high frequency accuracy. Step sweep techniques,


in which the source is phase-locked at each discrete frequency throughout the sweep, are more compatible with some measurement systems.

Signal Generators with Modulation

Microwave signal generators designed to test increasingly sophisticated receivers are being called upon to provide a growing variety of modulations at accurately calibrated signal levels over a wide dynamic range, without generating undesired spurious signals and harmonics. Application-oriented signal generators are available that provide combinations of modulation formats. These range from simple amplitude and frequency modulation to a wide variety of digital modulation formats in which discrete symbols are transmitted as combinations of phase and amplitude of the carrier.

For applications where simulations of complex scenarios involving multiple targets or emitters need to be carried out to test radar or Electronic Warfare (EW) receivers, or to perform certain tests on satellite and communications receivers, higher-performance signal generators featuring very fast frequency switching and/or sophisticated modulation capabilities are employed. These sources may also feature a variety of software interfaces that can provide suitable personalities for various receiver test applications, allowing entry of parameters in familiar form and enabling the creation of complex scenarios involving lengthy sequences of a number of signals.

Types of Modulation

Some of the more commonly required modulation formats found in signal generators are described below, along with some common applications.

Pulse Modulation. Pulse modulation is used to simulate target returns to a radar receiver, to simulate active radar transmitters for testing EW surveillance or threat-warning receivers, or to simulate pulse code modulation for certain types of communications or telemetry receivers. Microwave signal generators can have inputs for externally applied pulses from a system under test or from a pulse generator. Some microwave signal generators have built-in pulse sources, which may be free-running at selectable rates or can be triggered externally with selectable pulse widths and delays—the latter being used to simulate a variety of distances to a radar target.

Amplitude Modulation (AM). In addition to the simulation of microwave signals having AM and for AM-to-PM (amplitude-to-phase modulation) conversion measurements, amplitude modulation of a microwave signal generator may be needed to simulate signals received from remote transmitters in the presence of fading phenomena in the propagation path, or from rotating radar antennas. AM should be externally applicable or internally available over a broad range of modulation frequencies without accompanying undesired (incidental) variations in phase.

Frequency Modulation (FM). For applications where signals with FM need to be provided, signal generators with an external input and/or an internal source are available. The modulation index β is defined as β = Δf/fm, where Δf is the peak frequency deviation and fm is the modulation frequency.

I/Q (Vector) Modulation. Digital modulation techniques have essentially supplanted analog modulation methods for communications and broadcasting applications. Modulation is said to be digital if the signal is allowed to assume only one of a set of discrete states (or symbols) during a particular interval when it is to be read. Data are transmitted sequentially at a rate of n bits per symbol, requiring 2^n discrete states per symbol. By representing the unmodulated microwave carrier as a vector with unity amplitude and zero phase, as shown in Fig. 25.3.16, we can display various modulation formats on a set of orthogonal axes commonly labeled I (in-phase) and Q (quadrature phase).
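As an illustration of how n bits map onto 2^n carrier states, the sketch below implements a Gray-coded QPSK mapper (2 bits per symbol, four equal-amplitude states); the particular bit-to-phase assignment is an illustrative assumption, since it varies among standards.

    import math

    # Minimal QPSK (2 bits/symbol) I/Q mapping sketch; the Gray-coded
    # assignment of bit pairs to phases is an illustrative choice.
    QPSK_PHASES = {(0, 0): 45.0, (0, 1): 135.0, (1, 1): 225.0, (1, 0): 315.0}

    def qpsk_symbols(bits):
        """Map an even-length bit sequence to unit-amplitude (I, Q) points."""
        iq = []
        for k in range(0, len(bits), 2):
            phase = math.radians(QPSK_PHASES[(bits[k], bits[k + 1])])
            iq.append((math.cos(phase), math.sin(phase)))
        return iq

    for i, q in qpsk_symbols([0, 0, 0, 1, 1, 1, 1, 0]):
        print(f"I = {i:+.3f}, Q = {q:+.3f}")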


FIGURE 25.3.16 (a) Unmodulated carrier displayed as a vector with zero phase. (b) Some examples of digital modulation formats. In biphase modulation, there are two states, characterized by a carrier at 0° or 180° of relative phase. QPSK (quadrature phase-shift keying) has the four states shown, and 8-PSK (eight-phase shift keying) has eight states of equal amplitude. The last example illustrates 16QAM (quadrature amplitude modulation), where there are three different amplitudes.


CHAPTER 25.4

LOGIC AND PROTOCOL ANALYZERS
Steven B. Warntjes, Steve Witt

LOGIC ANALYZERS
Steven B. Warntjes

Introduction

The advent of digital circuits dramatically changed the concerns of engineers and technicians working with electronic circuits. Ignoring for a moment digital signal quality or signal integrity, the issues switched from the world of bias points and frequency response to the world of logic ones, zeros, and logic states (see Fig. 25.4.1). This world has been called the data domain. Using off-the-shelf components virtually guarantees correct values of voltage and current if clocks are kept to moderate speeds—less than 50 MHz—and fan-in/fan-out rules are observed. The objective for circuit verification and testing focuses on questions of proper function and timing. Although parametric considerations are simplified, the functional complexity and sheer number of circuit nodes are increased tremendously. Measurements to address these questions and to manage the increased complexity are the forte of the logic analyzer (Fig. 25.4.2). Logic analyzers collect and display information in the format and language of digital circuits.

Microprocessors and microcontrollers are the most common logic-state machines. Software written either in high-level languages, such as C, or in the unique form of a processor’s instruction set—assembly language—provides the direction for these state machines, which populate every level of electronic products. Most logic analyzers can be configured to format their output as a sequence of processor assembly-language instructions or as high-level language source code. This makes them very useful for debugging software. For real-time or time-critical embedded controllers, a logic analyzer is a superb tool to trace program flow and to measure event timing. Because logic analyzers do not affect the behavior of processors, they are excellent tools for system performance analysis and verification of real-time interactions. Data-stream analysis is also an excellent application for logic analyzers. A stream of data from a digital signal processor or digital communications channel can be easily captured, analyzed, or uploaded to a computer.

Basic Operation

Logic analysis consists of acquiring data in two modes of operation: asynchronous timing mode and synchronous state mode. It also encompasses emulation solutions used to control the processor in the embedded system under development, and the ability to time-correlate high-level language source code with the captured logic analyzer information.


FIGURE 25.4.1 (a) Logic timing diagram. Logic values versus time are shown for four signals. (b) Logic state diagram. Inputs I and S control transitions from state to state. O and E are outputs set to new values on entry to each state.

Asynchronous Mode. On screen, the asynchronous mode looks very much like an oscilloscope display. Waveforms are shown, but in contrast to an oscilloscope’s two or four channels, there are a large number of channels—eight to several hundred. The signals being probed are recorded as either a 1 or a 0. Voltage variation—other than being above or below the specified logic threshold—is ignored, just as the physical logic elements would do. In Fig. 25.4.3, an analog waveform is compared with its digital equivalent. A logical view of signal timing is captured. As with an oscilloscope, the logic analyzer in the timing mode provides the time base that determines when data values are clocked into instrument storage. This time base is referred to as the


FIGURE 25.4.2 Logic analyzers are supplied as PC-hosted or benchtop versions. Shown here at right is the Agilent Technologies 1680 benchtop logic analyzer. Note the pods that connect to the device under test in the foreground.

internal clock. A sample logic analyzer display, showing waveforms captured in timing mode, appears in Fig. 25.4.4. Synchronous Mode. The synchronous state mode samples the signal values into memory on a clock edge supplied by the system under test. This signal is referred to as the external clock. Just as a flip-flop takes on data values only when clocked, the logic analyzer samples new data values or states only when directed by the clock signal. Groupings of these signals can represent state variables. The logic analyzer displays the

FIGURE 25.4.3 Analog versus digital representations of a signal.


FIGURE 25.4.4 Timing mode display. Display and acquisition controls are at the top. Waveforms are displayed on the bottom two-thirds of the display. Note the multibit values shown for “A8-15.”

progression of states represented by these variables. A sample logic analyzer display showing a trace listing of a microprocessor’s bus cycles, in state mode, is shown in Fig. 25.4.5.
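The two acquisition modes differ only in where the sample clock comes from, which reduces to a few lines of code. In this illustrative sketch (not any vendor's implementation), timing mode compares internally clocked voltage samples against a logic threshold, while state mode captures the data lines only on rising edges of the externally supplied clock:

    # Asynchronous timing mode: each internally clocked sample becomes a
    # 1 or 0 depending on the logic threshold.
    def timing_mode(voltages, threshold):
        return [1 if v > threshold else 0 for v in voltages]

    # Synchronous state mode: sample the data lines only on rising edges
    # of the externally supplied clock.
    def state_mode(clock, data):
        return [data[k] for k in range(1, len(clock))
                if clock[k - 1] == 0 and clock[k] == 1]

    # Timing mode: a 3.3 V CMOS signal against a 1.65 V threshold.
    print(timing_mode([0.1, 0.4, 2.9, 3.2, 1.0, 0.2], threshold=1.65))

    # State mode: 4-bit bus values captured on each rising clock edge.
    clk = [0, 1, 0, 1, 0, 1]
    bus = [0x0, 0xA, 0xA, 0x5, 0x5, 0xF]
    print([hex(s) for s in state_mode(clk, bus)])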

Block Diagram

An understanding of how logic analyzers work can be gleaned from the block diagram in Fig. 25.4.6. Logic analyzers have six key functions: the probes, the high-speed memory, the trigger block, the clock generator, the storage qualifier, and the user interface.

FIGURE 25.4.5 State mode display. Listing shows inverse assembly of microprocessor bus cycles.


FIGURE 25.4.6 Logic analyzer block diagram.

Probes. The first function block is the probes. The function of the probes is to establish physical connections with the target circuit under test. To maintain proper operation of the target circuit, it is vital that the probes do not unduly load down the logic signal of interest or disturb its timing. It is common for these probes to operate as voltage dividers. By dividing down the input signal, the voltage comparators in the probe function are presented with the lowest possible voltage slew rate, so higher-speed signals can be captured. The voltage comparators transform the input signals into logic values. Different logic families, such as TTL, ECL, and CMOS, have different voltage thresholds, so the comparators must have adjustable thresholds.

High-Speed Memory. The second function is high-speed memory, which stores the sampled logic values. The memory address for a given sample is supplied internally. The typical memory depth is hundreds of thousands of samples. Some analyzers can store several megasamples. Usually the analyzer user is interested in observing the logic signals around some event. This event is called the measurement trigger. It is described in the next functional block. Samples have a timing or sequence relationship with the trigger event, but are arbitrarily placed in the sample memory, depending on the instantaneous value of the internally supplied address. The memory appears to the user as a continuously looping storage system.

Trigger Block. The third functional block is the trigger block. Trigger events are a user-specified pattern of logical ones and zeros on selected input signals. Shown in Fig. 25.4.7 is how a sample trigger pattern corresponds with timing and state data streams. Some form of logic comparator is used to recognize the pattern of interest. Once the trigger event occurs, the storage memory continues to store a selected number of post-trigger samples. Once the post-trigger store is complete, the measurement is stopped. Because the storage memory operates as a loop, samples before the trigger event are captured, representing time before the event. Sometimes this pre-trigger capture is referred to as negative time capture. When searching for the causes of a malfunctioning logic circuit, the ability to view events leading up to the problem—the trigger event—makes the logic analyzer extremely useful.

Clock Generator. The fourth block is the clock generator. Depending on which of the two operating modes is selected, state or timing, sample clocks are either user supplied or instrument supplied. In the state mode, the analyzer clocks in a sample based on a rising or falling pulse edge of an input signal. The clock generator function increases the usability of the instrument by forming a clock from several input signals. It forms the clocking signal by ORing or ANDing input signals together. The user could create a composite clock using


FIGURE 25.4.7 Example of trigger pattern showing match found with timing mode data and then state mode data. Trigger pattern is “1 0 1” for input signals S1, S2, and S3.

logic elements in the circuit under test, but it is usually more convenient to let the analyzer’s clock generator function do it.

In timing mode, two different approaches are used to generate the sample clock. Some instruments offer both approaches, so understanding the two methods will help the user obtain more from the instrument. The first approach, called continuous storage mode, simply generates a sample clock at the selected rate. Regardless of the activity occurring on the input signals, the logic values present at each internal clock are entered in memory (see Fig. 25.4.8). The second approach is called transitional timing mode. The input signals are again sampled at a selected rate, but the clock generator function clocks the input signal values into memory only if one or more signals change their value. Measurements use memory more efficiently because storage locations are consumed only when the inputs change. For each sample, a time stamp is recorded; additional memory is required to store the time stamp. The advantage of this approach over continuous storage is that long time records of infrequent activity or bursts of finely timed events can be recorded, as shown in Fig. 25.4.9.

Storage Qualifier. The fifth function is the storage qualifier. It also has a role in determining which data samples are clocked into memory. As samples are clocked, either externally or internally, the storage qualifier function looks at the sampled data and tests them against a criterion. Like the trigger event, the qualifying


FIGURE 25.4.8 Continuous storage mode. A sample value is captured at each sample clock and stored in memory.

criterion is usually a one-zero pattern of the incoming signals. If the criterion is met, the clocked sample is stored in memory. If the circuit under test is a microprocessor bus, this function can be used to separate bus cycles, for example capturing only cycles steered to a specific input/output (I/O) port, as distinct from instruction cycles or cycles steered to other ports. A sketch combining the trigger, storage qualifier, and looping memory follows below.

User Interface. The sixth function, the user interface, allows the user to set up and observe the outcome of measurements. Benchtop analyzers typically use a dedicated keyboard and either a cathode-ray tube (CRT) or a liquid-crystal display (LCD). Many products use graphical user interfaces similar to those available on personal computers. Pull-down menus, dialog boxes, touch screens, and mouse pointing devices are available. Since logic analyzers are used sporadically in the debug process, careful attention to a user interface that is easy to learn and use is advised when purchasing. Not all users operate the instrument from the built-in keyboard and screen. Some operate from a personal computer or workstation. In this case, the user interface is the remote interface: IEEE-488 or local area network (LAN). Likewise, the remote interface could be the user’s Web browser of choice, if the logic analyzer is Web enabled.
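Putting the trigger block, storage qualifier, and looping memory together, the following sketch (an illustrative model, not a vendor design) stores qualified samples in a circular buffer, watches for a trigger pattern, and keeps a selected number of post-trigger samples so that events both before and after the trigger are visible:

    from collections import deque

    # Illustrative capture model: looping memory + trigger comparator +
    # storage qualifier. "None" in the pattern means "don't care".
    def capture(samples, depth, trigger, post_count, qualify=lambda s: True):
        memory = deque(maxlen=depth)         # continuously looping storage
        triggered, post = False, 0
        for s in samples:                    # s: tuple of 0/1 input values
            if qualify(s):                   # storage qualifier: store or skip
                memory.append(s)
                if triggered:
                    post += 1
                    if post >= post_count:   # post-trigger store complete
                        return list(memory)
            if not triggered and all(p is None or p == v
                                     for p, v in zip(trigger, s)):
                triggered = True             # trigger event recognized
        return list(memory) if triggered else None

    # Trigger on S1 S2 S3 = 1 0 1 (as in Fig. 25.4.7), keep 2 post samples.
    stream = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 1)]
    print(capture(stream, depth=4, trigger=(1, 0, 1), post_count=2))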

FIGURE 25.4.9 Transitional storage mode. The input signal is captured at each sample clock, but is stored into memory only when the data changes. A time value is stored at each change so that the waveform can be reconstructed properly.
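A compact sketch of this transitional storage idea (illustrative): sample at the full rate, but consume a memory location, together with its time stamp, only when the inputs change.

    # Store (time_stamp, value) pairs only when the sampled value differs
    # from the previous sample; the waveform is still exactly recoverable.
    def transitional_store(samples, sample_period):
        stored, previous = [], object()      # sentinel: first sample stored
        for k, value in enumerate(samples):
            if value != previous:
                stored.append((k * sample_period, value))
                previous = value
        return stored

    # A short burst followed by a long idle stretch uses 4 locations, not 12.
    trace = [0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
    print(transitional_store(trace, sample_period=10e-9))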


FIGURE 25.4.10 Example of an emulation solution with a PC connected to a customer target to control the processor under development.

FIGURE 25.4.11 Correlated high-level language display. Lower right is the high-level source code; lower left, the assembly code listing. Upper left is a time-correlated oscilloscope display; upper middle, a timing waveform display; upper right, a performance analysis snapshot.


Emulation Solutions

An emulation solution is a connection to the processor in the embedded system that is used to control program execution. The connection is usually on the order of five to ten processor signals. These signals control program execution, including the state of the processor (running, reset, or single-stepping), and also provide the ability to quickly download the processor’s executable code and to examine and modify processor memory and registers. This processor control is usually accomplished through dedicated on-processor debug resources. An emulation solution enables the user to perform software and hardware debugging as well as to time-correlate software and hardware activity (see Fig. 25.4.10).

High-Level Language Source Correlation

High-level language source correlation provides a real-time trace or acquisition of processor address, data, and status information linked to a high-level source language view of the software. This information is then time-correlated to activity captured by the rest of the logic analyzer’s acquisition modules, such as oscilloscope channels. Symbols from the user’s software program can also be used to specify trigger conditions and are listed in the analyzer display (Fig. 25.4.11). This feature uses the information provided in the object file from the customer’s compiler to build a database of source files, line numbers, and symbolic information. HLL source correlation is a nonintrusive tool that typically does not require any major changes in the software compilation process.

PROTOCOL ANALYZERS
Steve Witt

Introduction

Computer communication networks are made up of many different computer systems, applications, and network topologies. The capital investment in cabling and transmission infrastructure is massive. The number of users demanding access to computer networks is ever increasing, and these users demand more bandwidth, increased performance, and new applications. There is a constant stream of new equipment and services being introduced in the marketplace. In this complex environment, computer networking is made possible only by equipment and services vendors adhering to standards covering protocols, physical connectors, electrical interfaces, topologies, and data formats. Protocol analysis is used to ensure that products implemented according to these standards behave as specified. The term protocol analyzer describes a class of instruments dedicated to performing protocol analysis. These instruments are special-purpose computer systems that act as a node on the network but, unlike a typical node, monitor and capture all of the network traffic for analysis and testing.

Protocol Definition. Generally speaking, a protocol is a code or a set of rules specifying the correct procedure for a diplomatic exchange. In terms of computer networks, a protocol is a specific set of rules, procedures, and conventions defining the format and timing of data transmission between devices connected to a computer network. Protocols are defined so that devices communicating on a computer network can exchange information in a useful and efficient manner. Protocols handle synchronization, addressing, error correction, header and control information, data transfer, routing, fragmentation and reassembly, encapsulation, and flow control. Protocols provide a means for exchanging information so that computer systems can provide services and applications to end users. The wide range of protocols in use is a result of the wide range of transmission media in use, the wide range of applications and services available to end users, and the numerous independent organizations and vendors creating protocol standards.

Communication via a computer network is possible only when there is an agreed-upon format for the exchange of information and when there is a common understanding of the content of the information being


exchanged. Therefore, protocols must be defined by both semantic and syntactic rules. Semantics refers to the meaning of the information in the frame of data, including control information for coordination and error handling. An example of the semantic information in a frame is a request to establish a connection, initiated by one computer and sent to another computer. Syntax refers to the structure, arrangement, and order of the protocol, including data format and signal levels. An example of the syntax of a protocol is the relative position of a protocol field in the frame, such as the network address. Protocol analysis is concerned with both the syntax and the semantics of the protocols.

Protocol Standards. Communication between devices connected to computer networks is controlled by transmission and protocol standards and recommendations. These standards are necessary for different vendors to offer equipment and services that interoperate with one another. While standards can be defined and implemented in the private sector by computer and network component vendors such as Cisco Systems, Hewlett-Packard, and IBM, most standards and recommendations are created by organizations including but not limited to ANSI (American National Standards Institute), CCITT (Consultative Committee on International Telegraphy and Telephony), ETSI (European Telecommunications Standards Institute), IEEE (Institute of Electrical and Electronics Engineers), ISO (International Standards Organization), and the ITU (International Telecommunications Union). The ATM Forum and the IETF (Internet Engineering Task Force) are technical working bodies that develop standards for networking products.

The OSI Reference Model. The ISO, located in Geneva, Switzerland, is responsible for many of the international standards in computer networking. The ISO defined a model for computer communications networking called the Open Systems Interconnection Reference Model. This model, commonly called the OSI model, defines an open framework for two computer systems to communicate with one another via a communications network. The OSI model (see Fig. 25.4.12) defines a structured, hierarchical network architecture. It consists of the communications subnet (protocol layers 1 to 3) and the services that interface to the applications executing in the host computer systems (protocol layers 4 to 7). The combined set of protocol layers 1 to 7 is often referred to as a protocol stack. Layer 7, the application layer, is the interface to the user application executing in the host computer system. Each layer in the protocol stack has a software interface to the layers below and above it. The protocol stack executes in a host computer system.

The only actual physical connection between devices on the network is at the physical layer, where the interface hardware connects to the physical media. However, there is a logical connection between the two corresponding layers in communicating protocol stacks. For example, the two network layers (layer 3 in the OSI reference model) in two protocol stacks operate as if they were communicating directly with one another, when in actuality they are communicating by exchanging information through their respective data link layers (layer 2) and physical layers (layer 1). Current network architectures are hierarchical, structured, and based in some manner on the OSI reference model.
The functionality described in the OSI reference model is embodied in current network architectures, albeit at different layers or combined into multiple layers.

Network Troubleshooting Tools. There are two broad categories of products used to implement and manage computer networks—those that test the transmission network and those that test the protocol information transferred over the transmission network. Testing the protocols is commonly referred to as protocol analysis, and it can be accomplished with several different types of products, including:

Network Management Systems—comprehensive, integrated networkwide systems for managing and administering systems and networks. Protocol analysis is one of many applications performed by network management systems. Network troubleshooting is performed by acquiring network data from devices on the network and from instrument probes distributed through the network.

Distributed Monitoring Systems—performance monitoring and troubleshooting applications that are implemented with instrument probes or protocol analyzers distributed throughout the network. The probes and analyzers are controlled with a management application running on a workstation or PC.


FIGURE 25.4.12 OSI model.

Protocol Analyzers—specialized instrumentation dedicated to protocol analysis. Protocol analyzers are used to troubleshoot network problems and to monitor the performance of networks.

Handheld Test Tools—special-purpose tools that are very small, lightweight, and usually battery operated. They perform a variety of measurements such as continuity tests, transmission tests, and simple protocol analysis measurements, including basic statistics and connectivity tests.

The Need for Protocol Analysis. In order for two applications running on two different computer systems to communicate with one another (e.g., a database application executing on a server and a client application performing database queries), meaningful information must be continually, efficiently, and correctly exchanged. This requires that a physical connection exist: twisted-pair copper wire, coaxial cable, optical fiber, or a wireless link (e.g., radio transmission). The physical characteristics and specifications of the transmission media must be standardized so that different computer systems can be electrically connected to one another. The bit streams exchanged over the physical media must be encoded so that the analog stream of information can be converted to digital signals. In order for two people to communicate effectively, they must speak the same language. Similarly, for two computer systems to communicate


they must speak the same “language.” Therefore, the bit stream must conform to a standard that defines the encoding scheme, the bit order (least significant bit first or most significant bit first), and the bit sense (a high value defined either as a 1 or a 0). Any errors in the transmission must be detected and recovered from and, if necessary, the data must be retransmitted. A protocol analyzer is used to examine the bit stream and ensure that it conforms to the protocol standards that define the encoding schemes, bit sequences, and error conditions.

Once a bit stream can be transmitted and received, physical communication is established and the exchange of information can be accomplished. Information is exchanged in logical units of information. A protocol frame, packet, message, or cell (in this chapter, the term frame will be used to mean any or all of these) is the logical unit of information transmitted on the physical infrastructure of a computer network. Depending on the type of network and protocols, these frames of data are either fixed in size, such as the 53-byte cells used by ATM networks, or variable in size, such as the 64- to 1518-byte frames used by Ethernet networks. The most fundamental aspect of protocol analysis is the collection and analysis of these frames.

Networks usually have more than one path connecting different devices. Therefore, the frames in which the data are contained must be addressed properly so that they can traverse single or multiple routes through the network. Fragmentation and reassembly issues must be properly handled—frames are often disassembled and reassembled so that they can be of a proper size and can be encapsulated with the proper header information to ensure that the intermediate and end devices in the network can properly manipulate them. The network must also handle error conditions such as nodes that stop responding on the network, nodes that transmit error frames or signals, and nodes that use excessive bandwidth. A protocol analyzer is used to examine the addresses of the frames, check fragmentation and reassembly, and investigate errors.

Connections are established so that communication is efficient. This prevents the communication channel from being redundantly set up each time a frame is transmitted. This is similar to keeping a voice line open for an entire telephone conversation between two people, rather than making a phone call for each sentence that is exchanged. To ensure this efficiency, connections or conversations are established by devices on the network so that the formal handshaking doesn’t have to be repeated for each information exchange. Protocol analysis includes scrutinizing protocol conversations for efficiency and errors.

The data that are transferred to the host system application in the frames must conform to an agreed-upon format, and if necessary they must be converted to an architecture-independent format so that both computer systems can read the information. Each time a user enters a command, downloads a file, starts an application, or queries a database, the preceding sequence of processes is repeated. A computer network continuously performs these operations in order to execute an end user’s application. Protocol analysis involves critically examining the formatted data that are exchanged between host system applications.
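The most fundamental protocol analysis step, decoding a captured frame against the standard's field layout, can be sketched in a few lines. The example below parses only the fixed 14-byte Ethernet II header (destination address, source address, EtherType); it is a minimal illustration, not a full decode.

    import struct

    # Minimal protocol-decode sketch: split an Ethernet II frame's first
    # 14 bytes into its standard fields, as a decode display would.
    def decode_ethernet(frame: bytes):
        if len(frame) < 14:
            raise ValueError("runt frame: shorter than the 14-byte header")
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        mac = lambda b: ":".join(f"{x:02x}" for x in b)
        return {"destination": mac(dst), "source": mac(src),
                "ethertype": hex(ethertype),   # e.g., 0x800 = IPv4
                "payload_len": len(frame) - 14}

    # Example frame: broadcast destination, arbitrary source, IPv4 EtherType.
    frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"\x00" * 50
    print(decode_ethernet(frame))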

Protocol Analyzers

A protocol analyzer is a dedicated, special-purpose computer system that acts as a node on the network but, unlike a typical node, monitors and captures all of the network traffic for analysis and testing (Fig. 25.4.13). Protocol analyzers provide a “window into the network” that allows users to see the type of traffic on the network. Many problems can be solved quickly by determining what type of traffic is or is not present on the network and whether protocol errors are occurring. Without this window into the network, network managers must rely on indirect measures to determine network behavior, such as observing the nodes attached to the network and the problems the users are experiencing.

Protocol analyzers provide capabilities to compare data frames with the protocol standards (protocol decodes), load and stress networks with traffic generation, and monitor network performance with statistical analysis. These measurement capabilities have expanded to address a broad set of applications such as network troubleshooting, network performance monitoring, network planning, network security, protocol conformance testing, and network equipment development. These applications go far beyond simply examining network traffic with protocol decodes.

The term protocol analyzer most commonly refers to the portable instruments that are dispatched to trouble sites on the network, i.e., the critical links or segments that are experiencing problems. The emphasis in these products is


FIGURE 25.4.13 Protocol analyzers come in a number of physical configurations. Shown here are 3 Agilent Technologies E2920 PCI/PCI-X Exerciser and Analyzer cards plugged into a server undergoing production testing.

portability: products that are lightweight, rugged, and include the maximum amount of functionality. Network troubleshooters require a product that connects to any point in an internetwork, regardless of the physical implementation of the network. Therefore, most protocol analyzers are able to accommodate multiple network interface modules. To avoid multiple trips back to the office for additional equipment, network troubleshooters require a “one-handle solution,” a product that integrates as much test capability as possible into one portable product.

Portable protocol analyzers are most commonly used for network troubleshooting in installation and maintenance applications. Such products focus on installing networks and troubleshooting network problems. Installing networks requires the network engineer to stress the network or network devices using scripts that emulate setting up logical connections, placing calls, and creating high-traffic scenarios. Troubleshooting network problems requires that the network engineer have extensive protocol decodes available to examine whatever frames are present on the network. Protocol statistics and expert analysis are used to identify network errors.

Protocol Analysis

A network fault is any degradation in the expected service of the network. Examples of service degradation include an excessively high level of bit errors on the transmission medium, a single user monopolizing the network bandwidth, a misconfigured device, or a software defect in a device on the network. Regardless of the cause, the network manager’s fundamental responsibility is to ensure that such problems are fixed and that the expected level of service is restored. To accomplish this, the network manager must troubleshoot the network and isolate faults when the inevitable problems occur. Many difficult-to-detect, intermittent problems can be recreated only if the network is stressed by sending frames. Many network problems are caused by the constantly changing configurations of the devices attached to the network, so the network manager must manage the configuration of the network devices. In order to ensure


the bandwidth and performance that users demand, the network manager must monitor the performance of the network and plan accordingly for future growth. Network managers are also concerned with the security of their networks and use protocol analysis tools to monitor for illegal or unauthorized frames on network segments.

Protocol Analysis Applications

Fault isolation and troubleshooting. Most network problems (e.g., network downtime) are solved by following a rigorous troubleshooting methodology. This methodology, not unique to network troubleshooting, consists of observing that a problem has occurred, gathering data about the problem, formulating a hypothesis, and then proving or disproving the hypothesis. This process is repeated until the problems are resolved. Protocol analysis is used in the network troubleshooting process first to observe that a problem has occurred, and next to gather data (using protocol analysis measurements such as protocol decodes and protocol statistics, as described in the section entitled “Protocol Analysis Measurements”). The user then formulates a hypothesis and uses the protocol analysis measurements to confirm the cause of the problem, ultimately leading to a solution. The protocol analyzer can then be used to confirm that the problem has indeed been repaired.

Performance monitoring. Determining the current utilization of the network, the protocols in use, the errors occurring, the applications executing, and the users on the network is critical to understanding whether the network is functioning properly or whether problems such as insufficient capacity exist. Performance monitoring can be used over short time periods to troubleshoot problems, or over long time periods to determine traffic profiles and optimize the configuration and topology of the network.

Network baselining. Every network is unique—different applications, distinct traffic profiles, products from numerous vendors, and varying topologies. Therefore, network managers must determine what is normal operation for their particular network. A network baseline is performed to determine the profile of a particular network over time. A profile is made up of statistical data including a network map, the number of users, protocols in use, error information, and traffic levels. This information is recorded on a regular basis (typically daily or weekly) and compared with previously recorded results. The baselining information is used to generate reports describing the network topology, performance, and operation. It is used to evaluate network operation, isolate traffic-related problems, assess the impact of hardware and software changes, and plan for future growth.

Security. Networks are interconnected on a global scale; therefore, it is possible for networks to be illegally accessed. Illegal access can be knowingly performed by someone with criminal intent, or it can be the result of an erroneous configuration of a device on the network. Protocol analysis tools, with their powerful filtering, triggering, and decoding capabilities, can detect security violations.

Stress testing. Many errors on networks are intermittent and can be recreated only by generating traffic to raise network traffic levels or error levels, or by creating specific frame sequences and capturing all of the data. By stress testing a network and observing the results with protocol statistics and decodes, many difficult problems can be detected.

Network mapping.
Networks continually grow and change, so one of the big challenges facing network engineers and managers is determining the current topology and configuration of the network. Protocol analysis tools are used to provide automatic node lists of all users connected to the network as well as graphical maps of the nodes and the internetworks. This information facilitates the troubleshooting process by making it possible to locate users quickly. It is also used as a reference on the number and location of network users when planning for network growth.

Connectivity testing. Many network problems are the result of not being able to establish a connection between two devices on the network. A protocol analyzer can become a node on the network and send frames, such as a PING, to a device on the network and determine whether a response was sent and what the response time was. A more sophisticated test can determine, in the case of multiple paths through the network, which paths were taken. In many WANs, connectivity can be verified by executing a call placement sequence that establishes a call connection to enable a data transfer.

Conformance testing. Conformance testing is used to test data communications devices for conformance to specific standards. These conformance tests consist of a set of test suites (or scenarios) that exercise data


communications equipment fully and identify procedural violations that will cause problems. These conformance tests are used by developers of data communications equipment and by carriers to prevent procedural errors before connection to the network is allowed. Conformance tests are based on the applicable protocol standard.

Protocol Analysis Measurements. Protocol analyzer functionality varies depending on the network technology (e.g., LAN vs. WAN), the targeted user (e.g., R&D vs. installation), and the specific application (e.g., fault isolation vs. performance monitoring). Protocol analysis includes the entire set of measurements that allow a user to analyze the information on a computer communications network. These measurements include:

• Protocol decodes
• Protocol statistics
• Expert analysis
• Traffic generation
• Bit error rate tests
• Stimulus/response testing
• Simulation

Because protocol analysis requires associating the results of different measurements, the above measurements are typically made in combination. Thus, protocol analyzers include different sets or combinations of these measurements. A good user interface combines pertinent information for the user and integrates the results of the different measurements.

Protocol decodes. Protocol decodes, also referred to as packet traces, interpret the bit streams being transmitted on the physical media. A protocol decode actually decodes the transmitted bit stream: the bit stream is identified and broken into fields of information. The decoded fields are compared with the expected values in the protocol standards, and the information is displayed as values, symbols, and text. If unexpected values are encountered, an error is flagged on the decode display. Protocol decodes follow the individual conversations and point out the significance of the frames on the network by matching replies with requests, monitoring packet-sequencing activity, and flagging errors. Protocol decodes let the user analyze data frames in detail by presenting the frames in a variety of formats. Figure 25.4.14a shows a summary of protocol packets that have been captured on the network. Figure 25.4.14b shows a detailed view of one particular packet, in this case the packet with reference number 4264. It displays all of the detailed fields of an HTTP packet.

Protocol statistics. An average high-speed computer network such as a 10 Mbit/s Ethernet handles thousands of frames per second. Protocol statistics reduce the volumes of captured data into meaningful statistical information, providing valuable insight for determining the performance of a network, pinpointing bottlenecks, isolating nodes or stations with errors, and identifying stations on the network. Protocol analyzers keep track of hundreds of different statistics. The statistical information is typically displayed as histograms, tables, line graphs, and matrices.

Expert analysis. Troubleshooting computer networks is complicated by the wide range of network architectures, protocols, and applications that are simultaneously in use on a typical network. Expert analysis reduces thousands of frames to a handful of significant events by examining the individual frames and the protocol conversations for indications of network problems. It watches continuously for router and bridge misconfigurations, slow file transfers, inefficient window sizes, connection resets, and many other problems. Thus, data are transformed into meaningful diagnostic information.

Traffic generation. Many network problems are very difficult to diagnose because they occur intermittently, often showing up only under peak load. Traffic generators provide the capability to simulate network problems by creating a network load that stresses the network or a particular device by sending frame sequences onto the network.
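Statistics of this kind reduce to counting over the captured frames. A minimal sketch (the frame records are hypothetical stand-ins for real capture-buffer entries) tallies frames per protocol and flags the busiest talkers:

    from collections import Counter

    # Each record: (source_node, protocol_name, frame_length_in_bytes).
    def protocol_statistics(frames):
        by_protocol, bytes_by_source = Counter(), Counter()
        for source, protocol, length in frames:
            by_protocol[protocol] += 1
            bytes_by_source[source] += length
        return by_protocol, bytes_by_source

    capture = [("node-a", "HTTP", 1460), ("node-b", "ARP", 64),
               ("node-a", "HTTP", 1460), ("node-c", "DNS", 120),
               ("node-a", "FTP", 900)]
    protocols, talkers = protocol_statistics(capture)
    print("frames per protocol:", dict(protocols))
    print("top talker:", talkers.most_common(1))  # pinpoints bandwidth hogs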


FIGURE 25.4.14 (a) Summary view of protocol packets as captured on a network. (b) Detailed view of a particular packet, with reference number 4264.


Bit error rate tests. Bit error rate (BER) tests are transmission tests used to determine the error rate of the transmission media or the end-to-end network. While advanced BER measurements reside in the domain of sophisticated transmission test sets, protocol analysis, particularly in a WAN or ATM environment, often requires a verification of the media. BER tests are performed by transmitting a known bit pattern onto the network, looping it back at a point on the network, and receiving the sequence. The bit error rate is calculated as a percentage of the bits in error compared to the total number of bits received.

Stimulus/response testing. While many networking problems can be quickly solved with decodes and statistics, many of the more difficult problems cannot be solved in a nonintrusive manner. In such cases, it is necessary to actively communicate with the devices on the network in order to recreate the problem or obtain the pieces of information needed to isolate it further. The user can then actively query or stress the network and observe the results with decodes and statistics.

Simulation. In the context of protocol analysis, simulation can take two forms: protocol simulation and protocol emulation. Protocol simulation allows the user to send strings of data containing selected protocol headers along with the encapsulated data. In this way, the operation of a network device can be simulated for the purpose of testing a suspected problem or for establishing a link to confirm operation. Protocol emulators are software routines that control the operation of the protocol analyzer automatically.

Protocol Analysis Measurement Functions. Protocol analysis consists of using measurements such as those described in the previous section to isolate network problems or monitor network performance. The protocol analysis measurement options described in this section are a set of capabilities orthogonal to those measurements. For example, a protocol analyzer can gather statistics on the network traffic, but a more effective troubleshooting approach is to set a capture filter and run the statistics so that only the data between a file server and a router are analyzed.

Data capture. The most fundamental attribute of protocol analysis is the ability to capture the traffic from a live network and to store these data in a capture buffer. The captured data can then be analyzed and reanalyzed at the user's discretion. Once data are captured in the capture buffer, they can be repeatedly examined for problems or events of significance. Search criteria, such as filter criteria, are based on address, error, protocol, and bit patterns.

Data logging. Many network troubleshooting sessions can be spread over hours and days, yet the capture buffer on a typical protocol analyzer is filled in seconds on a high-speed network. Data logging capabilities are therefore crucial for setting up long troubleshooting sessions. The user can specify a file name and a time interval for recording critical information. This information is then regularly stored to hard disk and can be examined by the user at a later time. Information that is typically stored to disk includes frames matching a user-specified filter, statistics results, or the results of a programmed test.
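The BER arithmetic described above under bit error rate tests is simple enough to sketch before turning to filtering. In this hypothetical Python fragment the known transmitted pattern is compared bit for bit with the looped-back data; the pattern and the injected error are invented for illustration.

    def bit_error_rate(sent: bytes, received: bytes) -> float:
        # Count differing bits, then divide by the total number of bits received.
        errors = sum(bin(a ^ b).count("1") for a, b in zip(sent, received))
        return errors / (8 * len(received))

    sent = bytes([0x55] * 1000)     # known test pattern transmitted onto the network
    received = bytearray(sent)      # looped back at a point on the network
    received[3] ^= 0x01             # pretend the media corrupted one bit
    print(f"BER = {bit_error_rate(sent, bytes(received)):.2e}")   # 1.25e-04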
Filtering. The key to successfully troubleshooting network problems is eliminating the unnecessary information and focusing on the critical information that is essential to solving the problem. Computer networks process thousands of frames per second. A protocol analyzer can quickly fill a capture buffer with frames, and a user can sift through protocol decodes searching for the errors, but this is a time-consuming and tedious task. The most powerful function of protocol analysis is the ability to filter the data on the network in order to isolate problems. The function of a filter is very similar to that of a trigger (sometimes called a trap). Specific filter patterns are set by the user of a protocol analyzer, and these filters are then compared with the data from the network. Filters range from simple bit-pattern matching to sophisticated combinations of address and protocol characteristics.

Fundamentally, there are two types of filters: capture filters and display filters. Capture filters are used to either include or exclude data from being stored in a protocol analyzer's capture buffer. Capture filters make it possible to collect only the frames of interest by eliminating extraneous frames, which effectively increases the usage of the capture buffer. Rather than a capture buffer that contains only six error frames out of 40,000 captured frames, the data are filtered so that only error frames are in the buffer. More frames of interest can be captured, and they can be located more quickly. A disadvantage of capture filters is that the user must know what to filter, i.e., must have some idea of what problem to investigate. A second disadvantage is that the frames that were filtered out may contain the sequence of events leading up to the error frame. In many situations the source of a network problem is not known; therefore, it is necessary to capture all of the frames on the network and use display filters to repeatedly filter the frames. Because all of the frames are stored in the capture buffer, the frames can be played back through the display filters.


Display filters act on the frames once they have been captured. Frames can be selected for display by measurements such as protocol decodes. Filter conditions can be combined to form more powerful filter criteria. Typically, as the troubleshooting process progresses, the user discovers more and more information about the network problem. As each new fact is discovered, it can be added to the filter criteria until finally the problem is identified. For example, to isolate a faulty Ethernet network interface card it is necessary to filter simultaneously on the MAC address of the suspicious node and on a bad frame check sequence (FCS); a sketch of such a combined filter appears after the list of trigger actions below.

Triggers and actions. In order to troubleshoot network problems, it is often necessary to identify specific frames or fields in frames. Triggers are used to detect events of significance to the user and then initiate some action. Triggers and filters operate the same way in terms of recognizing conditions on the network; the parameters for setting trigger criteria are the same as the filter types. The trigger is a key capability of protocol analyzers, since it allows the automatic search of a data stream for an event of significance, resulting in some action being taken. Possible trigger actions include:

• Visual alarm on the screen
• Audible alarm
• Start capturing data in the capture buffer continuously
• Start capturing data, fill the capture buffer, and stop
• Position the trigger in the capture buffer and stop capturing data
• End the data capture
• Increment a counter
• Start a timer
• Stop a timer
• Make an entry in the event log
• Start a specific measurement
• Send an SNMP trap
• Log data to disk
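As promised above, the following Python sketch combines a capture filter (a MAC address plus a bad FCS) with simple trigger actions (incrementing a counter and logging an event). The frame records and field names are simplified stand-ins, not a real analyzer interface.

    from dataclasses import dataclass, field

    @dataclass
    class AnalyzerSketch:
        buffer: list = field(default_factory=list)
        error_count: int = 0
        event_log: list = field(default_factory=list)

        def capture_filter(self, frame: dict) -> bool:
            # Keep only bad-FCS frames from the suspicious node.
            return frame["src"] == "00:00:5e:00:53:01" and not frame["fcs_ok"]

        def process(self, frame: dict) -> None:
            if self.capture_filter(frame):
                self.buffer.append(frame)     # store only the frames of interest
                self.error_count += 1         # trigger action: increment a counter
                self.event_log.append(
                    f"bad FCS #{self.error_count} from {frame['src']}")

    analyzer = AnalyzerSketch()
    for frame in [{"src": "00:00:5e:00:53:01", "fcs_ok": False},
                  {"src": "00:00:5e:00:53:02", "fcs_ok": True}]:
        analyzer.process(frame)
    print(analyzer.event_log)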

Protocol Analyzer Block Diagram

There are three main components to any protocol analyzer:

• Computing platform
• Analysis and acquisition system
• Line interface

The functions of a protocol analyzer are shown in Fig. 25.4.15.

Computing Platform. The computing platform is a general-purpose processing system, typically a PC or a UNIX workstation. The computing platform executes the user interface for the product and controls the measurements that are executed in the analysis and acquisition systems. It is very common for other applications to be run in conjunction with a protocol analyzer; these include spreadsheet applications for analyzing data and measurement results, network management software, and terminal emulation applications that log into computer systems on the network. Therefore, computing platforms are usually based on an industry-standard system that allows a user to openly interact with the application environment.

Analysis and Acquisition System. Fundamentally, a protocol analyzer acquires data from the network and then analyzes the data. Thus, the analysis and acquisition system is the core of a protocol analyzer.


FIGURE 25.4.15 A protocol analyzer block diagram.

This system is essentially responsible for transferring data from the line interface to the capture buffer, ensuring that all of the error conditions, the protocol state information, and the protocol data are correctly stored and time-stamped. During real-time operation and in postprocess mode, the triggers and actions, the timers and counters, and the protocol followers are executed in the analysis and acquisition system. Additionally, the measurements are typically executed in a distributed fashion between the computing platform and the analysis and acquisition system. In low-cost software-based analyzers, the analysis and acquisition system functions are performed by the computing platform.


In high-performance protocol analyzers, a dedicated processor and special-purpose hardware implement the processing required by the analysis and acquisition functions.

Line Interface. The physical hardware and firmware necessary to actually attach to the network under test are implemented in the line interface. The line interface also includes the transmit circuitry needed to implement simulation functions for intrusive testing. The function of the line interface is to implement the physical layer of the OSI reference model and provide framed data to the analysis and acquisition system.


CHAPTER 25.5

OSCILLOSCOPES

Jay A. Alexander

INTRODUCTION

The word oscilloscope has evolved to describe any of a variety of electronic instruments used to observe, measure, and record transient physical phenomena and present the results in graphic form (Fig. 25.5.1). Perhaps the popularity and usefulness of the oscilloscope spring from its exploitation of the relationship between vision and understanding. In any event, several generations of technical workers have found it to be an important tool in a wide variety of settings.

Basic Functions

The prototypical oscilloscope produces a two-dimensional graph with the voltage applied at the input plotted on the vertical axis and time plotted on the horizontal axis (Fig. 25.5.2). Usually the image appears as an illuminated trace on the screen of a cathode-ray tube (CRT) or liquid-crystal display (LCD) and is used to construct a model or representation of how the instantaneous magnitude of some quantity varies during a particular time interval. The quantity measured is often a changing voltage in an electronic circuit. However, it could be something else, such as electric current, acceleration, or light intensity, that has been changed into a voltage by a suitable transducer. The time interval over which the phenomenon is viewed may vary over many orders of magnitude, allowing measurements of events that proceed too quickly to be observed directly with the human senses. Instruments currently available measure events occurring over intervals as short as picoseconds (10⁻¹² s) and up to tens of seconds. The measured quantities can be uniformly repeating or essentially nonrecurring. The most useful oscilloscopes have multiple input channels so that simultaneous observation of multiple phenomena is possible, enabling the measurement of the time relationships among events.

GENERAL OSCILLOSCOPE CONCEPTS

General-purpose oscilloscopes are classified as analog oscilloscopes or digital oscilloscopes. Newly produced models are almost exclusively of the digital variety, although many lower-bandwidth (<100 MHz) analog units are still being used in various industrial and educational settings. Digital oscilloscopes are often called digital storage oscilloscopes (DSOs), for reasons that will become apparent below.

Analog and Digital Oscilloscope Basics

The classic oscilloscope is the analog form, characterized by the use of a CRT as a direct display device. A beam of electrons (cathode rays) is formed, accelerated, and focused in an electron gun and strikes a phosphor screen, causing visible light to be emitted from the point of impact (Fig. 25.5.3).


FIGURE 25.5.1 Epitomizing recent developments in oscilloscopes is the mixed analog and digital signal oscilloscope, recently introduced by several manufacturers. The version shown here is the Agilent Technologies 54642D. Mixed signal oscilloscopes are discussed later.

The voltage signals to be displayed are amplified and applied directly to vertical deflection plates inside the CRT, resulting in an angular deflection of the electron beam in the vertical direction. This amplifier system is referred to as the vertical amplifier. The linear vertical deflection of the point at which the electron beam strikes the screen is thus proportional to the instantaneous amplitude of the input voltage signal. Another voltage signal, generated inside the oscilloscope and increasing at a uniform rate, is applied directly to the horizontal deflection plates of the CRT, resulting in a simultaneous, uniform, left-to-right horizontal motion of the point at which the electron beam strikes the phosphor screen. The operator of the oscilloscope may specify the rate of this signal using the time-per-division or horizontal scale control. For example, with the control set to 100 µs/div on a typical oscilloscope with 10 horizontal divisions, the entire horizontal extent of the display will represent a time span of 1 ms.

FIGURE 25.5.2 Voltage is plotted on the vertical axis and time horizontally on the classic oscilloscope display.


FIGURE 25.5.3 Analog oscilloscope cathode-ray tube.

The electronic module that generates the signals that sweep the beam horizontally, and that controls the rate and synchronization of those signals, is called the time base. The point on the phosphor screen illuminated by the electron beam thus moves in response to the vertical and horizontal voltages, and the glowing phosphor traces out the desired graph of voltage versus time.

The digital oscilloscope has been made practical and useful by advances in the state of the art of digitizing devices called analog-to-digital converters (ADCs). For the purposes of this discussion, an ADC is a device that, at suitable regular intervals, measures or samples the instantaneous value of the voltage at the oscilloscope input and converts it into a digital value (a number) representing that instantaneous value (Fig. 25.5.4). The oscilloscope function of recording a voltage signal is achieved by storing in a digital memory a series of samples taken by the ADC. At a later time, the series of numbers can be retrieved from memory and the desired graph of volts versus time can be constructed. The graphing or display process, since it is distinct from the recording process, can be performed in several different ways. The display device can be a CRT employing direct beam-deflection methods. More commonly, a raster-scan display, similar to that used in a conventional television receiver or a computer monitor, is used. The samples may also be plotted on paper using a printer with graphics capability.

The digital oscilloscope is usually configured to resemble the traditional analog instrument in the arrangement and labeling of its controls, the features included in the vertical amplifier, and the labeling and presentation of the display. In addition, the circuits that control the sample rate and timing of the data-acquisition cycle are configured to emulate the functions of the time base in the analog instrument. This has allowed users who are familiar with analog oscilloscope operation to quickly become proficient with the digital version.

FIGURE 25.5.4 Sampling in a digital oscilloscope.
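The record-then-display split described above can be made concrete with a short sketch. Here, assuming an idealized ADC and an invented 1-MSa/s rate, samples are stored as plain numbers and the volts-versus-time pairs are rebuilt afterward.

    import math

    SAMPLE_RATE = 1_000_000                  # 1 MSa/s, so the sample time is 1 µs

    def acquire(signal, n_samples):
        # Sample the input at regular intervals, as the ADC and sample clock would.
        dt = 1.0 / SAMPLE_RATE
        return [signal(i * dt) for i in range(n_samples)]

    samples = acquire(lambda t: math.sin(2 * math.pi * 10e3 * t), 100)
    # Later, the graph is reconstructed from memory as (time, voltage) pairs.
    waveform = [(i / SAMPLE_RATE, v) for i, v in enumerate(samples)]
    print(waveform[:3])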


Indeed, while there are fundamental and important differences between the two measurement technologies, many common elements and shared requirements exist.

Oscilloscope Probing

Most oscilloscope measurements are made with probes of some type. Probes connect the oscilloscope to the signals being measured and form an important part of the overall measurement system. They must both load the circuit minimally and transmit an accurate version of the signal to the oscilloscope. Many varieties of probes are available, and they may be classified in several ways, such as passive versus active, single-ended versus differential, and voltage versus current.

Passive probes are the most common. They are relatively inexpensive and are effective for many measurements below 500 MHz. Passive probes typically feature high input resistance, which minimizes loading at low frequencies. To effectively measure signals with frequency content above 1 GHz, active probes are usually employed. Active probes contain amplifiers in the probes themselves, and they present lower capacitive loading to the circuit being measured, which allows higher frequencies to be measured accurately. Because they are more complex, active probes are significantly more expensive than passive probes. Newer active probes are often differential in nature; this reflects the increased use of differential signals in high-speed digital systems.

An increasingly important requirement for oscilloscope probes is small physical size. This is driven by the fine-pitch geometry of modern surface mount technology (SMT) components and is particularly important for active probes, which tend to be larger because of their amplifier circuits.

THE ANALOG OSCILLOSCOPE

A complete block diagram for a basic two-channel analog oscilloscope is shown in Fig. 25.5.5.

Vertical System

The vertical preamps and associated circuitry allow the operator of the oscilloscope to make useful measurements on a variety of input signals. The preamps provide gain so that very small input signals may be measured, and one or more attenuators, usually implemented with switches or relays, are available for reducing the amplitude of large signals.

FIGURE 25.5.5 Analog oscilloscope block diagram.


Together, the preamp and attenuator settings typically manifest as a vertical sensitivity or scale control on the oscilloscope's control panel. The scale control is specified in terms of volts per division (volts/div), where a division corresponds to a fixed fraction (typically 1/8th) of the vertical extent of the display. Thus, when the scale control is set to 100 mV/div, for example, a signal with a peak-to-peak amplitude of 800 mV will occupy the full vertical extent of the display. A provision for injecting dc shift, or offset, is also provided; this aids in measuring signals that are not symmetric about zero volts.

Circuits and controls for changing the input coupling and impedance are often included as well. Coupling options usually consist of ac and dc, and sometimes GROUND, which is useful for quickly viewing a 0-V reference trace on the display. Impedance selections are generally 50 Ω and 1 MΩ. The 50-Ω selection is useful for making measurements where input sources having 50-Ω output impedance are connected to the oscilloscope with 50-Ω cables. In this situation the oscilloscope preserves the impedance of the entire system and does not introduce undesirable effects such as signal reflections. The 50-Ω selection is also used for many active probes, whose output amplifiers are commonly designed to drive a 50-Ω load. The 1-MΩ selection is appropriate for most other measurements, including those employing passive probes.
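The scale arithmetic just described amounts to one multiplication; a one-line check, assuming the eight vertical divisions stated above:

    volts_per_div, divisions = 0.100, 8      # 100 mV/div, 8 vertical divisions
    print(f"full-screen amplitude = {volts_per_div * divisions:.3f} V")   # 0.800 V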

Trigger

In Fig. 25.5.6, the signals occurring at the indicated nodes in Fig. 25.5.5 are shown for a single acquisition cycle of a signal connected to the channel 1 input. The trigger source is set to the internal, channel 1 position, and a positive-slope edge trigger is selected, so a trigger pulse is generated as the input signal crosses the indicated trigger level. In response to the trigger pulse, the ramp signal initiates the motion of the spot position on the display from left to right, and the unblanking gate signal reduces the CRT grid-to-cathode negative bias, causing the spot to become visible on the display screen. The ramp increases at a precisely controlled rate, causing the spot to progress across the display screen at the horizontal rate determined by the current time-per-division setting.

FIGURE 25.5.6 Waveforms from a single acquisition cycle.


When the spot has moved across the horizontal extent of the display, the unblanking gate switches negative, the trigger holdoff period begins, and the ramp retraces the spot to the starting point. At the end of trigger holdoff, the system is ready to recognize the next trigger and begin the next signal acquisition.

Delay Line

Some time is necessary after initiation of the trigger pulse for the sweep to attain a linear rate and for the display spot to reach full brightness. Since it is desirable to be able to view the voltage transient that caused the trigger, a means of storing the input signal during the startup delay is needed. This is accomplished by placing a delay line in the vertical path. The triggering signal at the output of the delay line is visible on the display screen while the unblanking gate signal is at its most positive level (Fig. 25.5.6). A total delay time of between 25 and 200 ns is required, depending on the oscilloscope model, with higher-bandwidth units requiring shorter delays.

Dual-Trace Operation

Oscilloscopes are equipped with two or more channels because the most important measurements compare time and amplitude relationships of multiple signals within the same circuit. However, a conventional analog oscilloscope CRT has only one write beam and thus is inherently capable of displaying only one signal. Thus the channel switch (see Fig. 25.5.5) is used to timeshare, or multiplex, the single display channel among the multiple inputs. The electronically controlled channel switch can be set manually, first to channel 1 and then later switched by the user to channel 2. However, a better emulation of simultaneous display is attained by configuring the oscilloscope to automatically and rapidly switch between the channels.

Two different switching modes are implemented, called alternate and chop. In alternate mode, the channel switch changes position at the end of each sweep, during retrace, while the write beam is blanked. This method works best at relatively fast sweep speeds and signal repetition rates; at slow sweep speeds the alternating action becomes apparent, and the illusion of simultaneity is lost. In chop mode, the channel switch is switched rapidly between positions at a rate that is not synchronized with the input signals or the sweep. This method is effective at slower sweep speeds but requires a relatively higher-bandwidth output amplifier to accurately process the combined chopped signal. Many analog oscilloscopes provide a control for the user to select between alternate and chop modes as appropriate to the measurement situation.

FIGURE 25.5.7 Main and delayed sweep generator block diagram.


FIGURE 25.5.8 Delayed sweep starting when the main sweep ramp exceeds the delay level.

Delayed Sweep

Some analog oscilloscope models include a second ramp generator, called a delayed sweep, and a second trigger generator, called a delayed trigger (Fig. 25.5.7), providing an additional method for controlling the placement of the viewing window in time relative to the main trigger event. The horizontal amplifier can still be connected to the output of the main ramp generator, in which case the operation is identical to that of the standard oscilloscope configuration described earlier. A comparator circuit is added whose output (4) switches when the main ramp signal exceeds a voltage called the delay level (Fig. 25.5.8). This dc voltage can be adjusted by the oscilloscope operator using a calibrated front-panel control. The main ramp is initiated by the main trigger pulse and increases at a precisely controlled rate (volts per second) determined by the sweep-speed setting. Therefore, the time elapsed between the trigger pulse and the comparator output state change is the reading on the delay control (in divisions) multiplied by the sweep-speed setting (in seconds per division).

The delayed sweep ramp is initiated after the delay period in one of two ways. The normal method immediately starts the delayed sweep when the delay comparator switches (see Fig. 25.5.8). The delayed trigger mode uses the delay comparator output to arm the delayed trigger circuit; then, when the delayed trigger condition is met, the delayed sweep starts (Fig. 25.5.9). The delayed sweep always has a shorter time-per-division setting than the main sweep.
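The delay arithmetic stated above is likewise a single multiplication; the settings in this check are invented for illustration.

    delay_divisions = 3.2          # delay-level control reading, in divisions
    sweep_speed = 100e-6           # main sweep speed, 100 µs/div
    delay_time = delay_divisions * sweep_speed
    print(f"delayed sweep starts {delay_time * 1e6:.0f} µs after the main trigger")   # 320 µs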

THE DIGITAL OSCILLOSCOPE

A block diagram of a two-channel digital oscilloscope is shown in Fig. 25.5.10. Signal acquisition is by means of an ADC. The ADC samples the input signal at regular time intervals and stores the resulting digital values in memory. Once the specified trigger condition is satisfied, the sampling process is interrupted, the stored samples are read from the acquisition memory, and the volts-versus-time waveform is constructed and graphed on the display screen. In some advanced models, the storage and readout of the samples are managed using two or more memory blocks, so that the oscilloscope can continue to acquire new data associated with the next trigger event while the display operation is proceeding. This keeps the display update rate high and helps to reduce the dead time during which the oscilloscope is unable to acquire data.

The time interval between samples is called the sample time and is the reciprocal of the sample rate. The signal that regulates the sampling process in the ADCs is called the sample clock, and it is generated and controlled by the time base circuit. A crystal oscillator is used as a reference for the time base to ensure the accuracy of the sample interval and ultimately of the time measurements made using the digital oscilloscope.


FIGURE 25.5.9 The delayed trigger starts the delayed sweep.


Sampling Methods

The exact process by which the digital oscilloscope samples the input signal in order to present a displayed waveform is called the sampling method. Three primary methods are employed. In real-time or "single-shot" sampling, a complete memory record of the input signal is captured on every trigger event.

FIGURE 25.5.10 Digital oscilloscope block diagram.


FIGURE 25.5.11 Random repetitive sampling builds up the waveform from multiple data acquisitions. Dots from the same acquisition bear the same number.

This is the most conceptually straightforward type of sampling, and its main advantage is that it can capture truly transient or nonrecurring events, provided the sample rate is sufficiently fast. Another advantage is the ability to capture "negative time," or time before the trigger event. This is especially valuable in fault analysis, where the oscilloscope is used to trigger on the fault and the operator may look "backward in time" to ascertain the cause. The main disadvantage of real-time sampling is that the effective bandwidth of the oscilloscope can be no higher than 50 percent of the maximum sample rate of the ADC.

When higher bandwidths are desired, random repetitive sampling may be used. This method is also called equivalent-time sampling. In this method, a complete waveform for display is built up from multiple trigger events. The ADC samples at a slower rate, and the samples are taken with a random time offset from the trigger event in order to ensure adequate coverage of all time regions of the input signal. This process is illustrated in Fig. 25.5.11. The major disadvantage of random repetitive sampling is that a repetitive input signal and a stable trigger event are required. Many digital oscilloscopes are capable of sampling in either real-time or repetitive mode, with the selection performed either by the operator or automatically by the oscilloscope based on the sweep speed.

The final type of sampling is referred to as sequential sampling. In this method, only one point is acquired per trigger event, and successive samplings take place farther and farther away from the trigger point. This is illustrated in Fig. 25.5.12. Sequential sampling is used to achieve even higher bandwidths (20 GHz and above) than are possible with random repetitive sampling, owing to the nature of the ADCs required. Oscilloscopes that use sequential sampling are not capable of capturing negative time and typically contain very limited trigger functionality.

FIGURE 25.5.12 Sequential sampling captures one sample per trigger and increases the delay tds after each trigger.
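The equivalent-time idea lends itself to a brief sketch. In this illustrative Python fragment (the signal, rates, and counts are all invented), each trigger event contributes a few samples at a random offset, and sorting the pooled samples by time after the trigger yields a record far denser than the ADC's real-time rate alone would allow.

    import math, random

    SAMPLE_TIME = 10e-9                              # ADC limited to 100 MSa/s

    def repetitive_signal(t):
        return math.sin(2 * math.pi * 50e6 * t)      # stable, repetitive 50-MHz input

    points = []
    for _ in range(200):                             # 200 trigger events
        offset = random.uniform(0, SAMPLE_TIME)      # random offset from the trigger
        points.extend((offset + k * SAMPLE_TIME,
                       repetitive_signal(offset + k * SAMPLE_TIME))
                      for k in range(5))             # a few samples per acquisition
    points.sort()                                    # interleave by time after trigger
    print(f"{len(points)} points covering {points[-1][0] * 1e9:.1f} ns")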


Quantization

Converting a continuous waveform into a series of discrete values is called quantization, and several practical limitations apply to this process as it is used in the digital oscilloscope. The signal is resolved into discrete levels only if it is within a specific range of voltages (Fig. 25.5.13). If the input signal is outside this range when a sample is taken, either the maximum code or the minimum code will be the output from the ADC. The limited window in voltage is similar to that encountered in the analog oscilloscope CRT, where a sufficiently large signal amplitude causes the trace to disappear off the edge of the display screen. As in the analog instrument, the vertical scale and offset controls are used to adjust the input waveform so that the desired range of voltages on the waveform falls within the digitizer voltage window.

FIGURE 25.5.13 An ADC quantizes a continuous signal into discrete values having a specific range and spacing.

Resolution

Voltage resolution is determined by the total number of individual codes that can be produced. A larger number permits a smoother and more accurate reproduction of the input waveform but increases both the cost of the oscilloscope and the difficulty in achieving a high sample rate. ADCs are usually designed to produce a total code count that is an integer power of 2, and a unit capable of 2ⁿ levels of resolution is called an n-bit digitizer. Digital oscilloscopes are available in resolutions from 6 to 12 bits, with the resolution varying generally in an inverse relationship to the maximum sample rate. Eight bits is the most frequently used resolution. The best possible intrinsic voltage resolution, expressed as a fraction of the full-scale range, is 2⁻ⁿ, e.g., 0.4 percent for an 8-bit ADC. Many digital oscilloscopes provide averaging and other types of filtering modes that can be used to increase the effective resolution of the data.

Acquisition Memory

Each sample code produced must be stored immediately in the acquisition memory, so the memory must be capable of accepting data from the digitizer continuously at the oscilloscope's sample rate. For example, the memory in each channel of an 8-bit, 2-GSa/s oscilloscope must store data at the rate of 2 × 10⁹ bytes/s. The memory is arranged in a serially addressed, conceptually circular array (Fig. 25.5.14). Each storage location is written to in order, progressing around the array until every cell has been filled. Then each subsequent sample overwrites what has just become the oldest sample contained anywhere in the memory, the progression continues, and a memory with nm storage locations thereafter always contains the most recent nm samples. The nm samples captured in memory represent a total waveform-capture time of nm times the sample time interval. For example, an oscilloscope operating at 2 GSa/s with a 32,000-sample acquisition memory captures a 16-µs segment of the input signal.

If sampling and writing to acquisition memory stop immediately when the trigger pulse occurs (delay = 0 in Fig. 25.5.15), then the captured signal entirely precedes the trigger point (Fig. 25.5.15a). Setting the delay greater than zero enables sampling to continue for a predetermined number of samples after the trigger point. Thus a signal acquisition can capture part of the record before and part after the trigger (Fig. 25.5.15b), or capture information after the trigger (Fig. 25.5.15c).
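Both behaviors just described, clipping to the minimum or maximum code outside the voltage window and circular overwriting of the oldest sample, fit in one short sketch. All parameters below are illustrative.

    class AdcWithMemory:
        def __init__(self, bits=8, v_min=-1.0, v_max=1.0, depth=8):
            self.levels = 2 ** bits              # 2^n codes; resolution 2^-n of full scale
            self.v_min, self.v_max = v_min, v_max
            self.memory = [0] * depth            # circular acquisition memory
            self.index = 0

        def sample(self, volts):
            frac = (volts - self.v_min) / (self.v_max - self.v_min)
            frac = min(max(frac, 0.0), 1.0)      # outside the window: clip to min/max code
            code = min(int(frac * self.levels), self.levels - 1)
            self.memory[self.index] = code       # overwrite the oldest sample
            self.index = (self.index + 1) % len(self.memory)

    adc = AdcWithMemory()
    for v in [0.0, 0.5, 2.0, -3.0]:              # the last two are over/under range
        adc.sample(v)
    print(adc.memory[: adc.index])               # [128, 192, 255, 0]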

FIGURE 25.5.14 Acquisition memory is arranged in a circular array so that the next sample always overwrites the oldest sample in memory.

In recent years, deep-memory (>1 Msample) digital oscilloscopes have become more prevalent. Deep memory is valuable because it allows the oscilloscope to capture longer periods of time while maintaining a high sample rate. To see why this is so, consider an oscilloscope with a maximum sample rate of 5 GSa/s and 10k samples of acquisition memory. If the operator sets the horizontal scale control to 100 µs/div, the oscilloscope must capture 1 ms of time in order to present a full screen of information. To do this, it must reduce the sample rate to 10 MSa/s (10k samples/1 ms) or lower; the oscilloscope will exhaust its memory if it samples faster than 10 MSa/s. An oscilloscope with a maximum sample rate of 2 GSa/s but 8M samples of memory, on the other hand, is able to maintain its maximum sample rate even at the 100 µs/div setting.
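The trade-off worked out in this paragraph reduces to a small function; the two configurations below are the ones from the text, with ten horizontal divisions assumed.

    def actual_sample_rate(max_rate_sa_s, memory_samples, time_per_div, divisions=10):
        # The screen spans time_per_div * divisions; memory limits the usable rate.
        capture_time = time_per_div * divisions
        return min(max_rate_sa_s, memory_samples / capture_time)

    for max_rate, depth in [(5e9, 10e3), (2e9, 8e6)]:
        rate = actual_sample_rate(max_rate, depth, 100e-6)
        print(f"{depth:.0e}-sample memory -> {rate:.1e} Sa/s at 100 µs/div")
    # 1e+04-sample memory -> 1.0e+07 Sa/s; 8e+06-sample memory -> 2.0e+09 Sa/s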

FIGURE 25.5.15 Trigger point can be placed anywhere within or preceding the captured record. (a) Delay = 0; (b) delay = nm/2; (c) delay = nm.


FIGURE 25.5.16 The relationship between memory depth and sample rate.

Maintaining sample rate in a digital oscilloscope is important because it reduces the likelihood of aliasing (undersampling) the high-frequency components of the input signal and ensures that all the signal details will be captured, even at slower sweep speeds. Figure 25.5.16 shows the relationship between sweep speed and actual sample rate for the two oscilloscopes discussed in this section. Deep memory both delays the onset of the sample-rate reduction as the oscilloscope is slowed down and reduces the extent of the reduction once it does occur.

Advanced Digital Oscilloscope Features

Modern digital oscilloscopes typically contain many features that never appeared in their analog counterparts. These include automatic signal scaling, automatic measurements, extensive waveform math functions, signal mask testing, saving and recalling of setup information, waveform data, and display images to internal or external storage media, and integrated calibration and self-test capabilities. Many of these features are made possible by the fact that these instruments are controlled by increasingly powerful microprocessors. Indeed, most digital oscilloscopes now contain more than one microprocessor, with each one focused on a different aspect of the overall product operation. In recent products, one of the processors is often a complete personal computer (PC) that is an integrated part of the oscilloscope. This has led to still more features in the areas of communications and connectivity; for example, PC-based oscilloscopes are currently available with capabilities such as networking, voice control, and web control. Advances in integrated circuit (IC) capabilities have also led to new features such as units with more than two channels; units with advanced trigger modes like pattern, state, sequence, risetime, and pulse width; and units with segmented memory, for storing separate acquisitions from different trigger events for later recall and analysis.

Mixed Analog and Digital Signal Oscilloscopes

In response to the growing digital signal content in electronic circuits, several manufacturers have developed mixed analog and digital signal oscilloscopes (abbreviated MSO) that contain 16 digital inputs in addition to the two or four standard oscilloscope channels.


These digital inputs pass through comparators (1-bit ADCs) and function similarly to the inputs on a logic timing analyzer. They are displayed along with the standard channels and may also be used for triggering and automatic measurements. Measuring up to 20 channels with these oscilloscopes gives the user more information at one time about the circuit under test. An example of a display from an MSO is shown in Fig. 25.5.1.

THE FUTURE OF OSCILLOSCOPES

Oscilloscopes have existed in various forms for over 60 years. They are at once basic general-purpose tools and advanced, powerful measurement systems. As with other technology-based products, they will continue to evolve, in response to both emerging user needs and the availability of new technologies. Bandwidths, sample rates, and memory depths will continue to increase, and the trend toward more postacquisition analysis will continue. An area of increased attention is customization and application-specific measurement sets. Another is the combination of classic oscilloscope capabilities with those of other test and measurement products such as logic analyzers and signal generators; mixed signal oscilloscopes are an example of such a combination. While the primary value of the oscilloscope, showing the user a picture of his or her signals versus time, will remain unchanged, many other attributes will indeed change, likely in as big a way as the digital oscilloscope has changed from its analog predecessor.


CHAPTER 25.6

STANDARDS-BASED MODULAR INSTRUMENTS

William M. Hayes

INTRODUCTION

Modular instruments employ a frame (Fig. 25.6.1) that serves as a host. These frames allow multiple switch, measurement, and source cards to share a common backplane. This makes it possible to configure instruments that can accommodate a range of input/output (I/O) channels. It also makes it possible to tailor measurement capabilities to the specific applications being addressed.

Modular Standards

The modular standards described below are industry standards:

• VME standard
• VXI standard
• Personal computer (PC) plug-ins
• CompactPCI standard

PC plug-ins are not part of a formal instrument standard. However, the ubiquity of the personal computer has made the PC motherboard I/O bus a de facto standard for instruments. Although all these standards are used for instrument systems, only VXI and a derivative of CompactPCI, called PXI, were developed expressly for instrumentation. For general-purpose instrumentation, VXI has the most products; PXI is emerging in the market and generally offers the same features as VXI. Open standards-based modular instruments are compatible with, and can therefore accept, products from many different vendors, as well as user-defined and constructed modules.

Modular instruments generally employ a computer user interface instead of displays and controls embedded in the instrument's frame or package. By sharing a computer display, modular instruments save the expense of multiple front-panel interfaces. Without the traditional front panel, the most common approach to using modular instruments involves writing a test program that configures the instruments, conducts the measurements, and reports results. For this reason, modular instruments typically are supplied with programmatic software interfaces, called drivers, to ease the task of communicating with an instrument module from a programming language.
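The configure/measure/report pattern described above can be sketched with a stand-in driver object; the class and method names here are hypothetical, not a real vendor API.

    class HypotheticalDMMDriver:
        """Stand-in for a vendor-supplied programmatic driver."""
        def configure(self, function="DCV", range_v=10.0):
            print(f"configured: {function}, {range_v} V range")
        def measure(self):
            return 4.999                              # placeholder reading

    def run_test():
        dmm = HypotheticalDMMDriver()                 # driver hides backplane details
        dmm.configure(function="DCV", range_v=10.0)   # 1. configure the instrument
        reading = dmm.measure()                       # 2. conduct the measurement
        print("PASS" if abs(reading - 5.0) < 0.05 else "FAIL")   # 3. report results

    run_test()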


FIGURE 25.6.1 A typical modular instrument: VXI modules have been configured in this dual-rack structure, set on castors, to serve as a portable ground support test system for military avionics. (Photo courtesy of Agilent Technologies)

Advantages of Modular Instruments

Modular instruments are an excellent choice for high-channel-count measurements, automated test systems, applications where space is at a premium, and complex measurements where several instruments need to be coordinated.

ELEMENTS OF MODULAR INSTRUMENTS

Figure 25.6.2 shows the key elements of modular, standards-based instruments: a frame containing a backplane with measurement and switching modules, all working with the device under test (DUT). Control is by system software running on the computer, including the programming language and drivers that make up the test program.


FIGURE 25.6.2 Key elements of modular standards-based instruments.

THE SYSTEM BACKPLANE

At the heart of a modular instrument is the system backplane, the set of buses connecting the system modules together. These are high-speed parallel buses. All of the modular standards have a data-transfer bus, a master/slave arbitration bus (except ISA PC plug-ins), an interrupt bus, and a bus for special functions. The VME (32-bit) backplane is one example.

FORM FACTORS

One place to begin understanding the similarities and differences between the standards is board size.

Board Size

To ensure interchangeability among vendors, standardization of the instrument module's board size and spacing is important. With the exception of the PC plug-ins, all the other modular forms use Eurocard board sizes (see Table 25.6.1). These board sizes were standardized as IEEE 1101.1, ANSI 310-C, and IEC 297.

VME and VXI Standards

The VME standard uses the first two sizes, referring to them as single-height and double-height boards. The VXI standard uses all four sizes and refers to them as sizes A, B, C, and D, respectively. Most VXI manufacturers have adopted the B and C sizes. CompactPCI uses the first two sizes, referring to them by the Eurocard names 3U and 6U.


TABLE 25.6.1 Eurocard Sizes

Eurocard size    VME name          VXI name    CompactPCI name
10 × 16 cm       Single height     A           3U
23 × 16 cm       Double height     B           6U
23 × 34 cm       —                 C           —
36 × 34 cm       —                 D           —

PC Plug-in Modules

PC plug-in modules use the full- and half-card sizes adopted in the original IBM PC. The board size is approximately 12.2 cm × 33.5 cm for the full-size card and 12.2 cm × 18.3 cm for the half-size card.

VMEBUS (VME STANDS FOR VERSAMODULE EUROCARD)

The VMEbus specification was developed for microcomputer systems using single or multiple microprocessors. The specification was not originally intended for instrumentation, but the concept of computer functions integrated with measurement and control functions on a single backplane was a factor in its creation. The VMEbus International Trade Association (VITA) released the specification in August 1982, and it was approved as IEEE 1014-87 in March 1987. In 1994, the VMEbus specification was amended to include 64-bit buses and became known as VME64. Additional backplane connectors are involved with VME64, so care in choosing system components is required. VMEbus products are used in a wide variety of applications, including industrial controls and telecommunications.

VME System Hardware Components

A VME system comprises one or more frames, an embedded controller module, optionally various computer storage and I/O modules (e.g., LAN), and various measurement and switch modules. There is no common programming model or instrument driver standard; as with other devices on a computer bus, programs must read and write device registers. In some cases, vendors supply software functions or subroutines to help with the programming; a register-level sketch follows the component summaries below.

VME Frames. Frames refer to the backplane and power supply, as well as the packaging that encloses a system (Fig. 25.6.3). System builders can choose a solution at many different levels, from the component level (i.e., a backplane) to complete powered desktop enclosures. Backplanes, subracks, and enclosures are available from many manufacturers in single-, double-, and mixed-height sizes. Typically a 19-in rack will accommodate 21 cards. Power supplies are available from 150 to 1500 W.

VME-Embedded Computers. VME is well suited for building single- and multiple-processor computer systems. A large number of embedded processors are available.

VME Switching. Many measurement systems require switching between measurement channels and signal paths from the device being tested. Typically these switches are multiplexers or matrix switches. There are several simple relay cards available in VME, but few multiplexers or matrix switches.

VME Software. Many VME systems are developed with real-time operating systems. Software is available for VxWorks, pSOS+, OS-9, and various other computer operating systems. Source code for the C programming language is the most common. There is no standardized driver software.
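Register-level VME programming of the kind just described can be suggested with a stand-in register window. On a real system the bytearray below would be a memory-mapped VMEbus address region, and the register offsets are invented for illustration.

    import struct

    registers = bytearray(64)                 # stand-in for a module's register space
    CSR_OFFSET, DATA_OFFSET = 0x00, 0x04      # hypothetical control/status and data registers

    def write_reg(offset, value):
        struct.pack_into(">I", registers, offset, value)    # VMEbus data are big-endian

    def read_reg(offset):
        return struct.unpack_from(">I", registers, offset)[0]

    write_reg(CSR_OFFSET, 0x0001)             # e.g., set a "start measurement" bit
    print(hex(read_reg(CSR_OFFSET)))          # 0x1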


FIGURE 25.6.3 VME Backplanes and subracks are available in many different configurations. (Photo courtesy of APW)

VXI (VMEBUS EXTENSIONS FOR INSTRUMENTATION)

The VXI standard is a derivative of the VME standard. It was driven by a U.S. Air Force program in the 1980s to design a single Instrument-on-A-Card (IAC) standard, with the objective of substantially reducing the size of electronic test and operational systems. In April 1987, a number of companies started discussions on creating an IAC standard that would benefit both the commercial and military test communities. The VXIbus consortium was formed, and the VXIbus system specification was released in July of 1987. The IEEE adopted it as IEEE 1155 in 1992. After the basic VXIbus specification was adopted, it was recognized that the software aspect of test-system development should also be addressed. In 1993, the VXIplug&play Systems Alliance was formed, and several system frameworks were defined, based on established industry-standard operating systems. The Alliance released Revision 1.0 of the VXIplug&play specification in 1994; the latest revision, Revision 2, was released in 1998.

Common VXI Functions and Applications

VXI is used for most instrument and switching applications, including:

• Automated test systems. With its aerospace/defense roots, some of the specific applications have been for weapons and command/control systems, and as the foundation for operational systems, such as an artillery firing control system.

• Manufacturing test systems. Examples include cellular phone testing and testers for automotive engine-control modules.

• Data acquisition. In particular, data-acquisition applications include physical measurements such as temperature and strain. Examples include complex and sophisticated measurements, such as satellite thermal-vacuum testing or aircraft stress analysis.

VXI Standard

The VXI standard, including VXIplug&play, is a comprehensive system-level specification. It defines backplane functionality, electrical specifications, mechanical specifications, power management, electromagnetic compatibility (EMC), system cooling, and the programming model.


The backplane functionality has features intended specifically for instrumentation.

VXI System Hardware Components

VXI systems can be configured either with an external computer connected to a VXI mainframe or with an embedded computer. External computers are the more common configuration because the cost is low and they allow the user to choose the latest and fastest PC available. A VXI system configured with an embedded computer is formed with a VXI computer module, a mainframe, various measurement and switching modules, and the supporting software, including VXIplug&play drivers.

VXI Mainframes. VXI system builders have a choice of backplanes or complete mainframes. Mainframes are the most common choice (Fig. 25.6.4). They include the backplane, power supply, and enclosure. B-size mainframes are typically available with 9 or 20 slots and supply up to 300 W. Mainframes for C-size modules are available with 4, 5, 6, or 13 slots. D-size mainframes are available with 5 or 13 slots. Power capabilities range from 550 to 1500 W. Backplanes are available with basically the same slot choices as mainframes.

VXI External Connection to Computers. A VXI module is required to connect and communicate with an external computer. Two choices are available: Slot 0 controllers and bus extenders. The most common bus extender is called MXIbus. Slot 0 controllers and bus extenders must be placed in a unique slot of the mainframe. They receive commands from the external computer, interpret them, and then direct them to the various measurement and switch modules. Both provide special VXI functions that include the Resource Manager. Bus extenders translate a personal computer's I/O bus to the VXI backplane using a unique high-speed parallel bus. Transactions over a bus extender have a register-to-register, bit-level flavor. The advantage of bus extenders is high speed; cable lengths, however, must be relatively short.

FIGURE 25.6.4 A 13-slot C-size VXI mainframe. Notice the system monitoring displays, which are unique relative to other modular instrument types. (Photo courtesy of Agilent Technologies)


Slot 0 controllers support these register-level transactions but also support message-based transactions. These transactions are ASCII commands following the standard commands for programmable instrumentation (SCPI) standard, accepted by the instrument industry in 1990. SCPI commands are very readable, although slower to interpret. The advantage of message-based transactions is easy and fast program development; the introduction of VXIplug&play drivers has since superseded SCPI for easier, quicker programming. A short SCPI exchange is sketched below.

Slot 0 controllers require a relatively high-speed communication interface to communicate with an external computer. GP-IB, known as IEEE-488, and Firewire, known as IEEE-1394, are the standard interfaces for Slot 0 controllers. The external computer must have a communication interface matching that of the Slot 0 controller. Firewire interfaces are a computer-industry standard and are often built into PCs as standard equipment; basic Firewire software-driver support was included in Windows 98 from Microsoft. GP-IB has been an instrument communication standard since 1975. Because it is not a computer standard, a plug-in interface board must be added to the external PC for connection to the Slot 0 controller. GP-IB interfaces are about five times slower than Firewire.

FIGURE 25.6.5 VXI switch module and termination blocks for wiring, C-size. (Photo courtesy of Agilent Technologies)

Downloaded from Digital Engineering Library @ McGraw-Hill (www.digitalengineeringlibrary.com) Copyright © 2004 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use as given at the website.

Christiansen_Sec_25.qxd

10/28/04

12:20 PM

Page 25.73

STANDARDS-BASED MODULAR INSTRUMENTS STANDARDS-BASED MODULAR INSTRUMENTS

25.73
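For a sense of what these message-based transactions look like, the fragment below lists a few representative SCPI strings as they might appear in a C program. The multimeter-style commands shown are illustrative only; the exact command set an instrument accepts is defined by its own documentation.

    /* Representative SCPI command strings -- human-readable ASCII.
       These multimeter-style commands are illustrative only; each
       instrument's manual defines the commands it actually accepts. */
    const char *scpi_examples[] = {
        "*RST",                    /* IEEE 488.2 common command: reset */
        "*IDN?",                   /* query the instrument's identity */
        "CONF:VOLT:DC 10,0.001",   /* configure a 10-V DC measurement */
        "READ?",                   /* trigger and return a reading */
    };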

Slot 0 controllers require a relatively high-speed communication interface to communicate with an external computer. GP-IB, known as IEEE 488, and Firewire, known as IEEE 1394, are the standard interfaces for Slot 0 controllers. The external computer must have a communication interface matching the Slot 0 controller. Firewire interfaces are a computer-industry standard and are often built into PCs as standard equipment; basic Firewire software-driver support was included in Microsoft's Windows 98. GP-IB has been an instrument communication standard since 1975. Because it is not a computer standard, a plug-in interface board must be added to the external PC for connection to the Slot 0 controller. GP-IB interfaces are about five times slower than Firewire.

FIGURE 25.6.5 VXI switch module and termination blocks for wiring, C-size. (Photo courtesy of Agilent Technologies)

VXI Embedded Computers. Embedded controllers are available for C-size mainframes. Products with Intel-architecture processors running Windows 3.11/95/98/NT are available from several vendors.

VXI Measurement Modules. VXI offers the broadest and most complete line of instrumentation modules (see Fig. 25.6.5 for an example of a VXI measurement module). Like the other modular standards, VXI offers a full line of digital I/O modules and modules containing ADCs and DACs. These functions are commonly used for data-acquisition and industrial-control applications.

Switching Modules. VXI also has the broadest line of switching modules among the modular instrument standards (see Fig. 25.6.5 for an example of a switching module). Switching modules can be used in many different ways and include simple relays, multiplexers and matrices, RF/microwave switches, and analog/FET switches.

VXI Software
The VXI standard has emphasized system software as an important element in improving the interoperability of modules from multiple vendors and in giving system developers a head start in developing their software. The VXIplug&play Systems Alliance developed a series of system frameworks encompassing both hardware and software. Six frameworks have been defined:

• Windows 3.1
• Windows 95
• Windows NT
• HP-UX
• Sun
• GWIN

In common with all of the frameworks is a specification for basic instrument communication called the virtual instrument software architecture (VISA). It is a common communications interface regardless of whether the physical interconnect is GP-IB, Ethernet, or Firewire. Also common among the frameworks is a specification for instrument drivers. Instrument drivers are functions that are called from programming languages to control instruments. VXIplug&play drivers must include four features:

• C function library files
• Interactive soft front panel
• Knowledge base file
• Help file

The C function library files must include a dynamic link library (.DLL or .SL), ANSI C source code, and a function panel file (.FP). The functions must use the VISA I/O library for all I/O operations. The interactive soft front panel is a graphical user interface for directly interacting with a VXI instrument module. Some front panels closely resemble the front panel of a traditional box instrument. The knowledge base file is an ASCII description of all the instrument module's specifications. The help file provides information on the C function library, on the soft front panel, and on the instrument itself.
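To show how a driver's underlying I/O looks, the following minimal sketch uses the standard VISA C API to open a session to a VXI module and issue the IEEE 488.2 *IDN? identification query. The resource string "VXI0::24::INSTR" (logical address 24) is an assumed example, and error handling is reduced to the bare minimum.

    #include <stdio.h>
    #include <visa.h>   /* VISA I/O library */

    int main(void)
    {
        ViSession rm, inst;
        ViUInt32  n;
        char      id[256];

        /* open the VISA resource manager, then a session to one module */
        if (viOpenDefaultRM(&rm) < VI_SUCCESS)
            return 1;
        if (viOpen(rm, "VXI0::24::INSTR", VI_NULL, VI_NULL, &inst) < VI_SUCCESS) {
            viClose(rm);
            return 1;
        }

        viWrite(inst, (ViBuf)"*IDN?\n", 6, &n);   /* send the query */
        viRead(inst, (ViBuf)id, sizeof id - 1, &n); /* read the ASCII reply */
        id[n] = '\0';
        printf("instrument identity: %s", id);

        viClose(inst);
        viClose(rm);
        return 0;
    }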
VXI Standard Specifications
Because the intent of the VXI standard was to create a standard specifically for instruments, several extensions were made to the VME specification. These extensions include additions to the backplane bus and to the power, cooling, and RFI specifications.

Unique VXI Signals
The A-size board has only one connector (P1) and has no backplane extensions over VME. The B-, C-, and D-size boards have at least a second connector (P2). P2 adds the following:

• Additional supply voltages to support analog circuits: −5.2 V, −2 V, +24 V, and −24 V. Additional pins were also added to increase the maximum current capacity of the +5 V supply.
• A 10-MHz differential ECL clock for synchronizing several modules.
• Two parallel ECL trigger lines.
• Eight parallel TTL trigger lines.
• A module identification signal.
• A 12-line local bus that connects adjacent modules. The manufacturer of the module defines the functionality of these lines.
• An analog summing bus terminated in 50 Ω.

Trigger Lines
The TTL and ECL trigger lines are open-collector lines used between modules for trigger, handshake, clock, and logic-state communications. Several standard protocols have been defined for these lines, including synchronous (SYNC), asynchronous (ASYNC), and start/stop (STST).

Module Power
The VXIbus specification has set standards for mainframe and module power, mainframe cooling, and electromagnetic compatibility between modules. This ensures that products from multiple vendors will operate together.

Cooling Specification
The mainframe and module-cooling specification focuses on the test method for determining whether proper cooling will be available in a system.

PERSONAL COMPUTER PLUG-INS (PCPIS)
Since the IBM PC was introduced in 1981, a number of companies have designed products to plug into the open slots of the PC motherboard. Three standards have defined those open slots: ISA (Industry Standard Architecture), EISA (Extended Industry Standard Architecture), and PCI (Peripheral Component Interconnect). EISA is an extension of ISA. All three were defined by the computer industry, and none includes support for instrumentation. In 1994, a group of industrial computer vendors formed a consortium to develop specifications for systems and boards used in industrial computing applications. They called themselves PICMG (PCI Industrial Computer Manufacturers Group). Today, the group includes more than 350 vendors. The group's first specification defined passive backplane computers: PICMG 1.0 PCI-ISA Passive Backplane was adopted in October 1994. A second specification, PICMG 1.1 PCI-PCI Bridge Board, was adopted in May 1995.

PCPI Common Functions and Applications
The most common applications are data-acquisition, industrial-control, and custom electronic-control systems.


FIGURE 25.6.6 PC plug-in: Multifunction data acquisition module, 16 analog inputs, 2 analog outputs, 24 digital I/O. (Photo courtesy of ComputerBoards, Inc.)

PCPI System Components
Two different configurations are common with PC plug-in measurement systems. The first configuration consists simply of a personal computer and a few measurement modules. The second configuration comprises a passive backplane, a single-board computer, and several measurement modules. This latter approach can also be referred to as an industrialized PC. In both cases, the measurement modules are the same (Fig. 25.6.6).

PCPI Frames. In the simplest and most common configuration, the personal computer provides the backplane, power supply, and packaging for the measurement system. System builders can choose from a variety of PC form factors, from desktops to server towers. PC backplanes usually provide seven or eight slots. After the installation of standard PC peripherals, only a couple of slots are free for instrumentation. Extender frames are available, but not common. This type of PC plug-in system tends to be small, used where only a few instrument functions are needed.

PCPI Measurement Modules. PC plug-in instrument functions are primarily digital I/O modules and modules containing ADCs and DACs. In addition, there is a small number of basic instruments, including oscilloscopes, digital multimeters, and pulse/function generators. Such modules are commonly programmed through registers mapped into the PC's I/O space; a sketch follows.
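As a concrete illustration of this register-level programming, the sketch below polls a hypothetical ISA-bus data-acquisition board through x86 port I/O on Linux. The base address and the register and bit assignments are invented for illustration; a real board's manual defines its own map.

    /* Minimal sketch of register-level I/O to a hypothetical ISA-bus
       data-acquisition board. BASE, the register offsets, and the bit
       meanings are invented; requires root privileges on Linux/x86. */
    #include <stdio.h>
    #include <sys/io.h>   /* ioperm(), outb(), inb() */

    #define BASE     0x300            /* assumed jumper-selected base address */
    #define CTRL     (BASE + 0)       /* control register (assumed) */
    #define STATUS   (BASE + 1)       /* status register (assumed) */
    #define DATA_LO  (BASE + 2)       /* conversion result, low byte */
    #define DATA_HI  (BASE + 3)       /* conversion result, high byte */

    int main(void)
    {
        if (ioperm(BASE, 4, 1) != 0) {       /* request access to the ports */
            perror("ioperm");
            return 1;
        }
        outb(0x01, CTRL);                    /* start a conversion (assumed bit) */
        while ((inb(STATUS) & 0x80) == 0)    /* poll "conversion done" (assumed bit) */
            ;
        unsigned raw = inb(DATA_LO) | (inb(DATA_HI) << 8);  /* 16-bit result */
        printf("raw ADC code: %u\n", raw);
        return 0;
    }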

COMPACTPCI
CompactPCI is a derivative of the Peripheral Component Interconnect (PCI) specification from the personal-computer industry. CompactPCI was developed for industrial and embedded applications, including real-time data acquisition and instrumentation. The specification is driven and controlled by PICMG, the same group covered under PC plug-ins. The CompactPCI specification was released in November 1995. To support the needs of general-purpose instrumentation, a PICMG subgroup developed a CompactPCI instrumentation specification called PXI (PCI eXtensions for Instrumentation). The first public revision of PXI was released in August 1997. The PXI specification continues to be developed and maintained by the PXI Systems Alliance (PXISA), a separate industry group.


CompactPCI Specification
Like the VME specification, the CompactPCI specification defines backplane functionality, electrical specifications, and mechanical specifications. The functionality is the same as the PCI bus, as defined by the PCI local bus specification, but with some additions. These additions provide pushbutton reset, power-supply status, system-slot identification, and legacy IDE interrupt features. A unique feature of CompactPCI is Hot Swap, the ability to insert or remove modules while power is applied. This is an extension to the core specification.

CompactPCI System Components
A CompactPCI measurement system usually consists of one or more frames, an embedded computer module, and various measurement modules.

Frames. Frames refer to the backplane, power supply, and packaging that enclose a system. System builders can choose a solution at many different levels, from the component level (i.e., a bare backplane) to complete powered desktop enclosures. CompactPCI backplanes are available with two, four, six, or eight slots. It is possible to go beyond eight slots using a bridge chip on the frame or a bridge card.

Embedded Computers. The PCI bus is commonly used in many computers, from personal computers to high-end workstations. For that reason, a large number of CompactPCI embedded processors are available.

Measurement Modules. Instrument functions in CompactPCI are primarily digital I/O modules and modules containing ADCs and DACs. In addition, there is a small but growing number of traditional instruments, including oscilloscopes, digital multimeters, and serial data analyzers.

Switching Modules. CompactPCI, specifically PXI, includes a wide range of switching products, including simple relays, multiplexers and matrices, RF switches, and FET switches.

CompactPCI Software. PXI adopted as part of its specification many of the features of VXIplug&play software. It adopted software frameworks for Windows 95 and Windows NT. These frameworks are required to support the VISA I/O standard.


CHAPTER 25.7

EMBEDDED COMPUTERS IN ELECTRONIC INSTRUMENTS
Tim Mikkelsen

INTRODUCTION
All but the simplest electronic instruments have some form of embedded computer system. More and more computing is being embedded in instruments because of reductions in the cost and size of computing. This both increases available computing power and increases the number of roles computing plays in the instrument domain. Consequently, systems that were previously a computer or PC plus an instrument are now just an instrument. This transition is happening because of the demand for functionality, performance, and flexibility in instruments, and also because of the low cost of microprocessors. Embedded computers are almost always built from microprocessors or microcontrollers. In fact, the cost of microprocessors is sufficiently low, and their value sufficiently high, that most instruments have more than one embedded computer.

Embedded Computer Model
The instrument and its embedded computer normally interact with four areas of the world: the measurement, the user, peripherals, and external computers. The instrument needs to receive measurement input and/or send out source output. A source is defined as an instrument that generates or synthesizes signal output; an analyzer is an instrument that analyzes or measures input signals. These signals can consist of analog and/or digital signals. The front end of the instrument is the portion that conditions, shapes, and modifies the signal to make it suitable for acquisition by the analog-to-digital converter. The instrument normally interacts with the user of the measurement. The instrument also generally interacts with an external computer, usually connected for control or data-connectivity purposes. Finally, the instrument is often connected to local peripherals, primarily for printing and storage. Figure 25.7.1 is an example of an embedded computer. A generalized block diagram of the embedded computer in an instrumentation environment appears in Fig. 25.7.2.

The embedded computer is typically involved in the control and transfer of data via external interfaces. This enables the connection of the instrument to external PCs, networks, and peripherals. Examples include local area networks (LANs), IEEE 488 (also known as GPIB or HP-IB), RS-232 (serial), Centronics (parallel), Universal Serial Bus (USB), and IEEE 1394 (Firewire). The embedded computer is also typically involved in the display to and input from the user. Examples include keyboards, switches, rotary pulse generators (RPGs, i.e., knobs), LEDs (single or alphanumeric displays), LCDs, CRTs, and touch screens.


FIGURE 25.7.1 Typical of the embedded computers on the market is the Agilent Technologies E1498A Embedded Controller. This single-slot, C-sized, message-based computer was developed specifically for VXI.

FIGURE 25.7.2 Generalized block diagram of an embedded computer.


Many instruments have a large amount of configuration information because of their advanced capabilities. The embedded computer enables saving and recalling of the instrument state. The embedded computer is also sometimes used for user customization of the instrument. This can range from simple configuration modifications to complete instrument programmability.

With very powerful embedded computers available in instruments, it is often unnecessary to connect the instrument to an external computer for more-advanced tasks. Examples include go/no-go (also known as pass/fail) testing and data logging, as sketched below. The embedded computer almost always performs calculations, ranging from very simple to very complex, which convert the raw measurement data to the target instrument information for measurement instruments, or vice versa for source instruments. The embedded computer generally controls the actual measurement process. This can include control of a range of functions such as analog-to-digital conversion, switching, filtering, detection, and shaping. The embedded computer is almost always used to perform at least a small amount of self-testing. Most instruments use embedded computers for more extensive calibration tests.
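As a small example of such a stand-alone task, the sketch below performs a go/no-go limit test entirely in the embedded computer. The limits and the measure() stub are assumptions for illustration.

    /* Minimal go/no-go (pass/fail) test of the kind an embedded computer
       can run without an external PC. Limits and measure() are placeholders. */
    #include <stdio.h>

    static float measure(void)
    {
        return 5.02f;   /* stub: a real instrument would read its ADC here */
    }

    int main(void)
    {
        const float lo = 4.75f, hi = 5.25f;   /* assumed test limits, volts */
        float v = measure();
        int pass = (v >= lo && v <= hi);
        printf("%.3f V : %s\n", v, pass ? "PASS" : "FAIL");
        return pass ? 0 : 1;
    }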

BENEFITS OF EMBEDDED COMPUTERS IN INSTRUMENTS
In addition to the direct uses of embedded computers, it is instructive to think about the value of an embedded computer inside an instrument. The benefits occur throughout the full life cycle of an instrument, from development through maintenance. One of the biggest advantages of embedding computers within an instrument is that they allow several aspects of the hardware design to be simplified. In many instruments, the embedded computer participates in acquisition of the measurement data by servicing the measurement hardware. Embedded computers also simplify the digital design by providing mathematical and logical manipulations that would otherwise be done in hardware. They also provide calibration, both through numerical manipulation of data and by controlling calibration hardware. This is the classic transition of function from hardware to software.

The embedded computer allows for lower manufacturing costs through effective automated testing of the instrument. Embedded computers are also a benefit because they allow for easier and lower-cost defect fixes and upgrades (with a ROM or program change). When an instrument is used stand-alone, embedded computers can make setup much easier by providing online help or setup menus. This also includes automatic or user-assisted calibration. Although many instruments are stand-alone, a large number are part of a larger system. Embedded computers often make it easier to connect an instrument to a computer system by providing multiple interfaces and simplified or automatic setup of interface characteristics.

Support Circuitry
Although requirements vary, most microprocessors require a certain amount of support circuitry. This includes the generation of a system clock, initialization hardware, and bus management. In a conventional design, this often requires two or three external integrated circuits (ICs) and five to ten discrete components. The detail of the design at this level depends heavily on the microprocessor used. In complex or high-volume designs, an application-specific integrated circuit (ASIC) can be used to provide much of this circuitry.

Memory
The microprocessor requires memory for both program and data storage. Embedded computer systems usually employ both ROM and RAM. Read-only memory (ROM) retains its contents if power is removed. Random access memory (RAM) is a historical but inadequate term that really refers to read/write memory—memory whose contents can be changed. RAM is volatile; it loses its contents when power is no longer applied. RAM is normally implemented as either static or dynamic devices. Static memory is a type of electrical circuit that retains its data, with or without access, as long as power is supplied. It is normally built with latches or flip-flops.


Dynamic memory is built out of a special type of circuit that requires periodic memory access (every few milliseconds) to refresh and maintain the memory state. It uses a switched capacitor for the storage element. Refresh is handled by memory controllers and requires no special attention by the developer. The advantage of dynamic RAM is that it consumes much less power and space.

ROM is used for program storage because the program does not usually change after power is supplied to the instrument. A variety of technologies are used for ROM in embedded applications.

Nonvolatile memory. Some instruments are designed with special nonvolatile RAM that retains its contents after power has been removed. This is necessary for storing such information as calibration and configuration data. It can be implemented with regular RAM that has a battery backup. It can also be provided by special nonvolatile memory components, most commonly flash memory devices. Flash memory is a special type of EEPROM that uses block transfers—instead of individual bytes—and has a fairly slow (in computer terms) write time. It is therefore not useful as a general read/write memory device, but it is well suited to nonvolatile storage. Also, only a limited number of write cycles are allowed (on the order of 10,000). All embedded systems have either a ROM/RAM or a flash/RAM memory set so that the system will be able to operate the next time the power is turned on.
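The sketch below illustrates one common pattern for such nonvolatile storage: a calibration/configuration record protected by a magic number and checksum so that a corrupted or never-written record can be detected at power-up. The flash_erase_block() and flash_write() entry points stand in for whatever driver the actual flash part provides, and the record layout is invented.

    /* Sketch: storing calibration constants in flash with a checksum so
       corruption (e.g., power loss mid-write) can be detected on recall. */
    #include <stdint.h>
    #include <stddef.h>

    struct nv_config {
        uint32_t magic;       /* identifies a valid record */
        float    gain_cal;    /* calibration constants */
        float    offset_cal;
        uint32_t checksum;    /* simple additive checksum over the above */
    };

    #define NV_MAGIC 0x5A5AC0DEu

    static uint32_t checksum(const void *p, size_t n)
    {
        const uint8_t *b = p;
        uint32_t sum = 0;
        while (n--) sum += *b++;
        return sum;
    }

    /* hypothetical flash-driver entry points (part-specific in reality) */
    extern int flash_erase_block(uint32_t addr);
    extern int flash_write(uint32_t addr, const void *data, size_t len);

    int nv_save(uint32_t addr, float gain, float offset)
    {
        struct nv_config c = { NV_MAGIC, gain, offset, 0 };
        c.checksum = checksum(&c, offsetof(struct nv_config, checksum));
        if (flash_erase_block(addr) != 0)   /* flash must be erased first */
            return -1;
        return flash_write(addr, &c, sizeof c);
    }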

INSTRUMENT HARDWARE
Given that the role of these microprocessors is instrumentation (measurement, analysis, synthesis, switching, and the like), the microprocessor needs access to the actual hardware of the instrument. This instrument hardware is normally accessed by the microprocessor like other peripheral components, as registers or memory locations. Microprocessors frequently interface with the instrument's analog circuits using analog-to-digital converters (ADCs) and digital-to-analog converters (DACs).

In an analog instrument, the ADC bridges the gap between the analog domain and the digital domain. In many cases, substantial processing is performed after the input has been digitized. Increases in the capabilities of ADCs enable the analog input to be digitized closer to the front end of the instrument, allowing a greater portion of the measurement functions to occur in the embedded computer system. This has the advantages of providing greater flexibility and eliminating errors introduced by analog components. Just as ADCs are crucial to analog measuring instruments, DACs play an important role in the design of source instruments such as signal generators. They are also very powerful when used together. For example, instruments can have automatic calibration procedures in which the embedded computer adjusts an analog circuit with a DAC and measures the analog response with an ADC, as in the sketch below.
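A minimal sketch of such a DAC-adjust/ADC-measure loop follows: a successive-approximation search for the DAC code that brings the measured response to a target value, assuming the response increases monotonically with the DAC code. dac_write() and adc_read() are placeholders for the instrument's actual register accesses.

    /* Auto-calibration sketch: binary-search the 16-bit DAC code whose
       measured response best matches the target ADC reading. */
    #include <stdint.h>

    extern void     dac_write(uint16_t code);  /* hypothetical hardware access */
    extern uint16_t adc_read(void);

    uint16_t calibrate_to_target(uint16_t target)
    {
        uint16_t code = 0;
        for (int bit = 15; bit >= 0; bit--) {   /* test each bit, MSB first */
            code |= (uint16_t)(1u << bit);      /* tentatively set the bit */
            dac_write(code);
            if (adc_read() > target)            /* overshoot: clear the bit */
                code &= (uint16_t)~(1u << bit);
        }
        return code;   /* best 16-bit DAC setting for the target response */
    }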

PHYSICAL FORM OF THE EMBEDDED COMPUTER
Embedded computers in instruments take one of three different forms: a separate circuit board, a portion of a circuit board, or a single chip. In the case of a separate circuit board, the embedded computer is a board-level computer on a circuit board separate from the rest of the measurement function. An embedded computer that is a portion of a circuit board contains a microprocessor, its associated support circuitry, and some portion of the measurement functions on the same circuit board. A single-chip embedded computer can be a microcontroller, digital signal processor, or microprocessor core with almost all of the support circuitry built into the chip.

Digital Signal Processor (DSP)
A DSP is a special type of microcontroller that includes special instructions for digital signal processing, allowing it to perform certain types of mathematical operations very efficiently. These operations are primarily multiply-accumulate (MAC) functions, which are used in filter algorithms such as the one sketched below.
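The sketch below shows the multiply-accumulate pattern in its most common home, a direct-form FIR filter; each output sample is a sum of coefficient-times-sample products, exactly the operation a DSP's MAC instruction is built to execute quickly. The 5-tap length and coefficients are arbitrary.

    /* Direct-form FIR filter -- the classic MAC workload. The caller keeps
       coeff[] (from filter design) and delay[] (past samples) between calls. */
    #define NTAPS 5

    float fir(const float coeff[NTAPS], float delay[NTAPS], float x)
    {
        /* shift the delay line and insert the new sample */
        for (int i = NTAPS - 1; i > 0; i--)
            delay[i] = delay[i - 1];
        delay[0] = x;

        float acc = 0.0f;                 /* the "accumulate" of MAC */
        for (int i = 0; i < NTAPS; i++)
            acc += coeff[i] * delay[i];   /* one multiply-accumulate per tap */
        return acc;
    }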


Microprocessor Cores
These are microprocessor designs provided as segments or elements for use within custom-designed ICs. Assume an instrument designer has a portion of an ASIC that is the CPU core. The designer can then integrate much of the rest of the system, including some analog electronics, on the ASIC, creating a custom microcontroller. This approach minimizes size and power, and in very high volumes the cost can be very low. However, these chips are intended for very specific applications and are generally difficult to develop.

Architecture of the Embedded Computer Instrument
Just as an embedded computer can take a variety of physical forms, there are a number of ways to configure an embedded computer instrument. The architecture of the embedded computer has a significant impact on many aspects of the instrument, including cost, performance, ease of development, and expandability. The range of choices includes:

• Peripheral-style instruments (externally attached to a PC)
• PC plug-in instruments (circuit boards inside a PC)
• Single-processor instruments
• Multiple-processor instruments
• Embedded PC-based instruments (where the embedded computer is a PC)
• Embedded workstation-based instruments

EMBEDDED COMPUTER SYSTEM SOFTWARE
As stated earlier, the embedded computer in an instrument requires both hardware and software components. Embedded computer system software includes:

• Operating system—the software environment that the instrument applications run within
• Instrument application—the software program that performs the instrument functions on the hardware
• Support and utility software—additional software the user of the instrument requires to configure, operate, or maintain the instrument, such as software for reloading or updating system software and for saving and restoring configurations

USER INTERFACES
Originally, instruments used only panel-mounted, direct controls that were connected directly to the analog and digital circuits. As embedded computers became common, instruments began employing menu- or keypad-driven systems, in which the user input was read by the computer, which then modified the circuit operation. Today, designs have progressed to the use of graphical user interfaces (GUIs). Although some instruments are intended for automated use or are faceless (have no user interface), most need some way for the user to interact with the measurement or instrument. All of these user-interface devices can be mixed with direct control devices such as meters, switches, and potentiometers/knobs. There are a variety of design challenges in developing effective user interfaces in instruments.

EXTERNAL INTERFACES
Most instruments include external interfaces to a peripheral, to another instrument device, or to an external computer. An interface to a peripheral allows the instrument to use the external peripheral, normally for printing or plotting the measurement data. An interface to another device allows the instrument to communicate with or control another measurement device. The computer interface provides a communication channel between the embedded computer and the external computer. This allows the user to:

• Log (capture and store) measurement results
• Create complex automatic tests
• Combine instruments into systems
• Coordinate stimulus and response between instruments

The external computer accomplishes these tasks by transferring data, control, setup, and/or timing information to the embedded computer. At the core of each of these interfaces is a mechanism to send and receive a stream of data bytes.

Hardware Interface Characteristics. Each interface that has been developed has its own characteristics, quirks, and trade-offs. However, external interfaces share some common characteristics to understand and consider:

• Parallel or serial—How is information sent (a bit at a time or a byte at a time)?
• Point to point or bus/network—How many devices are connected via the external interface?
• Synchronous or asynchronous—How are the data clocked between the devices?
• Speed—What is the data rate?

Probably the most fundamental characteristic of hardware interfaces is whether they send the data stream one bit at a time (serial) or all together (parallel). Most interfaces are serial. The advantage of serial is that it reduces the number of wires to a minimum of two lines (data and ground). However, even with a serial interface, additional lines are often used (transmitted data, received data, ground, power, request to send, clear to send, and so forth). Parallel interfaces are normally 8 bit or 16 bit. Some older instruments had custom binary-coded decimal (BCD) interfaces, which usually had six sets of 4-bit BCD data lines. Parallel interfaces use additional lines for handshaking: explicit indications of data ready from the sender and ready for data from the receiver, as in the sketch below.
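The sketch below shows the idea of such a handshake in its simplest form: the sender waits for the receiver to signal ready, presents a byte, and strobes it. The gpio_* routines are placeholders for the actual line accesses; real interfaces such as Centronics or GP-IB define more elaborate timing and additional status lines.

    /* Simplified strobe/acknowledge handshake for a parallel interface.
       gpio_read_ack() is assumed to return nonzero while the receiver is
       ready to accept a byte. All gpio_* functions are hypothetical. */
    #include <stdint.h>

    extern void gpio_write_data(uint8_t byte);   /* drive the 8 data lines */
    extern void gpio_set_strobe(int level);      /* "data valid" line */
    extern int  gpio_read_ack(void);             /* "ready for data" line */

    void parallel_send(const uint8_t *buf, int len)
    {
        for (int i = 0; i < len; i++) {
            while (!gpio_read_ack())     /* wait until receiver is ready */
                ;
            gpio_write_data(buf[i]);     /* present the byte on the data lines */
            gpio_set_strobe(1);          /* tell the receiver the data is valid */
            while (gpio_read_ack())      /* wait until the receiver latches it */
                ;
            gpio_set_strobe(0);          /* complete the handshake cycle */
        }
    }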

TABLE 25.7.1 Common Instrument Software Protocols

IEEE 488.2: The IEEE 488.2 standard, Codes, Formats, Protocols and Common Commands for Use with IEEE 488.1, is a specification that defines 39 common commands and queries for instruments, the syntax for new commands and queries, and a set of protocols for how a computer and the instrument interact in various situations. Although this is a companion to the IEEE 488.1 interface, it is independent of the actual interface, though it does depend on certain interface characteristics.

SCPI: The Standard Commands for Programmable Instruments (SCPI) specification defines a common syntax and command set so that similar instruments from different vendors can be sent the same commands for common operations. It also specifies how to add new commands that are not currently covered by the standard.

TCP/IP: The Transmission Control Protocol/Internet Protocol (TCP/IP) specification is the underlying protocol used to connect devices over network hardware interfaces. The network hardware interface can be a LAN or a WAN.

FTP: The File Transfer Protocol (FTP) specification is a protocol used to request and transfer files between devices over network hardware interfaces.

HTTP: The HyperText Transfer Protocol (HTTP) specification is a protocol used to request and transfer Web (HyperText Markup Language, HTML) pages over network hardware interfaces.

VXI-11: The VXI-11 plug-and-play specification is a protocol for communicating with instruments that use the VXIbus (an instrument adaptation of the VME bus), GPIB/HP-IB, or a network hardware interface.


SOFTWARE PROTOCOL STANDARDS
The software protocol standards listed in Table 25.7.1 operate over the hardware interfaces between devices and computers. The physical layer is necessary, but not sufficient: to actually exchange information, the devices (and/or computers) require defined ways to communicate, called protocols. If a designer is defining and building two or more devices that communicate, it is possible to define special protocols (simple or complex). However, most devices need to communicate with standard peripherals and computers that have predefined protocols that must be supported. The protocols can be very complex and layered (one protocol built on top of another). This is especially true of networked or bus devices.

USING EMBEDDED COMPUTERS
Instruments normally operate in an analog world, which has characteristics, such as noise and nonlinearity of components, that introduce inaccuracies into the instrument. Instruments generally deal with these inaccurate values and try to correct them by using software in the embedded computer to provide calibration (adjusting for errors inside the instrument) and correction (adjusting for errors outside the instrument), as sketched below.
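As a minimal sketch of the correction step, the function below converts a raw ADC code to an engineering value using stored gain and offset constants, the simplest common calibration model; the structure and units are illustrative.

    /* Applying stored calibration constants to a raw reading. The linear
       model (volts = gain * code + offset) is the simplest common case;
       real instruments may add polynomial or table-driven corrections. */
    typedef struct {
        float gain;    /* volts per ADC count, determined at calibration */
        float offset;  /* volts at code zero, determined at calibration */
    } cal_t;

    float corrected_volts(const cal_t *cal, unsigned raw_code)
    {
        return cal->gain * (float)raw_code + cal->offset;
    }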

Using Instruments That Contain Embedded Computers
In the process of selecting or using an instrument with an embedded computer, a variety of common characteristics and challenges arise. This section covers some of the common aspects to consider:

• Instrument customization—What level of instrument modification or customization is needed?
• User access to the embedded computer—How much user access to the embedded computer as a general-purpose computer is needed?
• Environmental considerations—What is the instrument's physical environment?
• Longevity of instruments—How long will the instrument be in service?

User Access to an Embedded Computer
As described earlier, many instruments now use an embedded PC or workstation as an integral part of an instrumentation system. However, the question arises: "Is the PC or workstation visible to the user?" This is also related to the ability to customize or extend the system. Manufacturers realize benefits from an embedded PC because there is less software to write and it is easy to extend the system (both hardware and software). Manufacturers realize these benefits even if the PC is not visible to the end user. If the PC is visible, users often prefer an embedded PC because it is easy to extend the system, the extensions (hardware and software) are less expensive, and they don't require a separate PC.

However, there are problems in having a visible embedded PC. For the manufacturer, making it visible exposes the internal architecture. This can be a problem because competitors can more easily examine the manufacturer's technologies. Also, users can modify and customize the system, which can translate into the user overwriting all or part of the system and application software. This is a serious problem for the user, but it is also a support problem for the manufacturer. Many instrument manufacturers who have faced this choice have chosen to keep the system closed, and not visible to the user, because of the severity of the support implications.

The user or purchaser of an instrument has a choice between an instrument that contains a visible embedded PC and an instrument that is just an instrument, independent of whether it contains an embedded PC. It is worth considering how desirable access to the embedded PC is to the actual user of the instrument. The specific tasks that the user needs to perform using the embedded PC should be considered carefully. Generally, the instrument with a visible embedded PC is somewhat more expensive.
