The War of Information†

Faruk Gul and Wolfgang Pesendorfer Princeton University February 2007

Abstract

Two advocates with opposing interests provide costly information to a voter who must choose between two policies. Information flows continuously and stops when no advocate is willing to incur the cost of its provision. Equilibrium is unique if players are symmetrically informed. In this equilibrium, an advocate’s probability of winning is decreasing in his cost of information provision. For the voter, the advocates’ actions are complements and, as a result, the voter may benefit from increasing the low-cost advocate’s information provision cost with a tax. We also consider the case of an informed and an uninformed advocate and show that in this case a signaling barrier emerges: a belief threshold at which unfavorable information about the informed advocate has no effect on voter beliefs.



† Financial support from the National Science Foundation is gratefully acknowledged.

1. Introduction

A new policy is proposed by a political party, for example, a new plan to organize health care. Interest groups in favor of and opposed to the plan provide information to convince voters of their respective positions. This process continues until polling data suggest that voters decisively favor or oppose the new policy. The health care debate during the Clinton administration and the debate on social security during the Bush administration are prominent examples of this pattern. In this paper, we analyze a model of competitive advocacy that captures salient features of such political campaigns.

Our model assumes that advocates provide hard and unbiased information. Our analysis focuses on the trade-off between the cost of information provision and the probability of convincing the median voter. Advocates in actual campaigns often distort facts or at a minimum try to present them in the most favorable light. Voters are aware of these tendencies and presumably try to correct for them. Our analysis abstracts from these issues and instead focuses on the strategic interaction between two competing advocates who have no choice but to provide unbiased information.

The underlying uncertainty is about the preferences of the median voter. Specifically, we assume that there are two states, one in which the median voter prefers advocate 1’s policy and one in which he prefers advocate 2’s policy. We first analyze the case of symmetric information. All players are uncertain about the voter’s preferences and learn as information about the policies is revealed. Hence, we assume that the updated beliefs of the median voter are observable to all players. Underlying this assumption is the idea that advocates take frequent opinion polls that inform them of the median voter’s beliefs. Alternatively, players may learn about an objective state that determines the preference of the median voter. We model the flow of information as a continuous-time process.
As long as one of the advocates provides information, all players observe an informative signal that takes the form of a Brownian motion with unit variance and a state-dependent drift. Advocates must bear the cost of information provision and the game stops when no advocate is willing to incur that cost. At that point, the median voter picks his preferred policy based on his posterior beliefs. We refer to this game as the “war of information.”

The war of information differs from a war of attrition in two ways. First, in a war of information, players can temporarily quit providing information (when they are ahead) and resume at a later date. In a war of attrition both players must bear a cost for the duration of the game. Second, the resources spent during a war of information generate a payoff-relevant signal. If the signal were uninformative and both players incurred the cost for the duration of the game then the war of information would reduce to a war of attrition with a public randomization device.

We show that the war of information has a unique subgame perfect equilibrium. In that equilibrium each advocate chooses a belief threshold and stops providing information if the posterior belief (of the median voter) is less favorable (from the perspective of the advocate) than the belief threshold. Let pt denote the posterior belief that advocate 1 offers the better policy. The voter prefers advocate 1’s policy if pt > 1/2 and 2’s policy if pt ≤ 1/2. Then, there are belief thresholds p1 < 1/2 < p2 such that advocate 1 provides information if pt ∈ [p1, 1/2] and advocate 2 provides information if pt ∈ [1/2, p2]. The game ends at time t if pt = p1 or pt = p2. In the latter case, the voter chooses policy 1 (advocate 1 wins) and in the former case the voter chooses policy 2.

The equilibrium belief thresholds can be determined as the equilibrium outcomes of a simple static game, the simple war of information. The simple war of information is formally equivalent to a Cournot duopoly game with a unique Nash equilibrium. Viewed as a game between two advocates, the simple war of information is a game of strategic substitutes. A higher cost of advocate i implies a less aggressive threshold of i and a more aggressive threshold of j. An advocate with a low cost of information provision is more likely to win the political contest for two reasons.
First, the lower cost will imply that the advocate chooses a more aggressive belief threshold. Second, the advocate’s opponent will choose a less aggressive belief threshold. Both of these effects increase the probability that the low-cost advocate wins the campaign.

The voter’s equilibrium utility depends on the informativeness of the campaign. A very informative campaign allows the voter to correctly identify the state while an uninformative campaign forces the voter to make a decision with little information. From the voter’s perspective the thresholds of advocates are complements: a more aggressive advocate raises the voter’s marginal benefit from a more aggressive threshold of his opponent. Hence, voters are best served by campaigns that are balanced. If one advocate has a very high cost of information provision then the fact that the other advocate has a very low cost is of little benefit to the voter.

Next, we examine the effect of raising the costs of advocates on voter utility. In our model, the cost represents the advocates’ flow cost of providing information. For example, this could be the cost of raising funds during an election campaign. Election laws in the US limit the amount of money an individual donor can give to a campaign. We can interpret this as raising the cost of campaigning. Consider the case of two advocates with equally large groups of supporters. If the supporters of advocate 1 are wealthier than the supporters of advocate 2 then advocate 1 has a lower cost of campaigning. Moreover, limitations on the maximum donation will disproportionately affect advocate 1. Hence, we can interpret US campaign finance regulations as raising the cost of the low-cost advocate.

Propositions 3 and 4 ask under what conditions raising advocates’ costs can be beneficial for the voter. Raising the cost of advocate 1 raises the threshold of both advocates. Advocate 1 becomes less aggressive (p1 moves closer to 1/2) and advocate 2 becomes more aggressive (p2 moves away from 1/2). For a fixed cost of advocate 2 there is a threshold such that when 1’s cost is below the threshold it is beneficial to raise his cost while if his cost is above the threshold then it is harmful to raise his cost. It is never beneficial to raise the cost of the high-cost advocate and so this threshold is below advocate 2’s cost. Hence, when costs are sufficiently asymmetric, a tax on the low-cost advocate raises the utility of the voter. It may not always be feasible to discriminate between advocates. Proposition 5 asks whether a tax on both advocates can be beneficial.

We show that when the asymmetry between candidates is sufficiently large then raising the cost of both advocates is beneficial for the voter. These results provide a rationale for regulations that limit campaign spending even when campaigns offer undistorted hard information that is useful to voters. In particular, such regulations can raise the utility of voters when there is an asymmetry between advocates in their cost of information provision.

The war of information can also be used to analyze court proceedings. In this interpretation, player 3 is a judge. Advocates try to convince the judge to rule in their favor. During a trial, advocates expend resources to generate information. In this context, we may interpret the judge’s decision rule as the burden of proof for one of the advocates. We interpret the burden of proof in a trial as a policy variable that can be chosen ex ante. We ask how the burden of proof should be chosen if the objective is to maximize the informativeness of a trial for a symmetric objective function. Suppose the payoff is 1 if the advocate with the correct position wins and zero otherwise. The burden of proof specifies a threshold γ such that advocate 1 wins if the judge’s posterior at the end of the trial is above γ and advocate 2 wins if this posterior is below γ. We characterize the optimal burden of proof and show that it must favor the high-cost advocate. Moreover, if ex ante both advocates are equally likely to be correct then at the optimal burden of proof the high-cost candidate is more likely to win. Hence, the optimal burden of proof more than offsets the disadvantage of the high-cost candidate.

Section 5 considers the war of information with asymmetric information where advocate 1 knows the state but advocate 2 and the voter do not. For example, player 1 may advocate the adoption of a new technology whose safety is a concern to voters. If advocate 1 is of type 0 then the technology is unsafe (and voters prefer not to adopt it) and if advocate 1 is of type 1 then the technology is safe and voters prefer to adopt it. We first analyze the model under the assumption that only the informed advocate provides information. This corresponds to the situation where advocate 2 has infinite costs. In the game with asymmetric information the actions of the (informed) advocate may also signal his type. As is typical in signaling games, this leads to a multiplicity of equilibria.
We assume that the voter’s beliefs about the advocate’s type are monotone in the sense that providing information cannot be a signal of type 0 (the unsafe technology). This rules out equilibria where the decision to quit (by both types) is supported by the off-equilibrium belief that if information is provided the technology is unsafe. We show that there is a unique equilibrium with monotone beliefs. In that equilibrium the informed advocate who knows the technology is safe (type 1) never quits. The advocate who knows the technology is unsafe provides information above a belief threshold p1. When the belief reaches the threshold p1, type 0 mixes. He drops out at a rate that ensures that further evidence of an unsafe technology is exactly compensated by the probability of dropping out. Hence p1 acts as a signaling barrier, i.e., beliefs can never drop below p1. A consequence of the signaling barrier is that even after a large amount of “hard” information is revealed suggesting that the technology is unsafe, the voter’s posterior must remain above the signaling threshold.

The signaling barrier implies an asymmetric response of voters to information. Information unfavorable to the informed advocate is discounted while favorable information is not. Therefore, the voter may choose in favor of the informed advocate even if in a non-strategic situation he would not; that is, even if the total public information suggests that the advocate holds the incorrect position. When both advocates provide information, there is an analogous equilibrium with a signaling barrier on the side of the informed advocate. We show that in the model with asymmetric information, the voter does not benefit from an increase in the advocates’ costs: an increase in the cost of the informed advocate does not affect the voter’s utility while an increase in the cost of the uninformed advocate reduces the voter’s utility.

1.1 Related Literature

The war of information is similar in structure to models of contests (Dixit (1987)) and rent-seeking games (Tullock (1980)). The key difference is that in a war of information the resources are spent to generate decision-relevant information for the voter/judge. The literature on strategic experimentation (Harris and Bolton (1999, 2000), Cripps, Keller and Rady (2005)) analyzes situations where agents must incur costs to learn the true state but can also learn from the behavior of others. This leads to a free-riding problem that is the focus of this literature. The information structure in our paper is similar to Harris and Bolton (1999); the signal takes the form of a Brownian motion with unknown drift.1 However, the incentive problem analyzed in the war of information differs. Advocates in the war of information would like to deter opponents from providing information and

1 See also Moscarini and Smith (2001) for an analysis of the optimal level of experimentation in a decision problem where information is modelled as a Brownian motion with unknown drift.


therefore benefit from a low cost beyond the direct cost saving. In a model of strategic experimentation, agents have an incentive to free-ride on other players and therefore would like to encourage opponents to provide information.

Yilankaya (2002) provides a model of evidence production and an analysis of the optimal burden of proof. The model assumes an informed defendant, an uninformed prosecutor and an uninformed judge. This corresponds to our setting with asymmetric information. Yilankaya’s model is static; that is, advocates commit to a fixed expenditure at the beginning of the game. Yilankaya explores the trade-off between an increased burden of proof and increased penalties for convicted defendants. He shows that an increased penalty may lead to larger errors, i.e., a larger probability of convicting innocent defendants or acquitting guilty defendants. In our model, an increased penalty for a convicted defendant is equivalent to a lower cost of information provision for the defendant. If the defendant is informed then in our case this has no effect on the probability of convicting an innocent defendant or acquitting a guilty defendant.

2. The Simple War of Information

The war of information is a three-person, continuous-time game. We refer to players 1 and 2 as advocates and player 3 as the voter. Nature endows one of the two advocates with the correct position. Then, the advocates decide whether or not to provide information about their positions. Once the flow of information stops, the voter makes a decision in favor of one of the two advocates. The voter’s payoff is 1 if he chooses the advocate with the correct position and 0 otherwise. An advocate receives a payoff of 1 if his policy is chosen by the voter and 0 otherwise. Furthermore, advocate i incurs a flow cost ki/4 while providing information.

Let pt denote the probability that the voter assigns at time t to advocate 1 having the correct position and let T denote the time at which the flow of information stops. Note that it is optimal for the voter to decide in player 1’s favor if and only if pT ≥ 1/2. The functions Ii : [0, 1] → [0, 1], i = 1, 2 are defined as I1(x) = x and I2(x) = 1 − x. We say that player i = 1, 2 is trailing at time t if

Ii(pt) < 1/2                (1)

Hence, player 1 is trailing if pt < 1/2 and player 2 is trailing if pt > 1/2. We assume that only the player who is trailing at time t provides information. We discuss this assumption in detail at the end of section 3. The equilibria analyzed below remain equilibria when players are allowed to provide information when ahead. Hence, the game stops whenever the trailing player quits. We say that the game is running at time t if at no τ ≤ t a trailing player has quit. As long as the game is running, all three players observe the process X where

Xt = µt + Zt                (2)

and Z is a Wiener process. Hence, X is a Brownian motion with drift µ and variance 1. We set X0 = 0 and assume that all three players do not know µ and assign probability 1/2 to each of the two outcomes µ = 1/2 and µ = −1/2. The first of these outcomes is identified with advocate 1 holding the correct position, while µ = −1/2 means that advocate 2 holds the correct position. Let

p(x) = 1/(1 + e^(−x))                (3)

for all x ∈ IR; for x = −∞, we set p(x) = 0 and for x = ∞, we set p(x) = 1. A straightforward application of Bayes’ Law yields pt := Prob{µ = 1/2 | Xt} = p(Xt) and therefore, i is trailing if and only if

(−1)^(i−1) Xt < 0                (4)
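As a quick consistency check on (3), the logistic posterior p(Xt) can be compared with a direct Bayes computation from the two Gaussian likelihoods of Xt. The sketch below is illustrative only (the function names are ours, not the paper’s); it confirms that the elapsed time t drops out of the posterior because the two drifts have equal magnitude:

```python
import math

def posterior_direct(x, t, mu_hi=0.5, mu_lo=-0.5, prior=0.5):
    """Posterior that mu = mu_hi after observing X_t = x, via Gaussian densities."""
    def density(mu):
        # X_t | mu ~ Normal(mu * t, t): Brownian motion with unit variance
        return math.exp(-(x - mu * t) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)
    num = prior * density(mu_hi)
    return num / (num + (1 - prior) * density(mu_lo))

def posterior_logistic(x):
    """Equation (3) evaluated at the current signal level."""
    return 1.0 / (1.0 + math.exp(-x))

# agreement for any elapsed time t, since mu_hi**2 == mu_lo**2
for x, t in [(0.3, 1.0), (-1.2, 5.0), (2.0, 0.4)]:
    assert abs(posterior_direct(x, t) - posterior_logistic(x)) < 1e-12
```

The likelihood ratio reduces to e^x, which is why the posterior depends on the signal only through its current level Xt.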

Hence, by incurring the cost of providing information, the trailing advocate gets a chance to catch up. In this section, we restrict both advocates to stationary, pure strategies. We call the resulting game the simple war of information. In the next section, we will show that this restriction is without loss of generality. A stationary pure strategy for player 1 is a real number y1 < 0 (y1 = −∞ is allowed) such that player 1 quits providing information as soon as X reaches y1. That is, player 1 provides information as long as Xt > y1 and stops at inf{t | Xt = y1}. Similarly, a stationary pure strategy for player 2 is an extended real number y2 > 0 such that player 2 provides information if and only if 0 < Xt < y2 and stops as soon as Xt = y2. Let

T = inf{t > 0 | Xt − yi = 0 for some i = 1, 2}                (5)

if {t | Xt = yi for some i = 1, 2} ≠ ∅ and T = ∞ otherwise. Observe that the game runs until time T. At time T < ∞, player 3 rules in favor of player i if and only if XT = yj for j ≠ i. If T = ∞, we let pT = 1 and assume that both players win.2 Let y = (y1, y2) and let v(y) denote the probability that player 1 wins given the strategy profile y; that is,

v(y) = Prob{pT > 1/2}

More generally, the probability of player i winning is:

vi(y) = Ii(v(y))                (6)

To compute the advocates’ cost associated with the strategy profile y, define C : [0, 1] → {0, 1} such that C(s) = 1 if s < 1/2 and C(s) = 0 otherwise. Then, the expected information cost of player i given the strategy profile y is

ci(y) = (ki/4) E[ ∫_0^T Ii(C(pt)) dt ]                (7)

Note that the expectation is taken both over the possible realizations of µ and the possible realizations of Z. Then, the advocates’ expected utilities are

Ui(y) = vi(y) − ci(y)                (8)

while the voter’s expected utility is

U3(y) = E[max{pT , 1 − pT}]                (9)

2 This specification of payoffs for T = ∞ has no effect on the equilibrium outcome since staying in the game forever is not a best response to any opponent strategy for any probability of winning at T = ∞. We chose this particular specification to simplify the notation and exposition.

It is more convenient to describe the behavior and payoffs of the advocates as functions of the following transformations of their strategies. Let

αi = (−1)^(i−1) (1 − 2p(yi))

Hence, α1 = 1 − 2p(y1) ∈ [0, 1] and α2 = 2p(y2) − 1 ∈ [0, 1]. For both players, higher values of αi indicate a greater willingness to bear the cost of information provision. If αi is close to 0, then player i is not willing to provide much information; he quits while the posterior is still close to 1/2. Conversely, if αi = 1, then player i does not quit no matter how far behind he is (i.e., y1 = −∞ or y2 = ∞). Without risk of confusion, we write Ui(α), where α = (α1, α2) ∈ (0, 1]², in place of Ui(y). Lemma 1 below describes the payoffs associated with a stationary, pure strategy profile given optimal behavior of the voter:

Lemma 1: For any α = (α1, α2), the payoffs for the three players are as follows:

Ui(α) = (αi/(α1 + α2)) (1 − ki αj ln((1 + αi)/(1 − αi)))
U3(α) = 1/2 + α1α2/(α1 + α2)

where i, j ∈ {1, 2}, j ≠ i. If αi = 1, then Ui(α) = −∞.

Lemma 2 below utilizes Lemma 1 to establish that the best response of player i to αj is well-defined, single-valued, and differentiable. Furthermore, the simple war of information is dominance solvable. In section 3, we use this last fact to show that the war of information has a unique subgame perfect Nash equilibrium even if nonstationary strategies are permitted. The function B1 : (0, 1] → (0, 1] is advocate 1’s best response function if U1(B1(α2), α2) > U1(α1, α2) for all α2 ∈ (0, 1] and all α1 ≠ B1(α2). Advocate 2’s best response function is defined in an analogous manner. Lemma 2 below establishes that best response functions are well-defined. Then, α1 is a Nash equilibrium strategy for advocate 1 if and only if it is a fixed-point of the mapping φ defined by φ(α1) = B1(B2(α1)). Lemma 2 below ensures that φ has a unique fixed-point.
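Taking the payoff formula of Lemma 1 at face value, the equilibrium of the simple war of information can be approximated numerically by iterating φ = B1 ◦ B2. The sketch below is our own illustration (the grid-search best response and the iteration count are numerical choices, not the paper’s method); it also reproduces the comparative statics of Proposition 1(ii):

```python
import math

def U(a_i, a_j, k_i):
    """Lemma 1 payoff of advocate i at thresholds (a_i, a_j) with cost k_i."""
    return (a_i / (a_i + a_j)) * (
        1 - k_i * a_j * math.log((1 + a_i) / (1 - a_i)))

def best_response(a_j, k_i, grid=4000):
    # crude grid search over (0, 1); Lemma 2 says the maximizer is unique
    return max((m / grid for m in range(1, grid)),
               key=lambda a: U(a, a_j, k_i))

def equilibrium(k1, k2, iters=30):
    a1 = a2 = 0.5
    for _ in range(iters):           # iterate phi = B1 o B2 to its fixed point
        a2 = best_response(a1, k2)
        a1 = best_response(a2, k1)
    return a1, a2

a1, a2 = equilibrium(1.0, 1.0)       # symmetric costs
b1, b2 = equilibrium(0.5, 1.0)       # advocate 1 is cheaper
assert abs(a1 - a2) < 1e-2           # symmetric equilibrium
assert b1 > a1 and b2 < a2           # lower k1: 1 more aggressive, 2 less
```

The monotone convergence of the iteration reflects dominance solvability: each round of best responses tightens the set of surviving strategies.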

Lemma 2: There exist differentiable, strictly decreasing best response functions Bi : (0, 1] → (0, 1] for both advocates. Furthermore, if α1 ∈ (0, 1) is a fixed-point of φ, then 0 < φ′(α1) < 1.

Using Lemma 2, Proposition 1(i) below establishes that the simple war of information has a unique equilibrium. Proposition 1(ii) shows that as the cost of player i decreases, he becomes more aggressive while his opponent becomes less aggressive. The equilibrium strategy of player i converges to 0 as the cost of that player goes to infinity and converges to 1 as the cost goes to zero. It follows that any strategy profile α ∈ (0, 1)² can be attained for appropriate costs (k1, k2).

Proposition 1:

(i) The simple war of information has a unique Nash equilibrium.
(ii) The equilibrium strategy αi is strictly decreasing in ki and strictly increasing in kj.
(iii) For every α ∈ (0, 1)² there exist (k1, k2) such that α is the equilibrium of the simple war of information with cost (k1, k2).

We have assumed that the states have equal prior probability. To capture a situation with an arbitrary prior π, we can choose the initial state X0 = x so that p(x) = π. The equilibrium strategies are unaffected by the choice of the initial state and hence if (α1, α2) is the equilibrium for X0 = 0 then (α1, α2) is also an equilibrium for X0 = x. If the initial prior is not equal to 1/2 (the threshold for player 3) then one of the advocates may quit immediately. In particular, let α = (α1, α2) denote the equilibrium strategies of the game. If

π ≤ (1 − α1)/2

then player 1 gives up immediately and the payoff of player 3 is 1 − π > 1/2. Similarly, if

π ≥ (1 + α2)/2

then player 2 gives up immediately and the payoff of player 3 is π > 1/2.

To determine the value of the campaign for player 3, assume π ∈ [1/2, (1 + α2)/2]. Without

the campaign, the payoff of the voter is π and hence the value of the campaign W is

W = U3 − π = Pr(1 wins) (1 + α2)/2 + Pr(2 wins) (1 + α1)/2 − π

Note that player 1 wins if p(XT) = (1 + α2)/2 and player 2 wins if p(XT) = (1 − α1)/2. Since p(Xt) is a martingale it follows that

Pr(1 wins) (1 + α2)/2 + (1 − Pr(1 wins)) (1 − α1)/2 = π

Substituting for the probability that 1 wins yields

W = α1α2/(α1 + α2) + (1 − 2π) α1/(α1 + α2)
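Because p(Xt) is a bounded martingale, the identity above pins down Pr(1 wins) = (2π − 1 + α1)/(α1 + α2). A Monte Carlo sketch can confirm this hitting probability; the step size, path count, seed, and tolerance below are our own illustrative choices:

```python
import numpy as np

def simulate_win_prob(alpha1, alpha2, pi=0.5, n_paths=10000, dt=0.005, seed=0):
    """Estimate Pr(advocate 1 wins) by simulating X until a threshold is hit."""
    rng = np.random.default_rng(seed)
    lo = np.log((1 - alpha1) / (1 + alpha1))   # x where p(x) = (1 - alpha1)/2
    hi = np.log((1 + alpha2) / (1 - alpha2))   # x where p(x) = (1 + alpha2)/2
    x0 = np.log(pi / (1 - pi))                 # x where p(x) = pi
    mu = np.where(rng.random(n_paths) < pi, 0.5, -0.5)  # draw the true state
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    won = np.zeros(n_paths, dtype=bool)
    steps = 0
    while alive.any() and steps < 200000:      # cap guards against stragglers
        n = int(alive.sum())
        x[alive] += mu[alive] * dt + np.sqrt(dt) * rng.standard_normal(n)
        hit_hi = alive & (x >= hi)
        won |= hit_hi
        alive &= ~(hit_hi | (x <= lo))
        steps += 1
    return won.mean()

a1, a2, pi = 0.3, 0.6, 0.5
est = simulate_win_prob(a1, a2, pi)
exact = (2 * pi - 1 + a1) / (a1 + a2)    # = alpha1/(alpha1 + alpha2) at pi = 1/2
assert abs(est - exact) < 0.03           # coarse check: MC noise + overshoot bias
```

Discretization slightly overshoots the barriers, so the tolerance is deliberately loose; the estimate nonetheless lands close to the martingale prediction of 1/3.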

This expression illustrates the complementary value of the advocates’ actions for voters. If one advocate gives up very quickly (αi = 0) then campaigns have no social value. This is true even if the campaign is informative, i.e., even if pT ≠ π. The fact that the actions of advocates have complementary value for the voter suggests that voters prefer “balanced” campaigns where the costs of candidates are not too dissimilar. Our next results confirm this intuition.

Call i the better advocate if ki < kj for j ≠ i and call i the worse advocate if ki > kj for j ≠ i. Proposition 2 shows that an increase in the cost of the worse advocate always reduces voter utility. By contrast, an increase in the cost of the better advocate increases voter utility provided the asymmetry between the two advocates is sufficiently large. For the remainder of this section, we assume that π = 1/2 and let U3∗(k1, k2) denote the equilibrium payoff of player 3 if the costs are (k1, k2). Proposition 2 shows that for each cost r there is a threshold f(r) < r such that if the cost of the worse advocate is r and the cost of the better advocate is less than f(r) then taxing the better advocate benefits the voter. Conversely, if the cost of the better advocate is between f(r) and r then taxing the better advocate reduces voter utility. Let F denote the collection of continuous, non-decreasing, real-valued functions with the following properties: there is 0 < r̄ < ∞ such that f(r) = 0 for r ≤ r̄, f is strictly increasing for r > r̄ with f(r) < r, and f(r) → ∞ as r → ∞.

Proposition 2:

There is f ∈ F such that

(k1 − f(k2)) dU3∗(k1, k2)/dk1 < 0

whenever k1 ≠ f(k2). The threshold can also be expressed in terms of the equilibrium strategies: there is a function g with g(z) → 0 as z → 0. Figure 1 below depicts the graph of the function g.

—- Insert Figure 1 here ——

Proposition 3 shows that if α2 is below g(α1) then an increase in the cost of advocate 1 is beneficial for the voter and when α2 > g(α1) then an increase in the cost of advocate 1 is harmful for the voter.

Proposition 3: If g(α1) ≠ α2 then

(g(α1) − α2) dU3∗(k1, k2)/dk1 > 0

Propositions 2 and 3 examine the case where only the low-cost advocate is subject to a tax. In some cases, such a tax may be infeasible. The next proposition shows that even a uniform tax on information provision can be beneficial provided that the high-cost advocate has a sufficiently high cost. Let α = (α1, α2) be the equilibrium of the simple war of information with costs (k1 + t, k2 + t) and let U3∗(k1, k2, t) denote the equilibrium payoff of player 3.

Proposition 4: For every k1 there is k̄2 such that for k2 > k̄2

dU3∗(k1 + t, k2 + t)/dt |t=0 > 0

Proposition 4 follows as a corollary to Propositions 2 and 3 once we observe that the effect of a small increase in k2 is negligible compared to the effect of a small increase in k1 when k2 is large.

2.1 Arbitrary Drift and Variance

The simple war of information assumes that the drift of Xt is µ ∈ {−1/2, 1/2} and the

variance of Xt is σ² = 1. It also assumes that the prior probability that µ = 1/2 is 1/2. As we show in this section, these assumptions are normalizations that are without loss of generality.

First, we consider an arbitrary value for the variance σ² of Xt. We can rescale time so that each new unit corresponds to 1/σ² old units. Hence, the cost structure with the new time units is k∗ = σ²k, where k = (k1, k2) is the cost structure with the old time units. Note also that the variance of Xt with the new time units is 1. Hence, all of the analysis above applies after replacing k with k∗.

Next, we generalize the values for the drift but maintain the other assumptions. Let π = γ = 1/2, σ² = 1 and consider values of µ1, µ2 such that µ1 − µ2 > 0. By Bayes’ Law the conditional probability of advocate 1 holding the correct position, given Xt, is:

pt = 1/(1 + e^A(t))

where A(t) = −(µ1 − µ2)Xt + (µ1 − µ2)(µ1 + µ2)t/2. Advocate 2 trails if pt ≥ 1/2; that is, if

Xt ≥ (µ1 + µ2)t/2

More generally, player i is trailing whenever Ii(Xt − (µ1 + µ2)t/2) < 0. When µ1 + µ2 ≠ 0, the optimal strategy of the voter and hence of the advocates will be time-dependent. Suppose player i quits when Xt = Yti for Yti defined by

Yti = yi + (µ1 + µ2)t/2                (10)

for yi such that Ii(yi) > 0. Hence, the moment player i quits (i.e., Xt = Yti) we have

pt = 1/(1 + e^(−yi))

Thus, the strategies (y1, y2) described in (10) are stationary in the sense that they are time-independent functions of pt. Moreover, player i is trailing whenever Ii(Xt − (µ1 + µ2)t/2) < 0.

Hence, the winning probabilities and the expected costs for the strategy profile (y1, y2) in this game are the same as the winning probabilities and the expected costs associated with y = (y1, y2) in the simple war of information, and the analysis of section 1 applies. Combining the arguments of this subsection establishes that Propositions 1-3 generalize to the case of arbitrary µ1, µ2, and σ² provided we replace ki with σ²ki/(µ1 − µ2) for i = 1, 2. Let δ = σ²/(µ1 − µ2); hence 1/δ is the precision of the campaign. Then, we can state the payoffs associated with the profile α = (α1, α2) in the game with arbitrary σ², µ1, µ2 as follows:

Ui(α) = (αi/(α1 + α2)) (1 − ki αj δ ln((1 + αi)/(1 − αi)))
U3(α) = 1/2 + α1α2/(α1 + α2)                (11)

where i, j ∈ {1, 2}, j ≠ i. If αi = 1, then Ui(α) = −∞ for i ≠ 3. Comparing (11) with the payoffs for the simple war of information described in Lemma 1 reveals that the analysis in the previous sections extends immediately to the case of general µ1, µ2, and σ². The parameter 1/δ measures the precision of the information in the game. Proposition 5 below utilizes (11) to establish the limits as δ converges to zero or infinity. Let h : IR+ → [0, 1] be defined as

h(x) = (x − 2 + 2√(1 − x + x²))/(3x)

and note that h(1) = 1/3, h(0) = 0 and h(x) → 1 as x → ∞.

Proposition 5:

Let α = (α1, α2) be the unique equilibrium of the war of information with the cost structure (k1, k2) and precision 1/δ. Then,

(i) lim(δ→0) Uj(α) = 1/2 = lim(δ→0) U3(α) − 1/2
(ii) lim(δ→∞) Uj(α) = h(ki/kj); lim(δ→∞) U3(α) − 1/2 = 0

for i, j = 1, 2, j ≠ i.

Proposition 5 states that as the information becomes very precise, the voter always makes the correct decision and the advocates’ information costs vanish. As information becomes very imprecise, no information will be revealed - hence the voter’s payoff converges to 1/2 - but advocates receive a positive payoff that depends on the ratio of their costs. If the costs are equal then this payoff is 1/3. If one advocate has a large cost advantage then this advocate will receive a payoff of 1 (and his opponent receives a payoff of zero).
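The limit function h (as reconstructed above) can be checked numerically against the three properties stated in the text; a small illustrative sketch:

```python
import math

def h(x):
    """Limit payoff from Proposition 5, as reconstructed; x is a cost ratio."""
    return (x - 2 + 2 * math.sqrt(1 - x + x * x)) / (3 * x)

assert abs(h(1.0) - 1 / 3) < 1e-12     # equal costs: payoff 1/3
assert h(1e-6) < 1e-3                  # h(x) -> 0 as x -> 0 (h(x) ~ x/4)
assert abs(h(1e6) - 1.0) < 1e-3        # h(x) -> 1 as x -> infinity
```

The small-x behavior h(x) ≈ x/4 follows from expanding the square root, which also explains why a large cost disadvantage drives the limit payoff to zero.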

3. Nonstationary Strategies and Subgame Perfection

In this section, we relax the restriction to stationary strategies. Our objective is to show that the unique equilibrium of the simple war of information is also the unique subgame perfect equilibrium of the dynamic game. With nonstationary strategies, it is possible to have Nash equilibria that fail subgame perfection. To see this, let α̂2 = B2(1) and α̂1 = B1(α̂2), where Bi are the stationary best response functions analyzed in Section 2. Hence, α̂2 is advocate 2’s best response to an opponent who never quits and α̂1 is advocate 1’s best response to an opponent who quits at α̂2.

Define the function ai : IR → [0, 1] as ai(x) = (−1)^(i−1)(1 − 2p(x)) where p is as defined in (3). Consider the following strategy profile: α2 = α̂2 and α1 = α̂1 if a2(Xτ) < α̂2 for all τ < t, and α1 = 1 otherwise. Hence, advocate 2 plays the stationary strategy α̂2 while advocate 1 plays the strategy α̂1 along any history that does not require advocate 2 to quit. But if 2 deviates and does not quit when he is supposed to, then advocate 1 switches to the strategy of never quitting. To see why this is a Nash equilibrium, note that 1’s strategy is optimal by construction. For player 2, clearly, quitting before Xt reaches the point x at which a2(x) = α̂2 is suboptimal. Not quitting at Xt = x is also suboptimal since such a deviation triggers α1 = 1. However, the strategy profile is not subgame perfect because the prescribed behavior for 1 after a deviation by 2 is suboptimal: at any Xt such that a1(Xt) < α̂1, advocate 1 would be better off quitting.

To simplify the analysis, we will utilize a discrete version of the war of information. Advocates choose their actions and observe the stochastic process Xt only at times t ∈ {0, ∆, 2∆, . . .}. The initial state is x0, i.e., X0 = x0. We refer to t = n∆ as period n. Each period n, player i chooses αi ∈ [0, 1]. The game ends at t ∈ [(n − 1)∆, n∆] if

t = inf{τ ∈ [(n − 1)∆, n∆] | ai(Xτ) ≤ αi for some i = 1, 2}

If {τ ∈ [(n − 1)∆, n∆] | ai(Xτ) ≥ αi for some i = 1, 2} = ∅, the game continues and the players choose new αi's in period n + 1. Note that αi ≤ ai(x0) means that player i quits immediately. A pure strategy for player i in period n associates an action with every history (X0, . . . , X(n−1)∆):

Definition: A pure strategy for player i is a sequence f^i = (f^i_1, f^i_2, . . .) such that f^i_n : IR^n → [0, 1] is a measurable function for all n.

Let n∗ be the smallest integer n such that for some t ∈ [(n − 1)∆, n∆] and some i = 1, 2,

ai(Xt) ≥ f^i_n(X0, . . . , X(n−1)∆)

If n∗ = ∞, set T = ∞. If n∗ < ∞, let

T = inf{t ∈ [(n∗ − 1)∆, n∗∆] | ai(Xt) ≥ f^i_{n∗}(X0, . . . , X(n∗−1)∆) for some i = 1, 2}

The game ends at time T. Given the definition of T, the payoffs of the game are defined as in the previous section (expressions (6)-(8)). The advocates' payoffs following a history ζ = (x0, x1, . . . , xk−1) are defined as follows: let f̂ = (f̂^1, f̂^2) where f̂^i_n(x̂0, . . . , x̂n−1) = f^i_{n+k}(ζ, x̂0, . . . , x̂n−1) for all n. Hence, we refer to ζ ∈ IR^k as a subgame and let U^i_{(ζ,x̂0)}(f) = U^i_{x̂0}(f̂).

Definition: The strategy profile f = (f^1, f^2) is a subgame perfect Nash equilibrium if and only if

U^1_{(ζ,x̂0)}(f) ≥ U^1_{(ζ,x̂0)}(f̃^1, f^2)
U^2_{(ζ,x̂0)}(f) ≥ U^2_{(ζ,x̂0)}(f^1, f̃^2)

for all f̃^1, f̃^2 and every subgame (ζ, x̂0). Let E be the set of all subgame perfect Nash equilibria and let E^i be the set of all subgame perfect Nash equilibrium strategies of player i; that is,

E^i = {f^i | (f^1, f^2) ∈ E for some f^j, j ≠ i}

Let α∗ = (α∗1, α∗2) be the unique equilibrium of the simple war of information studied in the previous section. Without risk of confusion, we identify α∗i ∈ [0, 1] with the constant function f^i_n = α∗i and the stationary strategy f^i = (α∗i, α∗i, . . .). The proposition below establishes that the stationary strategy profile α∗ is the only subgame perfect Nash equilibrium of the game.

Proposition 6: The strategy profile α∗ is the unique subgame perfect Nash equilibrium of the discrete war of information.

Proof: See Appendix.

Next, we provide intuition for Proposition 6. Let ᾱi be the supremum and α̲i the infimum of the actions that player i uses in some subgame perfect equilibrium. We show in the proof of Proposition 6 that

B2(α̲1) ≥ ᾱ2 and B1(ᾱ2) ≤ α̲1

and therefore, since B1 is decreasing,

φ(α̲1) ≡ B1(B2(α̲1)) ≤ B1(ᾱ2) ≤ α̲1

The stationary equilibrium α∗1 is the unique fixed point of φ and, moreover, φ has slope less than 1 (Proposition 1). Therefore, φ(α̲1) ≤ α̲1 implies that α̲1 ≥ α∗1. A symmetric argument shows that ᾱ1 ≤ α∗1, and hence Proposition 6 follows.

3.1 Both Advocates Buy Information

Throughout, we have assumed that only the trailing advocate buys information. We made this assumption to simplify the analysis, in particular, to avoid having to specify the process of information when both advocates provide information.

A general model would allow both advocates to provide information. Note that the equilibrium of the simple war of information characterized above remains an equilibrium in a game where the leading advocate can also provide information. Given a stationary strategy of the opponent, an advocate has no incentive to provide information when he is ahead: since pt is a martingale, providing information while ahead cannot increase his probability of winning (even though it may change the speed of learning). Since information provision is costly, it follows that such a deviation can never be profitable.

The simplest extension is a model in which no additional information is generated when both advocates provide information. In that case, if players cannot observe who provides information, then it is a dominant strategy for the leading advocate not to provide information. It can be shown that even if the identity of the information provider is observable, the equilibrium of the simple war of information remains the unique subgame perfect equilibrium in this case. In the case where simultaneous purchase of information by both advocates leads to faster learning, there may be subgame perfect equilibria that are not equilibria of the simple war of information.

4. Asymmetric Standards of Proof

To this point, we have assumed that the voter adopts advocate 1's position if and only if the probability that 1 has the correct position is greater than 1/2. In this section, we relax this symmetry and consider an arbitrary threshold γ ∈ [0, 1] such that player 3 rules in favor of advocate 1 if pT > γ and in favor of advocate 2 if pT < γ. The purpose of this extension is to examine the effect of different standards of proof on information provision. Suppose players 1 and 2 are litigants and player 3 is the judge. Suppose further that the judge is committed (by law) to a particular standard of proof. Proposition 8 below characterizes the optimal γ, i.e., the optimal standard of proof.

Let W^γ denote the simple war of information with threshold (standard of proof) γ. As before, let pt denote the probability that player 3 assigns at time t to player 1 having the correct position. Player 1 is trailing if pt < γ and player 2 is trailing if pt > γ. As before, we assume that only the trailing player can provide information. A stationary strategy for player 1 is denoted z1 ∈ [0, γ] and a stationary strategy for player 2 is a z2 ∈ [γ, 1]. The interpretation is that player 1 quits when pt ≤ z1 and player 2 quits when pt ≥ z2. As we have argued in the previous section, the equilibrium of the war of information stays unchanged if we change the prior from 1/2 to π, since a different prior can be thought of as a different starting point of the stochastic process X. It is convenient to fix the starting point x0 so that

p0 = p(x0) = 1/(1 + e^{−x0}) = γ

and hence the starting point of the stochastic process pt is equal to the threshold γ. Let z = (z1, z2) and let v^γ_i(z) denote the probability that player i wins given the strategy profile z; that is,

v^γ_i(z) = (γ − zi)/(zj − zi)

for j ≠ i. To compute the advocates' costs associated with the strategy profile z, define C^γ : [0, 1] → {0, 1} such that

C^γ(s) = 1 if s < γ, and C^γ(s) = 0 otherwise

Then, the expected information cost of player i given the strategy profile z is

c^γ_i(z) = (ki/4) E[ ∫_0^T Ii(C^γ(pt)) dt ]

Then, the advocates' expected utilities are

U^γ_i(z) = v^γ_i(z) − c^γ_i(z)     (12)

while the judge's expected utility (i.e., the accuracy of the trial) is:

U^γ_3(z) = E[max{pT, 1 − pT}]     (13)
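Because pt is a martingale that stops only when it reaches z1 or z2, the win probabilities follow from optional stopping: the stopped belief must average back to p0 = γ. A minimal numerical sketch of this accounting (illustrative values; the function name is ours):

```python
def win_probs(gamma, z1, z2):
    """Win probabilities in W^gamma when player 1 quits at belief z1
    and player 2 quits at z2, with z1 < gamma < z2."""
    v1 = (gamma - z1) / (z2 - z1)  # player 1 wins when p_t reaches z2
    v2 = (z2 - gamma) / (z2 - z1)  # player 2 wins when p_t reaches z1
    return v1, v2

gamma, z1, z2 = 0.6, 0.25, 0.9
v1, v2 = win_probs(gamma, z1, z2)
assert abs(v1 + v2 - 1.0) < 1e-12
# optional stopping: the stopped belief has expectation p_0 = gamma
assert abs(v1 * z2 + v2 * z1 - gamma) < 1e-12
```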

The following proposition shows that the results for the simple war of information carry over to W^γ.

Proposition 7: The simple war of information with threshold γ has a unique equilibrium (z1, z2) with z1 < γ < z2.

Proof: See Appendix.

Consider the situation where player 3 places the same weight on both mistakes, i.e., U3 = max{pT, 1 − pT}, where T is the time at which information provision stops. Assume the threshold γ is chosen, independently of player 3's utility function, to maximize U3. In other words, player 3 commits to a threshold γ prior to the game. If player 3 commits to γ ∈ (0, 1), then advocates 1 and 2 play the game W^γ. The results below analyze how a change in γ affects player 3's payoff.

For the remainder of this section, we assume that in the initial state X0 the belief is 1/2, i.e., p0 = 1/2. Note that the equilibrium strategies characterized in Proposition 7 remain equilibrium strategies for p0 = 1/2. However, for z1 ≥ 1/2 or z2 ≤ 1/2 the game stops immediately, and in that case any strategy profile for which the game stops immediately would be an equilibrium. To avoid that trivial case, Proposition 7 assumes that at the initial state X0 we have p(X0) = γ. Note that if the game stops immediately, then the payoff of player 3 is 1/2, so the value of the information generated by the war of information is zero. Therefore, the lemma below considers the case where z1 < 1/2 < z2 and characterizes how a change in γ affects player 3's payoff in that case.

Lemma 3: Let (z1, z2) be the equilibrium of W^γ and assume that z1 < 1/2 < z2. Then, U3 is increasing in γ if

(z2 − γ)(2z2 − 1)² / (z2²(1 − z2)²) < (γ − z1)(1 − 2z1)² / (z1²(1 − z1)²)

Proof: See Appendix.

Increasing γ implies that player 1 incurs a greater share of the cost of the war of information. The optimal choice of γ depends on the costs of players 1 and 2. Assume that player 1 is the low cost advocate, i.e., k1 < k2. Proposition 2 then implies that for γ = 1/2 player 1 wins with greater probability than player 2. This follows because, in equilibrium, 1/2 − z1 > z2 − 1/2 and the win probability of advocate i is equal to

vi(z) = (1/2 − zi)/(zj − zi)

The next proposition shows that at the optimal γ it must be the case that player 2 wins with greater probability than player 1. Hence, the optimal γ implies that the high cost advocate wins with greater probability than the low cost advocate. Recall that we have assumed that π = 1/2, so that both advocates have an equal probability of holding the correct position. We denote by U∗3(γ) the equilibrium utility of player 3 if 3 commits to γ ∈ (0, 1). Let U∗3(γ) be maximal at γ∗ and let (z∗1, z∗2) be the equilibrium of W^{γ∗}.

Proposition 8: If k1 < k2 then γ∗ > 1/2 and v2(z∗) > v1(z∗).

Proof: See Appendix.

Proposition 8 shows that it is optimal to shift costs to the low cost advocate. Moreover, at the optimal threshold the low cost advocate wins with lower probability than the high cost advocate. Hence, the shift in the threshold more than compensates for the initial cost advantage.

5. Asymmetric Information

In this section, we study the war of information when players are asymmetrically informed. We assume player 1 knows the state while players 2 and 3 have the same information as in Section 2. We first analyze the case in which only advocate 1 provides information. Hence, we consider a game with one advocate who is one of two types: type 0 knows that µ = −1/2 and type 1 knows that µ = 1/2. We refer to this game as the one-sided war of information with asymmetric information.

The voter prefers action 1 if the probability of state 1 is greater than 1/2. Therefore, the game ends whenever the voter's belief reaches the threshold 1/2.³ The prior π is strictly less than 1/2.

Let (Ω^i, F^i, P^i) be probability spaces for i ∈ {0, 1}. Let X^i be a Brownian motion on (Ω^i, F^i, P^i) with mean (−1)^{i−1}/2 and variance 1, and let F^i_t be the filtration generated by X^i. Type 0 knows that the signal is X^0 and type 1 knows that the signal is X^1. A mixed strategy for type i = 0, 1 is Q^i, where Q^i_t(ω) denotes the probability that type i quits by time t given the sample point ω. The strategy Q^i_t(ω) is right-continuous and non-decreasing in t for every ω. Moreover, Q^i_t is F^i_t measurable to ensure that each type's strategy is feasible given his information.

Given a strategy profile Q = (Q^0, Q^1), we can determine the belief of the voter that the advocate's position is correct. For the one-sided war of information with asymmetric information, we say that the history (ω, t) is terminal if Q^0_t(ω) = Q^1_t(ω) = 1. At any nonterminal history,

pt(ω) = (1 − Q^1_t(ω)) / ((1 − Q^1_t(ω)) + (1 − Q^0_t(ω)) e^{−Xt(ω)})     (14)
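Equation (14) is Bayes' rule: the likelihood ratio e^{−Xt} is scaled by the survival probabilities of the two types. A small sketch (hypothetical numbers) checks two of its properties: with no quitting, the belief is the logistic transform of Xt, and quitting mass by type 0 pushes the belief up.

```python
import math

def belief(x, q0, q1=0.0):
    """Voter belief from (14): survival-weighted Bayes update at signal X_t = x."""
    return (1 - q1) / ((1 - q1) + (1 - q0) * math.exp(-x))

x = -0.8
# with no quitting, (14) reduces to the logistic belief p(x) = 1/(1+e^{-x})
assert abs(belief(x, 0.0) - 1 / (1 + math.exp(-x))) < 1e-12
# quitting mass by type 0 makes survival evidence of strength: the belief rises
assert belief(x, 0.4) > belief(x, 0.0)
```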

We say that the belief process p is consistent with the strategy profile Q if for all nonterminal (ω, t), (14) is satisfied. A belief process p is monotone if for all t > τ,

pt(ω) ≥ pτ(ω) / (pτ(ω) + (1 − pτ(ω)) e^{Xτ(ω)−Xt(ω)})     (M)

Hence, a monotone belief process has the property that "not quitting" can never be evidence of weakness (type 0). A monotone equilibrium is a perfect Bayesian Nash equilibrium consistent with monotone beliefs.

Note that the game ends if pt ≥ 1/2 or if type i quits. Define

vt(ω) = 1 if pτ(ω) ≥ 1/2 for some τ ≤ t, and vt(ω) = 0 otherwise

Then D^i_t(ω), the probability that the game ends by time t, is

D^i_t(ω) = (1 − vt(ω)) Q^i_t(ω) + vt(ω)

³ This is an implication of the assumption that only the trailing advocate can provide information.

Hence, if the game ends at time t, the payoff of the advocate is vt(ω) − t ki/4. The payoff of type i is therefore

E[ ∫_{τ=0}^∞ (vτ(ω) − τ ki/4) dD^i_τ ]

where the expectation is taken with respect to P^i. Similarly, taking the expectation above with respect to P^i(· | {Xτ(ω)}τ≤t) yields the payoff given the history {Xτ(ω)}τ≤t.

Next, we define a class of strategies Q^z indexed by the parameter z ∈ IR. Let Yt = inf_{τ≤t} Xτ and Y^z_t = min{0, Yt − z}, and let Q^z be given by Q^z_t = 1 − e^{Y^z_t}; that is, type 0 quits at a rate that keeps the voter's belief from falling below p(z).

In equilibrium, type 1 never quits and type 0 plays Q^{z∗} for some barrier z∗ < 0. As long as pt > p(z∗), beliefs are affected only by the information generated by the signal Xt. When pt = p(z∗), beliefs are also affected by the quit decision of type 0. In fact, type 0 quits at a rate that exactly offsets any negative information revealed by the signal Xt. If the advocate has not quit and Xt = x < z∗, then one of the following must be true: either he is type 1, or he is type 0 but by chance his random quitting strategy had him continue until time t. The probability of type 0 quitting by time t is 1 − e^{x−z∗}. Hence, if x is "very negative," the advocate counters the public information Xt = x with his private information. An observer who ignores the signaling component might incorrectly conclude that the voter chooses the wrong position. With positive probability, evidence that in a nonstrategic environment would indicate that the advocate holds the incorrect position (i.e., XT < 0) will result in the voter adopting the advocate's favored position. Furthermore, recent (positive) public information is given greater weight than past negative information, conditional on the advocate not having quit. Specifically, for a given Xt = x, the belief pt is decreasing in Yt := inf_{τ≤t} Xτ (conditional on the advocate not quitting at t). Hence, recent public information favoring the advocate's position more than outweighs past unfavorable public information.

Note that when π < p(z∗), type 0 must quit with strictly positive probability at time 0 and, conditional on the advocate not quitting, the voter's beliefs jump to p(z∗).

The uniqueness result in Proposition 9 relies on the fact that public information arrives continuously. To see this, first note that neither player can quit in any monotone equilibrium if pt > p(z∗). This follows from a straightforward calculation using the lower bound in inequality (M). This implies that the public signal Xt governs the beliefs pt for pt > p(z∗). Moreover, the belief pt cannot "jump above" p(z∗), because that would require player 0 to quit with positive probability at a point where he receives a strictly positive payoff. As a result, in any monotone equilibrium player 0 has a continuation payoff of zero when the beliefs reach p(z∗), and if pt were to drop below p(z∗) then player 0 would have to quit. We show that this cannot happen. First, note that there is some p̲ < p(z∗) such that player 1 cannot quit at any pt > p̲. This follows because player 1 assigns greater probability than player 0 to pt reaching 1/2. Second, note that (M) implies that pt cannot "jump down," and therefore pt < p(z∗) implies that pτ ∈ (p̲, p(z∗)) for some τ ≤ t. If pt remains in the interval (p̲, p(z∗)) for some non-negligible time interval (with positive probability), then the advocate must be player 1 and hence pt = 1, which contradicts the hypothesis that pt ∈ (p̲, p(z∗)).

The equilibrium outcomes of the one-sided war of information with private information are independent of the cost of information provision. Player 1 wins with probability 1 and, as the following proposition demonstrates, player 0 wins with probability π/(1 − π).

Proposition 10: The probability that player 0 wins in the one-sided war of information is independent of k and is given by π/(1 − π).

Proof: Note that the probability that player 1's preferred action is implemented is 1. Let v0 denote the probability that type 0 wins. Since the voter's beliefs are a martingale, and the game ends either at belief 1/2 or at belief 0 (when type 0 quits), it follows that

π = (1/2)(π + (1 − π)v0)

and therefore

v0 = π/(1 − π)

The equilibrium above can be extended to the case where there is a second advocate (as in the previous sections). As before, we refer to the informed advocate as type 1 (if µ = 1/2) or type 0 (if µ = −1/2). The uninformed advocate is player 2. Consider the measurable space (Ω², F²) where Ω² = Ω⁰ ∪ Ω¹ and

F² = {A⁰ ∪ A¹ | A^i ∈ F^i, i = 0, 1}

Since player 2 does not know µ, his beliefs about the signal are described by the probability P² on (Ω², F²) such that for all A⁰ ∈ F⁰, A¹ ∈ F¹,

P²(A⁰ ∪ A¹) = (1/2)(P⁰(A⁰) + P¹(A¹))

Define player 2's filtration on (Ω², F²) as follows:

F²_t = {A⁰ ∪ A¹ | A^i ∈ F^i_t and X⁰_τ(A⁰) = X¹_τ(A¹) for all τ ≤ t}

A mixed strategy for player 2 is denoted Q², where Q²_t(ω) denotes the probability that player 2 quits by time t given the sample point ω. The strategy Q²_t(ω) is right-continuous and non-decreasing in t for every ω. Moreover, Q²_t is F²_t measurable to ensure that player 2's strategy is feasible given his information. A mixed strategy for types 0 and 1 is as defined above. For the remainder of this section, we refer to type i ∈ {0, 1} as "player i" to simplify the exposition.

For Q = (Q⁰, Q¹, Q²), we can compute D^i_t(ω), the probability that i ∈ {0, 1, 2} assigns to the event that the game ends by time t for a particular sample point ω. Define Q^{01} = (1/2)(Q⁰ + Q¹) and note that, since advocates can only quit when trailing, for all ω, Q^i_t(ω) and Q²_t(ω) have no common points of discontinuity for i = 0, 1. Therefore:

D^i_t(ω) = ∫_{τ=0}^t (1 − Q^{j(i)}_τ(ω)) dQ^i_τ(ω) + ∫_{τ=0}^t (1 − Q^i_τ(ω)) dQ^{j(i)}_τ(ω)     (15)

where j(0) = j(1) = 2 and j(2) = 01. Note that the first term on the right-hand side of (15) is the probability that advocate 1 (type 0 or 1) ends the game at some time τ ≤ t. The second term is the corresponding expression for player 2. The probability that advocate i wins is

v_i(Q) = 1 − E[ ∫_{t=0}^∞ (1 − Q^{j(i)}_t(ω)) dQ^i_t(ω) ]     (16)

where the expectation is taken with respect to the probability P^i. Consistent and monotone belief processes p are defined as above. For the belief process p, define advocate i's expected cost as follows:

c_i(p, Q) = (ki/4) E[ ∫_{t=0}^∞ ( ∫_{τ=0}^t Ii(C(pτ)) dτ ) dD^i_t(ω) ]     (17)

for i = 0, 1, 2. Define the advocates' utilities as

U_i(p, Q) = v_i(Q) − c_i(p, Q)     (18)

Note that the payoff to the voter is pt if Ct(pt) = 1 and 1 − pt if Ct(pt) = 0. Hence, the expected payoff of player 3 is:

U_3(p, Q) = E[ ∫_{t=0}^∞ I_{2−Ct(ω)}(pt(ω)) dD³_t(ω) ]     (19)

Recall that Y^z_t = min{0, Yt − z}. Define the pure strategy R^{xy} as follows:

R^{xy}_t = 1 if Xt ≥ Y^x_t + y, and R^{xy}_t = 0 otherwise

and let Q(x, y) denote the strategy profile Q⁰ = Q^x, Q¹ ≡ 0, Q² = R^{xy} for some x < 0 < y. The strategy profile Q(x, y) has the following properties: there are belief thresholds p(x) < 1/2 < p(y) such that (1) type 0 does not quit for pt > p(x) and randomizes when pt = p(x); moreover, pt ≥ p(x) along every path; and (2) player 2 quits if and only if pt ≥ p(y). Let y∗ satisfy

1/(1 + e^{−y∗}) = (1 + B2(1))/2

Hence, y∗ corresponds to the optimal threshold of player 2 against an opponent who never quits. (Recall that (1 + α2)/2 is the belief threshold corresponding to the action α2 in the simple war of information.)

Lemma 4: The strategy R^{xy∗} is the unique best response to (Q⁰, Q¹) with Q⁰ = Q^x for some x < 0 and Q¹ ≡ 0.

In the proof of Lemma 4, we show that the payoff of the uninformed player (player 2) is as in the symmetric information game when player 1 never quits. Therefore, the best response of player 2 is a belief threshold that corresponds to the strategy α2 = B2(1) in the symmetric information game, i.e., p(y∗) = (1 + α2)/2.

To understand the intuition for this result, note that advocate 1 only quits if he is type 0. Since beliefs are a martingale, this implies that for every pt < 1/2 the probability that beliefs return to 1/2 is 2pt. Note that this is independent of the location of the signaling barrier p(x). One possible value for the signaling barrier is p(−∞) = 0, in which case advocate 1 never quits. For that case, the unique best response of player 2 was derived above in the analysis of the simple war of information. The value y∗ corresponds to this best response. Define x∗ to be the solution of the following maximization problem:

max_{x≤0} Π0(x, y∗) := (1 − e^{−x})/(1 − e^{y∗−x}) − 2k1 (1 − e^{−x})(1 − e^{y∗−x}(1 + x))     (∗)

Lemma 5: The maximization problem (∗) has a unique solution x∗ < 0.

Proposition 11 below shows that the strategy profile Q(x∗, y∗) is a Nash equilibrium.

Proposition 11: The strategy profile Q(x∗, y∗) is an equilibrium of the war of information with asymmetric information.

Note that since type 1 never quits, there are no histories after which beliefs are not well-defined. Hence, the equilibrium in Proposition 11 is also a sequential equilibrium. Since the informed player only quits when he is type 0, the exact location of the signaling barrier is payoff irrelevant for players 2 and 3. All that matters for those players is that player 1 never gives up unless he has the wrong position. This implies that changing the cost of player 1 has no effect on the payoffs of players 2 and 3. If player 1 has a very high cost of information provision, then the signaling barrier p̲ < 1/2 will be close to 1/2, but the information revealed by the war of information is unchanged. On the other hand, the cost of player 2 affects the payoffs of his opponents; a lower cost for player 2 always increases the payoff of player 3 and reduces the payoff of player 1.

A tax on the advocates' information provision has different implications in the asymmetric information case than in the symmetric information case. Increasing the cost of advocate 2 always reduces the voter's welfare, while a change in the cost of advocate 1 has no effect on voter welfare. From the voter's perspective, advocate 1 never gives up (unless he is incorrect) and hence, irrespective of cost, advocate 1 follows the more aggressive strategy. Hence, increasing the cost of advocate 2 will always reduce the voter's payoff. Voters might benefit if advocate 1 could be induced to behave less aggressively. However, a tax on advocate 1 cannot accomplish this goal, because type 1's equilibrium strategy is unaffected by such a tax. On the other hand, a tax on the informed advocate has the benefit of shortening the duration of the game without affecting the informativeness of the outcome.

6. Appendix

6.1 Proof of Lemma 1

Let E[C(Xt) | µ = r] be the expected cost incurred by player 1 given the strategy profile y = (y1, y2) and µ = r. (Recall that σ² = 1.) Hence, the expected delay cost of player 1 is:

E[C(Xt)] = (1/2) E[C(Xt) | µ = 1/2] + (1/2) E[C(Xt) | µ = −1/2]     (A1)

First, we will show that

E[C(Xt) | µ] = (1/(2µ²)) ((1 − e^{−2µy2})/(1 − e^{−2µ(y2−y1)})) (1 − e^{2µy1}(1 + 2µy1))     (A2)

For z1 ≤ 0 ≤ z2, let P(z1, z2) be the probability that a Brownian motion Xt with drift µ and variance 1 hits z2 before it hits z1, given that X0 = 0. Harrison (1985, p. 43) shows that

P(z1, z2) = (1 − e^{2µz1})/(1 − e^{−2µ(z2−z1)})     (A3)
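For µ ≠ 0, e^{−2µXt} is a martingale, so the hitting probability in (A3) must satisfy P·e^{−2µz2} + (1 − P)·e^{−2µz1} = e⁰ = 1. A quick numerical check of (A3) against this identity (arbitrary test values):

```python
import math

def hit_prob(z1, z2, mu):
    """(A3): probability that a BM with drift mu and variance 1, started at 0,
    hits z2 before z1 (z1 <= 0 <= z2)."""
    return (1 - math.exp(2 * mu * z1)) / (1 - math.exp(-2 * mu * (z2 - z1)))

for z1, z2, mu in [(-1.0, 2.0, 0.5), (-3.0, 0.7, -0.25), (-0.2, 0.2, 1.3)]:
    P = hit_prob(z1, z2, mu)
    assert 0 < P < 1
    # optional stopping applied to the exponential martingale e^{-2 mu X_t}
    assert abs(P * math.exp(-2 * mu * z2) + (1 - P) * math.exp(-2 * mu * z1) - 1) < 1e-9
```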

For z1 ≤ 0 ≤ z2, let T(z1, z2) be the expected time the Brownian motion with drift µ spends until it hits either z1 or z2, given that X0 = 0. Harrison (1985, p. 53) shows that

T(z1, z2) = ((z2 − z1)/µ) P(z1, z2) + z1/µ

To compute E[C(Xt) | µ], let ε ∈ (0, y2] and assume that player 1 bears the cost until Xt ∈ {y1, ε}. If Xt = ε, then player 2 bears the cost until Xt+τ ∈ {0, y2}. If Xt+τ = 0, then the process repeats, with player 1 bearing the cost until Xt+τ+τ′ ∈ {y1, ε}, and so on. Clearly, this yields an upper bound to E[C(Xt) | µ]. Let T̄ denote that upper bound and note that

T̄ = T(y1, ε) + P(y1, ε)(1 − P(−ε, y2 − ε)) T̄

Substituting for T(y1, ε) and P(y1, ε), we get

µT̄ = ( ((ε − y1)(1 − e^{2µy1}))/(1 − e^{−2µ(ε−y1)}) + y1 ) / ( 1 − ((1 − e^{2µy1})(e^{−2µε} − e^{−2µy2}))/((1 − e^{−2µ(ε−y1)})(1 − e^{−2µy2})) )

and therefore

E[C(Xt) | µ] ≤ lim_{ε→0} T̄ = (1/(2µ²)) ((1 − e^{−2µy2})/(1 − e^{−2µ(y2−y1)})) (1 − e^{2µy1}(1 + 2µy1))

Choosing ε < 0, we can compute an analogous lower bound which converges to the right-hand side of (A2) as ε → 0. This establishes (A2).

Recall that p(yi) = 1/(1 + e^{−yi}) and α1 = 1 − 2p(y1), α2 = 2p(y2) − 1. Then, (A1) and (A2) yield

E[C(Xt)] = (4α1α2/(α1 + α2)) ln((1 + α1)/(1 − α1))

Let v be the probability that player 1 wins. Since pT is a martingale and T < ∞,

v p(y2) + (1 − v) p(y1) = p0 = 1/2

Hence,

v = α1/(α1 + α2)

The last two display equations yield

U1(α) = (α1/(α1 + α2)) (1 − k1α2 ln((1 + α1)/(1 − α1)))     (A5)

A symmetric argument yields the corresponding expression for U2.
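The algebra behind (A5) is easy to check numerically: with v = α1/(α1 + α2) and E[C(Xt)] = (4α1α2/(α1 + α2)) ln((1 + α1)/(1 − α1)), the payoff v − (k1/4)E[C(Xt)] reassembles into the bracketed form of (A5). A sketch with arbitrary values:

```python
import math

def U1(a1, a2, k1):
    """(A5): advocate 1's expected utility in the simple war of information."""
    return a1 / (a1 + a2) * (1 - k1 * a2 * math.log((1 + a1) / (1 - a1)))

a1, a2, k1 = 0.4, 0.3, 1.2
v = a1 / (a1 + a2)                                            # win probability
EC = 4 * a1 * a2 / (a1 + a2) * math.log((1 + a1) / (1 - a1))  # expected delay cost
assert abs((v - k1 / 4 * EC) - U1(a1, a2, k1)) < 1e-12
```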

6.2 Proof of Lemma 2

By Lemma 1, advocate i's utility is strictly positive if and only if

αi ∈ (0, (e^{1/(kiαj)} − 1)/(e^{1/(kiαj)} + 1))

Furthermore, throughout this range, Ui(·, αj) is twice continuously differentiable and strictly concave in αi. To verify strict concavity, note that Ui can be expressed as the product of two concave functions f, g that take values in IR+, where one function is strictly increasing and the other strictly decreasing. Hence, (f · g)″ = f″g + 2f′g′ + fg″ < 0.
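The upper endpoint of this interval is exactly where the bracketed term in (A5) vanishes, i.e., where ln((1 + αi)/(1 − αi)) = 1/(kiαj). A quick check with arbitrary values:

```python
import math

def U(ai, aj, ki):
    # advocate i's utility, from (A5) and its symmetric counterpart
    return ai / (ai + aj) * (1 - ki * aj * math.log((1 + ai) / (1 - ai)))

ki, aj = 1.0, 0.3
t = 1 / (ki * aj)
upper = (math.exp(t) - 1) / (math.exp(t) + 1)  # upper end of the range in Lemma 2
assert abs(U(upper, aj, ki)) < 1e-9            # utility is zero at the endpoint
assert U(0.5 * upper, aj, ki) > 0              # strictly positive inside the range
assert U(min(0.999, 1.05 * upper), aj, ki) < 0 # negative beyond it
```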

Therefore, the first order condition characterizes the unique best response of player i to αj. Player i's first order condition is:

Ui = 2αi²ki/(1 − αi²)     (A6)

Note that (A6) implicitly defines the best response functions Bi. Equation (A6), together with the implicit function and envelope theorems, yields

dBi/dαj = (∂Ui/∂αj) · (1 − αi²)²/(4αiki)     (A7)

Equation (A5) implies

∂Ui/∂αj = −(1/(α1 + α2)) (Ui + αiki ln((1 + αi)/(1 − αi)))     (A8)

Note that (A8) implies ∂Ui/∂αj < 0. The three equations (A6), (A7), and (A8) yield

dBi/dαj = −(αi(1 − αi²)/(2(α1 + α2))) (1 + ((1 − αi²)/(2αi)) ln((1 + αi)/(1 − αi)))     (A9)

Then, using the fact that ln((1 + αi)/(1 − αi)) ≤ 2αi/(1 − αi), (A9) yields

dBi/dαj ≥ −αi(1 − αi²)(2 + αi)/(2(α1 + α2))     (A10)

Hence, since φ′ = (dB1/dα2)(dB2/dα1), we have

0 < φ′(α1) ≤ α1(1 − α1²)(2 + α1) α2(1 − α2²)(2 + α2)/(4(α1 + α2)²)     (A11)

Note that α1α2/(α1 + α2)² ≤ 1/4 and hence, by (A11), φ′(α1) ≤ 1/2 < 1 if (1 − αi²)(2 + αi) < 2√2 for i = 1, 2. It is easy to verify that the left-hand side of this inequality reaches its maximum at some αi < 1/2. At such αi, the left-hand side is no greater than 5/2 < 2√2, proving that 0 < φ′(α1) < 1.
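The contraction property just established makes the equilibrium easy to compute: iterate the best responses, each obtained by solving the first order condition (A6) for the given opponent action. The sketch below (hypothetical cost values; function names are ours) finds the fixed point by bisection and iteration, and checks both first order conditions.

```python
import math

def U(ai, aj, ki):
    # advocate i's utility, from (A5) and its symmetric counterpart
    return ai / (ai + aj) * (1 - ki * aj * math.log((1 + ai) / (1 - ai)))

def best_response(aj, ki):
    """Solve the FOC (A6): U_i = 2 a_i^2 k_i / (1 - a_i^2), by bisection."""
    f = lambda a: U(a, aj, ki) - 2 * a * a * ki / (1 - a * a)
    lo, hi = 1e-9, 1 - 1e-9          # f > 0 near 0, f < 0 near 1
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k1, k2 = 1.0, 2.0
a1, a2 = 0.5, 0.5
for _ in range(100):                 # phi is a contraction (0 < phi' < 1)
    a1 = best_response(a2, k1)
    a2 = best_response(a1, k2)

# both first order conditions hold at the fixed point
for ai, aj, ki in ((a1, a2, k1), (a2, a1, k2)):
    assert abs(U(ai, aj, ki) - 2 * ai * ai * ki / (1 - ai * ai)) < 1e-6
assert a1 > a2   # the low-cost advocate is the more patient one (Proposition 2)
```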

6.3 Proof of Proposition 1

Part (i): By Lemma 2, the Bi are decreasing, continuous functions. It is easy to see that Bi(1) > 0 and lim_{r→0} Bi(r) = 1/√(1 + 2ki) (note that Ui → 1 as αj → 0 for j ≠ i). Hence, we can continuously extend Bi, and hence φ, to the compact interval [0, 1], so that φ must have a fixed point. Since Bi(αj) ∈ (0, 1) for every αj, neither 0 nor 1 is a fixed point; hence, every fixed point of φ must be in the interior of [0, 1].

Let r be the infimum of all fixed points of φ. Clearly, r itself is a fixed point and hence r ∈ (0, 1). Since φ′(r) < 1, there exists ε > 0 such that φ(s) < s for all s ∈ (r, r + ε). Let s∗ = inf{s ∈ (r, 1) | φ(s) = s}. If the latter set is nonempty, s∗ is well-defined, a fixed point of φ, and not equal to r. Since φ(s) < s for all s ∈ (r, s∗), we must have φ′(s∗) ≥ 1, contradicting Lemma 2. Hence, {s ∈ (r, 1) | φ(s) = s} = ∅, proving that r is the unique fixed point of φ and hence the unique equilibrium of the simple war of information.

Part (ii): Consider advocate 1's best response as a function of both α2 and k1. The analysis in Lemma 2 ensures that B1 : (0, 1] × IR+\{0} → (0, 1] is differentiable. Hence, the unique equilibrium of the simple war of information is characterized by

B1(B2(α1), k1) = α1

Taking a total derivative and rearranging terms yields

dα1/dk1 = (∂B1/∂k1) / (1 − dφ/dα1)

where dφ/dα1 = (∂B1/∂α2) · (dB2/dα1). By Lemma 2, φ′ < 1. Taking a total derivative of (A6) (for fixed α2) establishes that ∂B1/∂k1 < 0, and hence dα1/dk1 < 0, as desired. Then, note that k1 does not appear in (A6) for player 2. Hence, a change in k1 affects α2 only through its effect on α1, and therefore

dα2/dk1 = (dB2/dα1) · (dα1/dk1) > 0     (A12)

By symmetry, we also have dα2/dk2 < 0 and dα1/dk2 > 0.

Part (iii): By (ii), as ki goes to 0, the left-hand side of (A6) is bounded away from 0. Hence, 2αi²/(1 − αi²) must go to infinity and therefore αi must go to 1. Since Ui ≤ 1, it follows from (A6) that ki → ∞ implies αi → 0. Fix (α1, α2) and note that Bi(αj, ·) is a continuous function; hence, by the above argument, there is ki such that Bi(αj, ki) = αi.

Since α2 = B2 (α1 ), (A9) and (A12) imply

dU3 dk1

< 0 if and only if



 α1 (1 − α22 )2 1 + α2 α2 2 − ln >0 · 1 − α2 + α1 2(α1 + α2 ) 2α2 1 − α2

(A13)

Define g : (0, 1] → (0, 1] by g(α1 ) = α2 where  

α2 α1 (1 − α22 )2 1 + α2 2 − ln · 1 − α2 + =0 α1 2(α1 + α2 ) 2α2 1 − α2 Proof of Proposition 3: First, note that g is well-defined. For any fixed α1 the right had side of (A13) is negative for α2 sufficiently close to zero and strictly positive for α2 = α1 . Note that

α1 2(α1 +α2 ) ,

1 − α22 , and the last term inside the square bracket are all decreasing

in α2 . Hence g is well defined. Note also that the right hand side of (A13) is decreasing in α1 . Hence, g must be increasing. Since the term in the brackets adds up to less than 1 it follows that g(z) < z. Setting ˆ 2 such that the left hand side of (A13) is zero. By the monotonicity of the α1 = 1, define α ˆ2. right hand side of (A13) in α1 it follows that g ≤ α Proof of Proposition 2 By part (i) of Lemma 2, the first term on the left-hand side of (A13) is increasing in k1 . Similarly,

α1 2(α1 +α2 ) ,

1 − α22 , and the last term inside the square

bracket are all decreasing in k1 . Furthermore, the terms inside the square bracket add up to a quantity between 0 and 1. Hence, g(z) < z and α1 ≤ α2 for k1 ≥ k2 . Next, note that as k1 goes to 0, α1 goes to 1 (by Proposition 1(iii) above). Setting ˆ 2 such that the left hand side of (A13) is zero. Let r¯ be such that α1 = 1, define α ˆ 2 . By the monotonicity of the right hand side of (A13) in α1 it follows that B2 (1) = α 34

(A13) is greater than zero for all α2 > α ˆ 2 . Conversely, for α2 < α ˆ 2 the proof of Proposition 3 above implies that there is a unique α1 ∈ (0, 1) such that the right hand side of (A13) is zero. It follows that there is f (k2 ) > 0 such that at k1 = f (k2 ),

dU3 dk1

= 0. Clearly, there

can be at most one such f (k2 ). The monotonicity of g and the monotonicity of αi in ki implies that

dU3 dk1

< 0 for k1 > f (k2 ),

dU3 dk1

> 0 for k1 < f (k2 ), and

dU3 dk1

= 0 at f (k2 ).

Since f (k2 ) is well-defined, since (α1 , α2 ) are continuous functions of (k1 , k2 ), and since g is continuous, the function f is also continuous. That f is strictly increasing follows from the strict monotonicity of g. That f → ∞ as r → ∞ follows from the fact that for every α1 > 0 the right hand side of (A13) is strictly negative for α2 sufficiently small. Proof of Proposition 4: Note that since α1 is increasing in k2 it follows that  α22 α12 dα2 dα2 dα1 dU3∗ ≥ + + dt t=0 (α1 + α2 )2 dk1 (α1 + α2 )2 dk1 dk2 From Proposition 4 we know that α22 α12 dα1 dα2 dU3∗ ≥ + >0 dt t=0 (α1 + α2 )2 dk1 (α1 + α2 )2 dk1 for k2 sufficiently large. Since that

dα1 dk1

is bounded away from zero for all k2 it suffices to show 

as k2 → ∞ and since to show that

dα2 dk1

=

dα2 dα1 dα1 dk1



dα2 dk2

 dα2 / →0 dk1

with dα2 dk2

dα1 dk1

bounded away from zero for all k2 it suffices

 dα2 / →0 dα1

Recall that the first order condition is 1 + α2 2α22 k2 α2 (1 − k2 α1 ln )= α1 + α2 1 − α2 1 − α22 and therefore: 1+α2 1+α2  dα  dα + α22 α1 ln 1−α −2α2 (α1 + α2 ) − α1 ln 1−α 2 2 2 2 / = (α2 + α1 ) 1+α2 2 dk2 dα1 (1 − α2 )(k2 α2 ln( 1−α + 1) 2

35

Note that α2 → 0 as k2 → ∞ and hence the right hand side of the above expression goes to zero as k2 → ∞ as desired. 6.5

Proof of Proposition 5 For part (i) suppose i chooses the strategy ai = 1 − . Then, for δ sufficiently small

we have Ui ≥

1−2 2−2

− for i = 1, 2. Since can be chosen arbitrarily small, the it follows

that Ui → 1/2 as δ → 0. The first order condition (A6) implies that αi → 1 which in turn implies that U3 → 1. For part (ii) note that αi → 0 as δ → ∞. Let k1 = 1, k2 = k and define r = α1 /α2 and z = α12 δ. Then, the first order condition (A6) can be re-written as 

 ln

1+α1 1−α1



1   = 2z 1 − zr 1+r α1    1+α2 ln 1−α2 1   = 2zkr 1 − zkr 1+r α2 These two equations imply that z, r must be bounded away from zero and infinity for small δ. Moreover as δ → ∞ it must be that αi → 0 for i = 1, 2. Therefore,   1+αi ln 1−αi →2 αi And therefore, the limit solution to the above equations must satisfy 1 (1 − 2zr) = 2z 1+r 1 (1 − 2zkr) = 2zkr 1+r We can solve the two equations for r, z and find Ui from the first order condition √   1 Ui = 2z = 3k k + 2 1 − k + k2 − 2 . 6.6
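The closed form above can be checked numerically: the first limit equation gives $z = 1/(2(1+2r))$, and solving the second for $r$ by bisection reproduces $2z = (k + 2\sqrt{1-k+k^2}-2)/(3k)$. A minimal sketch (the function names are illustrative, not from the paper):

```python
import math

def limit_solution(k, lo=1e-9, hi=1e6, iters=200):
    """Solve the limiting first order conditions
         (1 - 2zr)/(1 + r) = 2z   and   (1 - 2zkr)/(1 + r) = 2zkr
    for (r, z): the first gives z = 1/(2(1 + 2r)); the second is
    solved for r by bisection (its left-hand side is decreasing in r)."""
    def F(r):
        z = 1.0 / (2.0 * (1.0 + 2.0 * r))
        return (1.0 - 2.0 * z * k * r) / (1.0 + r) - 2.0 * z * k * r
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    r = 0.5 * (lo + hi)
    return r, 1.0 / (2.0 * (1.0 + 2.0 * r))

def closed_form(k):
    # U_i = 2z = (k + 2 sqrt(1 - k + k^2) - 2) / (3k)
    return (k + 2.0 * math.sqrt(1.0 - k + k * k) - 2.0) / (3.0 * k)

for k in (0.5, 1.0, 2.0, 5.0):
    r, z = limit_solution(k)
    assert abs(2.0 * z - closed_form(k)) < 1e-7
```

For $k = 1$ (symmetric costs) the solution is $r = 1$, $z = 1/6$, so $U_i = 1/3$, consistent with the closed form.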

6.6 Proof of Proposition 6

Lemma A1: Let $f^i = (f_1^i, \ldots)$, $f^j = (f_1^j, \ldots)$, and $\tilde f^j = (\tilde f_1^j, \ldots)$ for $i = 1,2$ and $j \neq i$, and let $\tilde f_n^j(\zeta) \ge f_n^j(\zeta)$ for every $n$ and $\zeta \in \mathbb{R}^n$. Then, $U^i_{x_0}(f^1, f^2) \ge U^i_{x_0}(f^i, \tilde f^j)$.

Proof: Consider any sample path $X(\omega)$. Let $T(\omega), \tilde T(\omega)$ denote the termination dates corresponding to the strategy profiles $(f^i, f^j)$ and $(f^i, \tilde f^j)$ respectively. Note that $T(\omega) \le \tilde T(\omega)$ and therefore the cost of player $i$ is larger if the opponent chooses $\tilde f^j$. Furthermore, if $T(\omega) < \tilde T(\omega)$ then player $i$ wins along the sample path $X(\omega)$ when the strategy profile is $(f^i, f^j)$. Therefore, the probability of winning is higher under $(f^i, f^j)$ than under $(f^i, \tilde f^j)$.

Lemma A2: Let $f^i = (a_i, a_i, \ldots)$ be a stationary strategy and, for $j \neq i$, let $f^j = (f_1^j, B_j(a_i), B_j(a_i), \ldots)$. (i) If $B_j(a_i) < \alpha_j(x_0) < f_1^j(x_0)$, then $U^j_{x_0}(f^1, f^2) < 0$; and (ii) if $B_j(a_i) \ge f_1^j(x_0) > \alpha_j(x_0)$, then $U^j_{x_0}(f^1, f^2) > 0$.

Proof: Let $V(a, b)$ denote the payoff of player $j$ if $f_1^j = b$ and $f_n^j$, $n \ge 2$, are chosen optimally, $f^i = (a_i, \ldots)$, and the initial state is $\alpha^{-1}(a)$. Let $b^* = \arg\max_{b \in [0,1]} V(1/2, b)$. It is easy to see that $V$ is continuous and hence $b^*$ is well defined. Next, we show that $V(a, b^*) \ge V(a, b)$ for all $a \in \mathbb{R}$ and for all $b \in [0,1]$. To prove this, assume that $b > b^*$ and note that $V(a, b) - V(a, b^*) = cV(b^*, b)$, where $c$ is the probability that $X(t)$ reaches the state $y = \alpha^{-1}(b^*)$ for some $t \in [0, \Delta]$. Since $a$ is arbitrary, it follows from the optimality of $b^*$ that $V(b^*, b) \le 0$. Since the decision problem is stationary, it follows that $f_n^j = b^*$ is a best response to $f^i = (a_i, \ldots)$. This in turn implies that $b^* = B_j(a_i)$ and $U^j_{x_0}(f^1, f^2) \le 0$ if $B_j(a_i) < \alpha_j(x_0) < f_1^j(x_0)$. Let $b = f_1^j(x_0)$. If $U^j_{x_0}(f^1, f^2) = 0$ then by the argument above $f^j = (b, b, \ldots)$ is also a best response to $(a_i, a_i, \ldots)$. But this contradicts the fact that $B_j(a_i)$ is unique. Hence, a strict inequality must hold and part (i) of the Lemma follows. Part (ii) follows from a symmetric argument.

Proof of Proposition 6: Let
$$\bar a_i = \sup\{\alpha_i(x) \mid f^i = (f_1^i, \ldots) \in E^i,\ f^i(x) > \alpha_i(x) \text{ for some } x\}$$
$$a_i = \inf\{\alpha_i(x) \mid f^i = (f_1^i, \ldots) \in E^i,\ f^i(x) \le \alpha_i(x) \text{ for some } x\}$$
Hence, $a_i, \bar a_i$ are, respectively, the least and the most patient actions for $i$ observed in any subgame perfect Nash equilibrium. Clearly, $a_i \le a_i^* \le \bar a_i$.

First, we show that (i) $B_2(a_1) \ge \bar a_2$. To see this, note that if the assertion is false, then there exist $x_0$ and $(f^1, f^2) \in E$ such that $f_1^2(x_0) > \alpha_2(x_0) > B_2(a_1)$. By Lemma A1 and part (i) of Lemma A2, $U^2_{x_0}(f^1, f^2) \le U^2_{x_0}(a_1, f^2) < 0$, contradicting the fact that $(f^1, f^2) \in E$.

Next, we prove that (ii) $B_1(\bar a_2) \le a_1$. If the assertion is false, then there exist $x_0$ and $(f^1, f^2) \in E$ such that $f_1^1(x_0) \le \alpha_1(x_0) < B_1(\bar a_2)$. Then, $0 = U^1_{x_0}(f^1, f^2)$ and by Lemma A1, $U^1_{x_0}(\tilde f^1, f^2) \ge U^1_{x_0}(\tilde f^1, \bar a_2)$ for all $\tilde f^1$. By Lemma A2 part (ii), there exists $\tilde f^1$ such that $U^1_{x_0}(\tilde f^1, \bar a_2) > 0$ and hence $U^1_{x_0}(\tilde f^1, f^2) > 0$, a contradiction.

The two assertions (i) and (ii) above, together with the fact that $B_i$ is nonincreasing, yield $\phi(a_1) = B_1(B_2(a_1)) \le B_1(\bar a_2) \le a_1$. Since the slope of $\phi$ is always less than 1 (Lemma 2), we conclude that $a_1^* \le a_1$ and therefore $a_1^* = a_1$. Symmetric arguments to the ones used to establish (i) and (ii) above yield $B_2(\bar a_1) \le a_2$ and $B_1(a_2) \ge \bar a_1$. Hence, $\phi(\bar a_1) = B_1(B_2(\bar a_1)) \ge B_1(a_2) \ge \bar a_1$ and hence $a_1^* \ge \bar a_1$; therefore $a_1^* = a_1 = \bar a_1$, proving that $a_1^*$ is the only action that 1 uses in a subgame perfect Nash equilibrium. Hence, $E^1 = \{a_1^*\}$ and therefore $E = \{(a_1^*, a_2^*)\}$, as desired.

6.7 Proof of Proposition 7

If $(z_1, z_2)$ is an equilibrium for $\pi = \gamma$, then $(z_1, z_2)$ is also an equilibrium for $\pi = 1/2$. If $z_1 < 1/2 < z_2$ then the converse is also true. In the following, we assume that $\pi = \gamma$. If $X_t = x$ then the belief of player 3 that 1 holds the correct position is
$$p_\gamma(x) = \frac{\gamma}{\gamma + (1-\gamma)e^{-x}}$$
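The displayed posterior can be recovered from Bayes' rule with the Gaussian likelihood ratio for the two drifts $\mu = \pm 1/2$ (a sketch of the standard computation, with $\sigma^2 = 1$ as in the model):

```latex
% Likelihood ratio of drift 1/2 against drift -1/2 after observing X_t = x:
\frac{dP_{1/2}}{dP_{-1/2}}\Big|_{X_t = x}
  = \frac{\exp\!\big(\tfrac12 x - \tfrac18 t\big)}{\exp\!\big(-\tfrac12 x - \tfrac18 t\big)}
  = e^{x},
% so prior odds \gamma/(1-\gamma) update to \gamma e^x/(1-\gamma), and
p_\gamma(x) = \frac{\gamma e^{x}}{\gamma e^{x} + (1-\gamma)}
            = \frac{\gamma}{\gamma + (1-\gamma)e^{-x}}.
```

Note that the time dependence cancels in the ratio, which is why the belief depends on the path only through $X_t$.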

Let $y_i = p_\gamma^{-1}(z_i)$ denote the strategy expressed in terms of the realization of $X$. As in the proof of Lemma 1, let $E\big[C(X_t) \mid \mu = r\big]$ be the expected cost incurred by player 1 given the strategy profile $y = (y_1, y_2)$ and $\mu = r$. (Recall that $\sigma^2 = 1$.) Recall that
$$E\big[C(X_t) \mid \mu = r\big] = \frac{1}{2r^2}\,\frac{\big(1 - e^{2ry_2}\big)\big(2ry_1 - 1 + e^{-2ry_1}\big)}{1 - e^{-2r(y_1 - y_2)}} \qquad (A2)$$
The expected delay cost of player 1 is:
$$E\big[C(X_t)\big] = \gamma\,E\big[C(X_t) \mid \mu = 1/2\big] + (1-\gamma)\,E\big[C(X_t) \mid \mu = -1/2\big] \qquad (A1)$$

Let $v$ be the probability that player 1 wins. Since $p_t$ is a martingale and $T < \infty$, the martingale stopping theorem yields
$$v z_2 + (1-v) z_1 = E p_T = \gamma,$$
hence
$$v = \frac{\gamma - z_1}{z_2 - z_1}$$
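The martingale logic can be verified directly from the exit probabilities of the drifting Brownian motion (a numerical sketch; the scale-function formula for the exit probability is standard, and the helper names are illustrative):

```python
import math

def threshold(z, gamma):
    # y = p_gamma^{-1}(z): invert p_gamma(y) = gamma / (gamma + (1 - gamma) e^{-y})
    return math.log((1.0 - gamma) * z / (gamma * (1.0 - z)))

def hit_upper_first(mu, a, b):
    # P(BM with drift mu, sigma = 1, started at 0, hits b before a), a < 0 < b,
    # via the scale function s(x) = exp(-2 mu x)
    s = lambda x: math.exp(-2.0 * mu * x)
    return (s(0.0) - s(a)) / (s(b) - s(a))

gamma, z1, z2 = 0.5, 0.3, 0.8
y1, y2 = threshold(z1, gamma), threshold(z2, gamma)
# player 1 wins iff X reaches y2 before y1; average over the two states
v = gamma * hit_upper_first(0.5, y1, y2) + (1 - gamma) * hit_upper_first(-0.5, y1, y2)
assert abs(v - (gamma - z1) / (z2 - z1)) < 1e-9
```

The identity $v = (\gamma - z_1)/(z_2 - z_1)$ holds exactly, for any admissible $(\gamma, z_1, z_2)$, because $p_T \in \{z_1, z_2\}$ and $Ep_T = \gamma$.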

Substituting for $y_i$ we find that $U_i$, $i = 1,2$, is given by
$$U_1(z_1, z_2) = \frac{\gamma - z_1}{z_2 - z_1}\left(1 - k_1(z_2 - \gamma)\left[\frac{2\gamma - 1}{\gamma(1-\gamma)} + \frac{1 - 2z_1}{\gamma - z_1}\,\ln\frac{(1-z_1)/z_1}{(1-\gamma)/\gamma}\right]\right)$$
and
$$U_2(z_1, z_2) = \frac{z_2 - \gamma}{z_2 - z_1}\left(1 - k_2(\gamma - z_1)\left[\frac{1 - 2\gamma}{\gamma(1-\gamma)} + \frac{2z_2 - 1}{z_2 - \gamma}\,\ln\frac{z_2/(1-z_2)}{\gamma/(1-\gamma)}\right]\right)$$
The first order condition for player 1's maximization problem can be written as
$$\frac{z_2 - \gamma}{(z_2 - z_1)^2}\,g(z_1, z_2, \gamma) = 0$$
where
$$g(z_1, z_2, \gamma) = -\frac{1}{k_1} + (1 - 2z_2)\ln\frac{z_1/(1-z_1)}{\gamma/(1-\gamma)} - \frac{(\gamma - z_1)\big(z_2 z_1(1 - 2\gamma) - (1-\gamma)z_2 + \gamma z_1\big)}{z_1(1-z_1)\gamma(1-\gamma)}$$
Note that the second order condition for a maximum is satisfied at $z_1$ if and only if $\partial g(z_1, z_2, \gamma)/\partial z_1 < 0$. A direct calculation yields
$$\frac{\partial g(z_1, z_2, \gamma)}{\partial z_1} = -(z_2 - z_1)(\cdots) < 0.$$
Let $b = \partial g(z_1, z_2, \gamma)/\partial z_2$. Since $z_2 > 0$ and $g(z_1, z_2, \gamma) = 0$ at an optimum, it follows that $b > 0$. Therefore,
$$\frac{\partial B_1(z_2)}{\partial z_2} = \frac{-b}{\partial g(z_1, z_2, \gamma)/\partial z_1} > 0$$
(Differentiability follows from the fact that $\partial g(z_1, z_2, \gamma)/\partial z_1 \neq 0$.)

By the above argument, $B_1$ is continuous and therefore the analogous function $B_2$ is also continuous. Moreover, $B_1 \in [0, \gamma]$ and analogously $B_2 \in [\gamma, 1]$. Therefore, $(B_1, B_2): [0, \gamma] \times [\gamma, 1] \to [0, \gamma] \times [\gamma, 1]$ has a fixed point.

To show uniqueness, define $\phi(z_1) = B_1(B_2(z_1))$. We will show that $|\phi'(z_1)| < 1$ if $z_1 = \phi(z_1)$. Let $z_1$ be a fixed point of $\phi$ and let $z_2 = B_2(z_1)$. Then,
$$|\phi'(z_1)| = \left|\frac{dB_1}{dz_2}\cdot\frac{dB_2}{dz_1}\right| = \left|\frac{\partial g(z_1, z_2, \gamma)/\partial z_2}{\partial g(z_1, z_2, \gamma)/\partial z_1}\cdot\frac{\partial g(1-z_2, 1-z_1, 1-\gamma)/\partial z_1}{\partial g(1-z_2, 1-z_1, 1-\gamma)/\partial z_2}\right| := h$$
A direct calculation shows that
$$h = \frac{z_1^2(1-z_1)^2\,z_2^2(1-z_2)^2}{(z_2 - z_1)^2}\,\big(2\ln\kappa(1-z_2, 1-\gamma) - \lambda(1-z_2, 1-\gamma)\big)\big(2\ln\kappa(z_1, \gamma) - \lambda(z_1, \gamma)\big)$$
where
$$\kappa(a, b) = \frac{a/(1-a)}{b/(1-b)}, \qquad \lambda(a, b) = \frac{\big(ab - (1-a)(1-b)\big)(b-a)}{b(1-b)a(1-a)}.$$
Note that $-\ln x \le -1 + 1/x$ for $x \le 1$. Further note that $\ln\kappa(1-z_2, 1-\gamma) < 0$, $\ln\kappa(z_1, \gamma) < 0$, $\lambda(z_1, \gamma) > 0$, and $\lambda(1-z_2, 1-\gamma) > 0$. Therefore, substituting $-1 + 1/\kappa(z_1, \gamma)$ for $\ln\kappa(z_1, \gamma)$ and $-1 + 1/\kappa(1-z_2, 1-\gamma)$ for $\ln\kappa(1-z_2, 1-\gamma)$ we get an upper bound for $h$. Following the substitution we are left with the following expression:
$$h \le \frac{z_1(1-z_1)(\gamma - z_1)\,z_2(1-z_2)(z_2 - \gamma)(1 + \gamma - z_1)(1 + z_2 - \gamma)}{\gamma^2(1-\gamma)^2(z_2 - z_1)^2} \le \frac{9}{4}\,\frac{z_1(\gamma - z_1)(1-z_1)\,z_2(1-z_2)(z_2 - \gamma)}{\gamma^2(1-\gamma)^2(z_2 - z_1)^2}$$
Note that $z_1 \le \gamma$ and $1 - z_2 \le 1 - \gamma$. Choose $\delta_1, \delta_2$ so that $z_1 = \delta_1\gamma$ and $1 - z_2 = \delta_2(1-\gamma)$; then the above expression simplifies to
$$\frac{9}{4}\,\frac{(1 - \delta_1\gamma)\big(1 - \delta_2(1-\gamma)\big)\,\delta_1(1-\delta_1)\,\delta_2(1-\delta_2)}{\big(1 - \delta_1\gamma - \delta_2(1-\gamma)\big)^2}$$
with $\delta_i \in [0,1]$, $\gamma \in [0,1]$. The above expression is maximal at $\gamma = 1/2$, $\delta_1 = \delta_2 = \delta$ for some $\delta \in (0,1)$, and therefore it suffices to show that
$$\frac{9}{4}\,\frac{\delta^2(1-\delta)^2(1-\delta/2)^2}{(1-\delta)^2} \le 1$$
The last inequality holds by a direct calculation.
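The final bound, and the two-parameter expression it is reduced from, can be confirmed on a grid (a numerical sketch of the "direct calculation"; the function name is illustrative):

```python
def contraction_bound(d1, d2, g):
    # (9/4) (1 - d1 g)(1 - d2(1-g)) d1 (1-d1) d2 (1-d2) / (1 - d1 g - d2(1-g))^2
    den = (1.0 - d1 * g - d2 * (1.0 - g)) ** 2
    num = (1.0 - d1 * g) * (1.0 - d2 * (1.0 - g)) * d1 * (1.0 - d1) * d2 * (1.0 - d2)
    return 2.25 * num / den

grid = [i / 50.0 for i in range(51)]

# the reduced one-parameter bound (9/4) d^2 (1 - d/2)^2 never exceeds 9/16 < 1
assert max(2.25 * d * d * (1.0 - d / 2.0) ** 2 for d in grid) <= 9.0 / 16.0 + 1e-12

# the full two-parameter expression stays below 1 on an interior grid
vals = [contraction_bound(d1, d2, g)
        for d1 in grid for d2 in grid for g in grid
        if 0 < d1 < 1 and 0 < d2 < 1 and 0 < g < 1
        and (1.0 - d1 * g - d2 * (1.0 - g)) ** 2 > 1e-12]
assert max(vals) < 1.0
```

Since $\delta(1-\delta/2)$ is increasing on $[0,1]$ with maximum $1/2$ at $\delta = 1$, the reduced bound peaks at exactly $9/16$, comfortably below 1.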

6.8 Proof of Lemma 3

Let $g$ be as defined in the proof of Proposition 7. Note that
$$\frac{dz_1}{d\gamma} = -\left(\frac{\partial g(z_1, z_2, \gamma)}{\partial \gamma}\right)\left(\frac{\partial g(z_1, z_2, \gamma)}{\partial z_1}\right)^{-1}$$
Substituting for $g$ and calculating the above partial derivatives yields
$$\frac{dz_1}{d\gamma} = \frac{z_1^2(1-z_1)^2}{\gamma^2(1-\gamma)^2}\,\frac{z_2 - \gamma}{z_2 - z_1}$$
and an analogous argument yields
$$\frac{dz_2}{d\gamma} = \frac{z_2^2(1-z_2)^2}{\gamma^2(1-\gamma)^2}\,\frac{\gamma - z_1}{z_2 - z_1}$$
Next, note that
$$\frac{\partial U_3}{\partial z_i} = 2(-1)^i\,\frac{(z_i - 1/2)^2}{(z_2 - z_1)^2}$$
Combining the three displayed equations then yields Lemma 3.
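Carrying out the combination explicitly gives the derivative used in Lemma 3 (a sketch under the sign convention of the expression for $\partial U_3/\partial z_i$ above):

```latex
\frac{dU_3}{d\gamma}
  = \frac{\partial U_3}{\partial z_1}\frac{dz_1}{d\gamma}
  + \frac{\partial U_3}{\partial z_2}\frac{dz_2}{d\gamma}
  = \frac{2}{\gamma^2(1-\gamma)^2 (z_2 - z_1)^3}
    \Big[ \big(z_2 - \tfrac12\big)^2 z_2^2 (1-z_2)^2 (\gamma - z_1)
        - \big(z_1 - \tfrac12\big)^2 z_1^2 (1-z_1)^2 (z_2 - \gamma) \Big].
```

The sign of $dU_3/d\gamma$ is thus governed by the bracketed comparison of the two boundary terms, which is the form exploited in the proof of Proposition 8 below.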

6.9 Proof of Proposition 8

Part (i): First, we show that $U_3(1-\gamma) > U_3(\gamma)$ for $\gamma < 1/2$. This will prove that $\gamma^* \ge 1/2$. Let $z_i(\gamma)$ denote the equilibrium strategy of advocate $i$ in $W^\gamma$.

Fact 1: Assume $\gamma < 1/2$ and $k_1 < k_2$. Then, $1 - z_1(\gamma) > \max\{z_2(1-\gamma),\ 1 - z_1(1-\gamma)\}$.

Proof: Note that $1 - z_1(\gamma) = z_2(1-\gamma)$ if $k_1 = k_2$. Since $z_1$ is increasing in $k_1$ and decreasing in $k_2$, it follows that $1 - z_1(\gamma) > z_2(1-\gamma)$. That $1 - z_1(\gamma) > 1 - z_1(1-\gamma)$ follows from the fact that $z_1$ is increasing in $\gamma$.

Fact 2: Assume $\gamma < 1/2$, $k_1 < k_2$ and $z_2(\gamma) > 1/2$. Then, $z_2(1-\gamma) - z_1(1-\gamma) \ge z_2(\gamma) - z_1(\gamma)$.

Proof: Let $\Delta^+(x) = z_2(1/2 + x) - z_1(1/2 + x)$ and let $\Delta^-(x) = z_2(1/2 - x) - z_1(1/2 - x)$ for $x \in [0, \bar x]$, where $\bar x$ is the largest $x \le 1/2$ such that $z_2(1/2 - x) \ge 1/2$. First, we establish that if $\Delta^+(x) \le \Delta^-(x)$ then $\partial\Delta^+(x)/\partial x > \partial\Delta^-(x)/\partial x$. To see this, note that (from the proof of Lemma 3)
$$\frac{dz_1}{d\gamma} = \frac{z_1^2(1-z_1)^2}{\gamma^2(1-\gamma)^2}\,\frac{z_2 - \gamma}{z_2 - z_1}, \qquad \frac{dz_2}{d\gamma} = \frac{z_2^2(1-z_2)^2}{\gamma^2(1-\gamma)^2}\,\frac{\gamma - z_1}{z_2 - z_1}$$
From Fact 1 and $z_1(\gamma) \le 1/2$, $z_2(\gamma) > 1/2$ we conclude that for $\gamma < 1/2$
$$\big(z_2(\gamma) - \gamma\big)\,z_1(\gamma)^2\big(1 - z_1(\gamma)\big)^2 < \big(1 - \gamma - z_1(1-\gamma)\big)\,z_2(1-\gamma)^2\big(1 - z_2(1-\gamma)\big)^2$$
and
$$\big(z_2(1-\gamma) - (1-\gamma)\big)\,z_1(1-\gamma)^2\big(1 - z_1(1-\gamma)\big)^2 < \big(\gamma - z_1(\gamma)\big)\,z_2(\gamma)^2\big(1 - z_2(\gamma)\big)^2$$
and hence the assertion follows. Note that $\Delta^+(0) = \Delta^-(0)$. Since $\partial\Delta^+(x)/\partial x > \partial\Delta^-(x)/\partial x$ when $\Delta^+(x) = \Delta^-(x)$, and $\Delta^+, \Delta^-$ are continuously differentiable, we conclude that $\Delta^+(x) \ge \Delta^-(x)$.

To complete the proof of part (i), let $a_1 := 1 - 2z_1$ and $a_2 := 2z_2 - 1$. Then,
$$U_3(\gamma) - U_3(1-\gamma) = \frac{a_1(\gamma)a_2(\gamma)}{a_1(\gamma) + a_2(\gamma)} - \frac{a_1(1-\gamma)a_2(1-\gamma)}{a_1(1-\gamma) + a_2(1-\gamma)}$$
Fact 1 implies that $a_1(\gamma) > \max\{a_2(1-\gamma), a_1(1-\gamma)\}$ and Fact 2 implies that
$$a_1(1-\gamma) + a_2(1-\gamma) > a_1(\gamma) + a_2(\gamma)$$
It is easy to check that these two inequalities imply that $U_3(\gamma) < U_3(1-\gamma)$, as desired.

Part (ii): By part (i) we have $\gamma^* \ge 1/2$. If $1/2 - z_1^* > z_2^* - 1/2$ then $\gamma - z_1 > z_2 - \gamma$ and hence Lemma 3 implies that $U_3$ is strictly increasing in $\gamma$. This proves part (ii).

6.10 Proof of Proposition 9

We first prove this result for $x_0 \in (z^*, 0)$ and then extend it to other initial conditions. Define $V_t^i(\omega)$ to be the continuation payoff of type $i$ in a monotone equilibrium in period $t$ after history $(X_\tau(\omega))_{\tau \le t}$.

Lemma A3: $V_t^0(\omega) > 0$ for $p_t(\omega) > p(z^*)$, and $V_t^1(\omega) > 0$ for $p_t(\omega) > p(z^*) - \epsilon$ for some $\epsilon > 0$.

Proof: A lower bound to player 0's payoff can be computed by assuming that the lower bound on $p_\tau(\omega)$ is attained for all $\tau \ge t$ and that player 0 quits whenever $p_\tau = p(z^*)$. Let $z$ be such that
$$p_t(\omega) = \frac{1}{1 + e^{-z}}$$
and assume (wlog) $p_t(\omega) < 1/2$. Then, using the expressions from Lemma 1 (A3) we find that the player's payoff is
$$\Pi^0(z) = P(z^* - z, -z)\big(1 - k_1 T(z^* - z, -z)\big)$$
where $\mu = -1/2$. Substituting the expressions for $P$ and $T$, it is straightforward to show that $\Pi^0(z) > 0$ for $z > z^*$. The statement for $V^1$ follows from an analogous argument where $\mu = 1/2$ is substituted to obtain the corresponding expression for $\Pi^1$.

Lemma A4: If $p_t(\omega) > p(z^*)$ then $\lim_{\Delta \to 0} p_{t-\Delta}(\omega) > p(z^*)$.

Proof: Suppose not. Then, there is $\tau$ such that $p_\tau(\omega) > p(z^*)$ and $p_\tau(\omega)$ "jumps up" at $\tau$, i.e., $Q^0(\omega)$ has a mass point at $\tau$. Since $p_\tau(\omega) > p(z^*)$, it cannot be optimal for type 0 to quit after $\omega_\tau$ and hence $Q^0(\omega)$ cannot have a mass point at $\tau$, yielding the desired contradiction.

Lemma A5: Let $p_t(\omega) = p(z^*)$. Then $V_t^0(\omega) = 0$.

Proof: Note that when $p_t(\omega) > p(z^*)$ no player quits. Therefore, $p(\omega)$ is continuous at $t$. Lemma A4 therefore implies that $q(\omega) = \min\{p(z^*), p(\omega)\}$ is a continuous function of $t$. This in turn implies that for $Y_t = \inf_{\tau \le t} X_\tau$ and $Y_t^{z^*} = \min\{0, Y_t - z^*\}$,
$$p_t \le \frac{1}{1 + e^{Y_t^{z^*} - X_t}}$$
where equality holds for $p_t \ge p(z^*)$. Assume that player 0 never quits but incurs the cost of information only if $p_t \in (p(z^*), 1/2]$ with $p(z^*) = \frac{1}{1 + e^{-z^*}}$. Clearly, this strategy provides an upper bound to the payoff of player 0. Denote the corresponding payoff by $W^*$.

We can compute an upper bound to $W^*$ by assuming that when $p(z^*)$ is reached, information is provided at no cost to the players until $p_t = \frac{1}{1 + e^{-(z^* + \delta)}}$ is reached. Let $Z_t = \sup_{0 \le s \le t}(X_t - X_s)$. Then,
$$p_t \le \frac{1}{1 + e^{-z^* - Z_t}}$$
and the probability that $p_t > \frac{1}{1 + e^{-(z^* + \delta)}}$ for some $t$ is bounded above by $e^{-\delta}$, since $\Pr\{Z_t \le z\} \to 1 - e^{-z}$ as $t \to \infty$ (see Harrison, page 15).

For $z \in (z^*, 0)$ let $\Pi^0(x, z) = P(x - z, -z)\big(1 - k_1 T(x - z, -z)\big)$ and note that $\Pi^0(x, \cdot)$ has a unique maximum at $z^*$ (independent of $x$). Let $V^\delta$ denote the continuation value of player 0 at $p_t = \frac{1}{1 + e^{-(z^* + \delta)}}$ when the player quits at $p(z^*) = \frac{1}{1 + e^{-z^*}}$, and note that
$$V_t^0(\omega) \le \frac{1}{1 - P(\delta, -z^* - \delta)}\,\Pi^0(z^* + \delta, z^*)$$
Note that
$$\lim_{\delta \to 0}\frac{\Pi^0(z^* + \delta, z^*) - \Pi^0(z^*, z^*)}{1 - e^{-\delta}} = \lim_{\delta \to 0}\frac{\Pi^0(z^* + \delta, z^*) - \Pi^0(z^* + \delta, z^* + \delta)}{1 - e^{-\delta}} = 0$$
since $\partial\Pi^0(x, z)/\partial z = 0$ at $z = z^*$, uniformly for all $x$, by the optimality of $z^*$. Hence the upper bound converges to zero as $\delta \to 0$ and $V_t^0(\omega) = 0$, as desired.

Lemma A6: $p_t \ge p(z^*)$ a.s.

Proof: Suppose $\Pr\{p_t < p(z^*) - \epsilon\} > 0$ for some $\epsilon > 0$. Then, since $p_t$ is upper semi-continuous, there exist some $\omega, t$ such that $p_t(\omega) = p(z^*) - \epsilon/2$ and $\Pr\{p_t < p(z^*) - \epsilon \mid \omega_t\} > 0$. By Lemma A4, the stochastic process $p_t$ cannot jump over $p(z^*)$ and hence, given $p_t < p(z^*)$, it either stays below $p(z^*)$ or reaches $p(z^*)$ at some future date. Note also that by construction the expected time until $p_\tau = p(z^*)$ given $\{X_\tau\}_{\tau \le t}$ is strictly greater than zero. Lemma A5 now implies that the expected payoff of type 0 must be negative and hence type 0 must quit. Note, however, that the expected payoff of type 1 is strictly positive (for $\epsilon$ sufficiently small) and hence type 1 cannot quit; this in turn implies that $p_t(\omega) = 1$, contradicting the initial hypothesis.

Proof of Proposition 10: Let $Y_t = \inf_{\tau < t} X_\tau$. If $p_0^* > 0$, then there is some $\epsilon > 0$ such that for $\pi \ge p_0^* - \epsilon$ player 1 has a strict incentive not to quit. To see this, let $\pi/p_0^* = e^{-\delta}$ and let $Z_t = \sup_{\tau \le t} X_\tau$. Since beliefs are monotone, the payoff of player 1 from not quitting is bounded below by
$$\Pr\{Z_\delta > \delta \mid \mu = 1/2\}\cdot\hat V^1 - k_1\delta$$
Since $\Pr\{Z_\delta > \delta \mid \mu = 1/2\} \to 1$ as $\delta \to 0$, the claim follows. But this implies that, in equilibrium, player 0 cannot quit at $\pi = p_0^*$ and hence, at date 0, $p_t$ must jump to $p(z^*)$. It follows that $p_0^* = 0$, as desired.
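The limit $\Pr\{Z_\delta > \delta \mid \mu = 1/2\} \to 1$ can be checked against the standard closed form for the running maximum of a drifting Brownian motion, $\Pr\{\sup_{s \le t}(\mu s + W_s) \ge a\} = \Phi\big(\frac{\mu t - a}{\sqrt t}\big) + e^{2\mu a}\,\Phi\big(\frac{-a - \mu t}{\sqrt t}\big)$ (a numerical sketch; the function names are illustrative):

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_sup_exceeds(mu, t, a):
    # P( sup_{s<=t} (mu s + W_s) >= a ), standard reflection formula
    rt = math.sqrt(t)
    return normal_cdf((mu * t - a) / rt) + math.exp(2.0 * mu * a) * normal_cdf((-a - mu * t) / rt)

# with a = t = delta and mu = 1/2 the probability increases toward 1 as delta -> 0
probs = [prob_sup_exceeds(0.5, d, d) for d in (0.1, 0.01, 0.001)]
assert all(p1 > p0 for p0, p1 in zip(probs, probs[1:]))
assert probs[-1] > 0.97
```

Intuitively, over a window of length $\delta$ the maximum is of order $\sqrt\delta$, which dominates the required level $\delta$, so the crossing probability tends to one.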

6.11 Proof of Proposition 12

Proof of Lemma 4: (i) Let $p(x), p(y)$ be the belief thresholds corresponding to the profile $Q^{xy}$. Note that player 1, type 1, never quits and therefore the probability that player 2 wins is $v$, where $v$ satisfies
$$(1 - v)\cdot q + v\cdot 1 = 1/2$$
This follows since $p_t$ is a martingale and hence, by the martingale stopping theorem, $E(p_T) = 1/2 = p_0$. Note that $v$ is independent of the strategy of player 0. In particular, we may assume that player 0 never quits, and hence $v$ corresponds to the win probability of player 2 if his opponent never quits.

Next, consider the cost of the strategy $Q^2(y)$ with $\bar p = \frac{1}{1 + e^{-y}}$. We can compute an upper bound to this cost as in Lemma 1. Let $p_t = 1/2$ and assume player 2 incurs the cost until time $\tau$ with $p_\tau \in \{1/2 - \epsilon, \bar p\}$. Choose $\epsilon$ so that $1/2 - \epsilon < p = \frac{1}{1 + e^{-x}}$. Let $c^\epsilon$ denote this cost. Let $\pi^\epsilon$ denote the probability that $p_t$ reaches $1/2$ at some time after $\tau$, given that $p_\tau = 1/2 - \epsilon$. This probability satisfies
$$\frac12\cdot\pi^\epsilon + 0\cdot(1 - \pi^\epsilon) = \frac12 - \epsilon$$
Note that $\pi^\epsilon$ is independent of the strategy of player 0. An upper bound to player 2's cost is therefore given by
$$\bar C(q) = c^\epsilon(q) + \pi^\epsilon\,\bar C(q)$$
Note that this upper bound is independent of the strategy of player 0. We may therefore assume that player 0 never quits. For that case, Lemma 1 provides the limiting cost as $\epsilon \to 0$ by setting $\alpha_1 = 1$. An analogous argument can be used to derive a lower bound (as in Lemma 1). Hence, we can apply Lemma 1 to show that the utility of player 2 from strategy $y$ is given by
$$U_2(\bar p) = \frac{1}{2\bar p}\left(1 - k_w \ln\frac{\bar p}{1 - \bar p}\right)$$
where $\bar p = \frac{1}{1 + e^{-y}}$. By Lemma 2, $U_2$ is strictly concave with a maximum at $\bar p = \frac{1 + B(1)}{2}$. Hence, for $q < \bar p$, $U_2(\bar p) - U_2(q) > 0$. Assume player 2 quits if and only if $p_t \ge \bar p$. Let $V(q)$ denote the player's continuation payoff conditional on a history with $p_t(\omega) = q$. Note that
$$\pi^q\,V(q) = U_2(\bar p) - U_2(q)$$
where $\pi^q$ is the probability that $p_t$ reaches $q$ before it terminates. Hence, for any $\omega, t$ with $p_t(\omega) < \bar p$ there is a strategy for player 2 that guarantees a strictly positive payoff. This implies that player 2 quits with probability zero for all $t, \omega$ with $p_t(\omega) < \bar p$.

Next, we show that player 2 must quit at $\bar p$. Suppose player 2 never quits but incurs the cost only if $p_t \le \bar p$; for $q > \bar p$ the cost is zero. Clearly, this is an infeasible strategy (because 2 does not incur the cost of information provision for $p_t > \bar p$) that provides an upper bound to the utility of any strategy that does not quit at $\bar p$. Let $W^*$ be the utility of player 2 at this strategy. We will show below that $W^* = U_2(\bar p)$. This implies that the overall cost incurred by player 2 cannot exceed the cost of the threshold strategy $x^*$ and therefore $Q^2(x^*, y^*)$ is the unique best reply.

Claim: $W^* = U_2(\bar p)$.

Proof: Let $V(q) = \frac{U_2(\bar p) - U_2(q)}{\pi^q}$ for $1/2 \le q < \bar p$, where $\pi^q$ is the probability that $p_t$ reaches $q$. Note that $V(q)$ is the continuation value of the strategy $Q(x, y^*)$ at $p_t = q$. Furthermore, $\pi^q$ is bounded away from zero for all $q \in [1/2, q^*]$.

Consider the following upper bound for $W^*$. If $p_t = \bar p$ then information is generated at no cost until either $p_\tau = \bar p - \delta$ (which occurs with probability $r$) or $p_\tau = 1$ (which occurs with probability $1 - r$). In the latter case, player 2 wins. In the former case, the agent proceeds with the threshold strategy $\bar p$ until either $\bar p$ is reached or the opponent quits. If $\bar p$ is reached then free information is generated again, as described above. By the martingale property of $p_t$ we have
$$r(\bar p - \delta) + (1 - r)\cdot 1 = \bar p$$
which implies that
$$r = \frac{1 - \bar p}{1 - \bar p + \delta}$$
and therefore
$$W^* \le U_2(\bar p) + \pi^q(1 - r)\,V(\bar p - \delta)$$
Since $1 - r \le \delta/(1 - \bar p)$, note that as $\delta \to 0$ we have
$$\frac{U_2(\bar p) - U_2(\bar p - \delta)}{\delta} \to 0$$
by the optimality of the threshold $\bar p$, and hence $W^* \le U_2(\bar p)$; the claim follows. It follows that the player must quit when $p_t > \bar p$.

Proof of Lemma 5: It is straightforward to show that $\Pi(x, y^*)$ is concave in $x$ for $x < 0$ and $y^* > 0$ and hence has a unique maximum $x^*$. To see that $x^* < 0$ for all $k_1 > 0$ and all $y^* > 0$, note that
$$\left.\frac{\partial \tilde\Pi^0(x, y^*)}{\partial x}\right|_{x=0} = -\frac{1}{e^{y^*} - 1}$$
and hence $x^* = 0$ cannot be optimal for any $k_1$.

Proof of Proposition 12: Let $P(x, y)$ denote the probability of reaching $y$ before $x$ when $\mu = -1/2$ and let $E[C(X_t) \mid -1/2]$ denote the cost, as defined in Lemma 1, when $y_1 = x$, $y_2 = y^*$ and $\mu = -1/2$. Lemma 1 provides expressions for $P(x, y)$ and $E[C(X_t) \mid -1/2]$. Substituting those we obtain
$$P(x, y^*) - E[C(X_t) \mid -1/2] = \Pi(x, y^*)$$
Since $\Pi(x, y^*) > 0$ for $x > x^*$, it is optimal for player 0 not to quit for $p > p(x^*)$. That quitting at $p(x^*)$ is optimal follows from an argument analogous to the one given in Lemma A5, which we provide for completeness.

Assume that player 0 never quits but incurs the cost of information only if $p_t \in (p(x^*), 1/2]$. Clearly, this strategy provides an upper bound to the payoff of player 0. Denote the corresponding payoff by $W^*$. We can compute an upper bound to $W^*$ by assuming that when $x^*$ is reached, information is provided at no cost until $p_t = \frac{1}{1 + e^{-(x^* + \delta)}}$ is reached. Let $Z_t = \sup_{0 \le s \le t}(X_t - X_s)$ and let $p_0 = \frac{1}{1 + e^{-x^*}}$. Then,
$$p_t \le \frac{1}{1 + e^{-x^* - Z_t}}$$
and the probability that $p_t > \frac{1}{1 + e^{-(x^* + \delta)}}$ for some $t$ is at most $e^{-\delta}$, since $\Pr\{Z_t \le z\} \to 1 - e^{-z}$ as $t \to \infty$ (see Harrison, page 15). Let $V^\delta$ denote the continuation value of player 0 at $p_t = \frac{1}{1 + e^{-(x^* + \delta)}}$ when the player quits at $p = \frac{1}{1 + e^{-x^*}}$, and note that
$$V^\delta = \frac{1}{1 - P(x^* + \delta, y^*)}\Big(\Pi^0(x^*, y^*) - \Pi^0(x^* + \delta, y^*)\Big)$$
Then,
$$W^* = \Pi^0(x^*, y^*) + \frac{V^\delta}{1 - e^{-\delta}}$$
Note that
$$\lim_{\delta \to 0}\frac{\Pi^0(x^*, y^*) - \Pi^0(x^* + \delta, y^*)}{1 - e^{-\delta}} = 0$$
since $\partial\Pi^0(x^*, y^*)/\partial x = 0$ by the optimality of $x^*$. Therefore, $W^* \to \Pi^0(x^*, y^*)$ as $\delta \to 0$. This shows that quitting at $x^*$ is optimal. The argument above also shows that $V^\delta \to 0$ and hence player 0 is indifferent between quitting and not quitting at $p_t = p(x^*)$.

It remains to show that the strategy $Q^1$ is optimal. Let
$$\Pi^1(x, y^*) := \frac{1 - e^x}{1 - e^{-y^* + x}}\Big(1 - 2k_1(1 - e^x)\big(1 - e^{-y^*}(1 - x)\big)\Big)$$
and note that $\Pi^1$ is the payoff of type 1 if 1 quits at $p(x)$. A straightforward calculation shows that
$$\frac{\partial \Pi^1(x, y)}{\partial x} - \frac{\partial \Pi^0(x, y)}{\partial x}$$