Corporate lawyers and their clients routinely hire experts to deliver probabilistic forecasts. For instance, they hire credit rating agencies to deliver credit ratings, which effectively are probabilistic forecasts of credit default events. They also hire experts to deliver probabilistic forecasts of economic, legal, and political events, and even weather events. In hiring an expert, however, they face two distinct problems. The first is a moral hazard problem—how to evaluate, or “score,” an expert’s forecasts in a way that incentivizes the expert to honestly report her opinions (and, importantly, does not perversely incentivize the expert to dishonestly report her opinions to game the system). The second is an adverse selection problem—how to distinguish informed experts (genuine experts) from uninformed experts (charlatans).1 The scoring problem was famously solved by Glenn Brier, who proposed a scoring rule that gives the proper incentives.2 The Brier score is essentially the mean squared error of the expert’s forecasts over the evaluation sample. Solutions to the “charlatans” problem, however, have proven harder to come by. When it comes to probabilistic forecasts, it turns out that it is difficult to devise ex ante tests to screen informed experts from uninformed experts. Basically, the difficulty is that a test which is designed to pass a genuine expert with high probability can also be passed by a strategic charlatan with high probability.3 And ex post warranties are generally not effective.4
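The Brier score just described can be sketched in a few lines of Python. This is a minimal illustration for binary events, not code from Brier’s paper; the function name is mine:

```python
# The Brier score for binary events: the mean squared error between
# forecast probabilities and realized outcomes (1 if the event
# occurred, 0 if not). Lower scores are better.

def brier_score(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts over a sample."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Forecasts of 0.9, 0.2, and 0.7 for events that came out 1, 0, and 1:
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # about 0.047
```

A perfect forecaster, who says 1 before every event that occurs and 0 before every event that does not, scores exactly zero.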
In a recent article, Alvaro Sandroni proposes a novel contractual solution to the charlatans problem. More specifically, Sandroni shows that it is possible to write a contract that incentivizes a genuine expert to honestly report her informed opinion and, at the same time, incentivizes a charlatan to “do no harm,” i.e., not report a misleading, uninformed opinion. What’s more, the contract is simple and enforceable, as it makes the expert’s fee contingent on two observable and verifiable facts: the expert’s opinion and the outcome of the event. That said, the contract’s ability to provide the correct incentives depends on a key assumption about the behavior of charlatans, which may or may not hold in reality.
The following example illuminates Sandroni’s elegant and important result.
Alcoa seeks expert advice on the probability that the Internal Revenue Service (IRS) will disallow a deduction on its income tax return. Alcoa has a prior belief about the true probability, but it wants an expert opinion. That is, Alcoa wishes to hire an expert to report her opinion about the true probability.
Alcoa knows that there are two types of experts out there—informed experts (genuine experts), who know the true probability that the IRS will disallow the deduction, and uninformed experts (charlatans), who do not. However, Alcoa cannot distinguish between genuine experts and charlatans. Nevertheless, Alcoa can write a contract that (i) induces the expert to honestly report her opinion if she is informed (so that Alcoa gets the benefit of her expertise if she is a genuine expert), but (ii) induces the expert to report Alcoa’s prior belief if she is uninformed (so that Alcoa is not misled into altering its prior belief by a charlatan). The key assumption is that an informed expert seeks to maximize the expected value of the contract, whereas an uninformed expert seeks to maximize the minimum expected value of the contract. I will say more about the “maxmin” assumption for charlatans later.
The contract makes the expert’s fee contingent on two observable and verifiable facts: (i) her reported opinion and (ii) the eventual outcome (i.e., whether the IRS ultimately disallows Alcoa’s deduction). Specifically, the contract provides that the expert’s fee is the sum of two components: (i) a fixed component and (ii) a contingent component equal to the difference between (a) the Brier score of Alcoa’s prior belief and (b) the Brier score of the expert’s reported opinion.
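For a single binary event, where the Brier score reduces to the squared error, the fee structure can be sketched as follows. The function name and the fixed fee amount are illustrative, not figures from the article:

```python
# Sketch of the contract's fee for one binary event: outcome is 1 if
# the IRS disallows the deduction, 0 otherwise. The fixed fee of 1.0
# is a placeholder chosen for illustration.

def expert_fee(report, prior, outcome, fixed=1.0):
    """Fixed component plus (Brier score of Alcoa's prior belief
    minus Brier score of the expert's reported opinion)."""
    brier = lambda belief: (belief - outcome) ** 2
    return fixed + brier(prior) - brier(report)

# A report of 0.8 against a prior of 0.5, when the IRS disallows:
# fee = 1.0 + 0.25 - 0.04 = 1.21
fee = expert_fee(0.8, 0.5, 1)
```

Note that if the expert simply reports the prior, the two Brier scores cancel and the fee collapses to the fixed component no matter how the event turns out.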
With this fee structure, the contract provides the correct incentives to genuine experts and charlatans.
First, the expected value of the contract is maximized when the expert reports the true probability. This is because reporting the true probability minimizes the expected Brier score (the expected squared error) of the expert’s reported opinion, which in turn maximizes the contingent component of her fee, and because the other parts of the expert’s fee (the fixed component and the Brier score of Alcoa’s prior belief) do not vary with the expert’s report. Thus, the contract incentivizes a genuine expert, who knows the true probability, to honestly report her informed opinion.
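A small numerical check illustrates this incentive. In this sketch (single binary event, illustrative names, placeholder fixed fee), a grid search over candidate reports finds the expected fee is highest at the true probability:

```python
# Sketch: for one binary event, the expected fee under the true
# probability is maximized by reporting that true probability.
# The fixed fee of 1.0 is an illustrative placeholder.

def expected_fee(report, prior, true_prob, fixed=1.0):
    """Expected fee, averaging over the binary outcome."""
    def fee(outcome):
        return fixed + (prior - outcome) ** 2 - (report - outcome) ** 2
    return true_prob * fee(1) + (1 - true_prob) * fee(0)

true_prob, prior = 0.7, 0.5
# Search a grid of candidate reports; the best report is the true probability.
best = max((r / 100 for r in range(101)),
           key=lambda r: expected_fee(r, prior, true_prob))
print(best)  # 0.7
```

This is just the defining property of a proper scoring rule: truthful reporting maximizes the expected score, so it also maximizes the expected fee.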
Second, the minimum expected value of the contract is maximized when the expert reports Alcoa’s prior belief. If the expert reports Alcoa’s prior belief, the contingent component of the expert’s fee equals zero regardless of the outcome, and the value of the contract just equals the fixed component. However, if the expert reports anything else, the minimum expected value of the contract is less than the fixed component. This is because in the worst-case scenario (essentially, when the true probability lies farther from the expert’s reported opinion than from Alcoa’s prior belief) the expected value of the contingent component is negative. This is the key insight behind Sandroni’s result. Hence, the contract incentivizes a charlatan, who does not know the true probability, to “do no harm” by simply reporting back Alcoa’s prior belief.
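The worst-case claim can also be checked numerically. In this sketch (single binary event, illustrative names), the minimum over the unknown true probability of the expected contingent component is zero when the expert reports the prior and strictly negative for any other report:

```python
# Sketch of the "do no harm" property: the charlatan does not know the
# true probability t, so she evaluates each possible report by the
# worst case, over all t in [0, 1], of the expected contingent
# component E[(prior - O)^2 - (report - O)^2], with O ~ Bernoulli(t).

def worst_case_contingent(report, prior):
    """Minimum over t of the expected contingent component."""
    def expected_contingent(t):
        return (t * ((prior - 1) ** 2 - (report - 1) ** 2)
                + (1 - t) * (prior ** 2 - report ** 2))
    # The expectation is linear in t, so the minimum sits at an endpoint.
    return min(expected_contingent(0.0), expected_contingent(1.0))

prior = 0.5
print(worst_case_contingent(prior, prior))  # 0.0: reporting the prior
print(worst_case_contingent(0.8, prior))    # negative: any deviation loses
```

So a maxmin charlatan, facing a negative worst case for every report except the prior, reports the prior and collects only the fixed fee.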
Of course, Sandroni proves the result at a higher level of abstraction in a more general framework. Importantly, he proves that the result holds for probability distributions over any finite set of outcomes (not just binary outcomes) and for any proper scoring rule (not just the Brier score).
Perhaps the strongest assumption underlying Sandroni’s result is the maxmin assumption for charlatans—i.e., the assumption that an uninformed expert seeks to maximize the minimum expected value of the contract—which is tantamount to extreme risk aversion in an expected utility framework.5 It must be said, however, that the maxmin criterion is a deeply rooted idea in decision theory. Abraham Wald developed it as a criterion for statistical decision problems in which the prior probability distribution is unknown.6 John Rawls invoked it as part of a normative theory of justice.7 Itzhak Gilboa and David Schmeidler proposed it as a model of choice under uncertainty when the decision maker is uncertainty averse.8 In the end, I tend to agree with Kevin Bryan, who had this to say about the maxmin assumption in a blog post about Sandroni’s article: “I wouldn’t worry too much about the [maxmin] assumption, since it makes quite a bit of sense as a utility function for a charlatan that must make a decision what to announce under a complete veil of ignorance about nature’s true distribution.”
- This is a version of the famous “lemons” problem. George A. Akerlof, The Market for “Lemons”: Quality Uncertainty and the Market Mechanism, 84 Q. J. Econ. 488 (1970). [↩]
- Glenn W. Brier, Verification of Forecasts Expressed in Terms of Probability, 78 Monthly Weather Rev. 1 (1950). [↩]
- See Wojciech Olszewski, Calibration and Expert Testing, in Handbook of Game Theory, Volume 4 (Peyton Young & Shmuel Zamir eds., forthcoming 2014). [↩]
- See Ronald J. Gilson & Reinier H. Kraakman, The Mechanisms of Market Efficiency, 70 Va. L. Rev. 549, 597 (1984). After all, how do you prove that a (non-degenerate) probabilistic forecast was incorrect? [↩]
- See John Rawls, Some Reasons for the Maximin Criterion, 64 Am. Econ. Rev. 141, 143 (1974), citing Kenneth J. Arrow, Some Ordinalist-Utilitarian Notes on Rawls’s Theory of Justice, 70 J. Phil. 245, 256-257 (1973). [↩]
- Abraham Wald, Statistical Decision Functions (1950). [↩]
- John Rawls, A Theory of Justice (1971). [↩]
- Itzhak Gilboa & David Schmeidler, Maxmin Expected Utility with Non-Unique Prior, 18 J. Mathematical Econ. 141 (1989). [↩]