Probabilities, utility and decisions
(decision under risk and uncertainty)
Ariel M. Viale contribution (Aug.-Sept. 2000)
Choice Under Uncertainty: Where We Are, Where To Go?
"This is just a quick note to bring up the subject of utility theory,
a subject which you either love or hate.
In my case I used to hate it, now I love it."
Paul Wilmott, Derivatives Memo, June 2000.
"In fact preferences do enter the Black-Scholes formula, but in a subtle and indirect
In particular, the assumption that the underlying asset's price dynamics are governed
by a particular stochastic process - typically, geometric Brownian motion - restricts
the type of preferences that are possible."
Andrew W. Lo, "The Three P's of Total Risk Management", Feb. 1999.
Part A: Types of probabilities
A1) Objective Probabilities (also called "statistical" or "aleatory")
Objective probabilities are based on the notion of relative frequencies in repeated
experiments (e.g. coin tosses, rolls of the dice) or past statistics.
They have clear empirical origins and depend on the nature of the experiment, not on
the characteristics of the experimenter; hence the term "objective".
A2) Subjective Probabilities (also called "personal" or "epistemic")
We can imagine an individual possessing a certain level of conviction about the
likelihood of the event.
The level of conviction can be interpreted as a kind of probability.
It is a subjective one that can differ from one individual to another.
This "degree of belief" needs not be based on statistical phenomena such as repeated
coin tosses (we cannot conduct repeated trials of the event).
Despite their individualistic nature, subjective probabilities must obey the same
mathematical laws as objective probabilities; otherwise arbitrage opportunities arise.
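The arbitrage point can be sketched: if someone's subjective probabilities over a set of mutually exclusive, exhaustive events fail to sum to one, a "Dutch book" can be made against them. All names and numbers below are illustrative assumptions:

```python
# Sketch: a "Dutch book" against subjective probabilities that violate
# the laws of probability. All outcomes and numbers are illustrative.

# A bookmaker's beliefs over three mutually exclusive, exhaustive
# outcomes sum to only 0.90 instead of 1.0.
beliefs = {"A": 0.40, "B": 0.30, "C": 0.20}

stake = 100.0  # total amount we are willing to pay

# Buy a $1-payout claim on each outcome at its quoted probability.
# Cost of guaranteeing $1 no matter which outcome occurs:
cost_per_dollar = sum(beliefs.values())      # 0.90 < 1.0

# Exactly one claim pays off, so paying $0.90 for a sure $1.00 is a
# riskless profit -- an arbitrage against the bookmaker.
payout = stake / cost_per_dollar             # guaranteed receipt
profit = payout - stake
print(f"pay {stake:.2f}, receive {payout:.2f}, sure profit {profit:.2f}")
```

If the beliefs summed to more than one, the same trick works in reverse (sell the claims instead of buying them).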
A3) Conditional Probabilities and Bayes Law
Estimates of the probabilities of events may be continually revised; this updating
takes place when new information becomes available.
This belief revision follows a formula known as Bayes' law, after Thomas Bayes
(1702-61), English probabilist and theologian.
The probability of some proposition -p-, after receiving the information that some
proposition -q- has occurred, is its conditional probability.
It is written Pr(p|q) and read as "the probability of -p- given that -q- has occurred".
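A minimal numerical sketch of this updating rule (the proposition and all probabilities are invented for illustration):

```python
# Bayes' law: Pr(p|q) = Pr(q|p) * Pr(p) / Pr(q).
# Illustrative setup: a signal q that is more likely when proposition p
# ("recession", say) is true.

prior_p = 0.10                    # Pr(p): prior belief that p holds
likelihood_q_given_p = 0.80       # Pr(q|p)
likelihood_q_given_not_p = 0.20   # Pr(q|not p)

# Total probability of observing q:
pr_q = (likelihood_q_given_p * prior_p
        + likelihood_q_given_not_p * (1 - prior_p))

# Revised (posterior) belief after observing q:
posterior_p = likelihood_q_given_p * prior_p / pr_q
print(f"Pr(p) = {prior_p:.2f} -> Pr(p|q) = {posterior_p:.3f}")
```

Observing the signal roughly triples the probability assigned to the proposition, which is exactly the "continual revision" the text describes.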
A4) The Peso Problem Anomaly (rare events in finance)
Ex-ante "irrational behavior" evidences have been found in financial markets.
For example, the expectations hypothesis fails to explain the term structure behavior of
interest rates. But a rational answer can still be given within economic theory, through
the small-sample inference issue known as the "Peso Problem" (named after the
"Tequila" crises of the Mexican peso).
Narrowly defined, a peso problem arises when the data-generating
process includes a low-probability, usually catastrophic, state that brings extreme
disutility to economic agents (as in Rietz's explanation of the equity premium puzzle).
Here is the problem :
As this state has a low probability, it is unlikely to be observed in a given small
sample of data.
But, being catastrophic, the possibility that it may occur substantially affects
individuals' decisions, which determine equilibrium prices and rates of return.
In a broader sense, it is defined as arising whenever:
the ex-post frequencies of states within the data sample differ substantially
from their ex ante probabilities,
and these deviations distort econometric inference.
When a peso problem is present, the sample moments calculated from the available
data do not coincide with the population moments that individuals actually use when
making their decisions.
Part B : The economic aspect:
Utility and preferences
B1) The Expected Value Model
Blaise Pascal and Pierre de Fermat assumed in the 17th century that the attractiveness
of a gamble with given probabilities was its "expected value".
That is, the sum of the possible gains and losses weighted by their probabilities.
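A one-line illustration of the rule, using exact fractions (the gamble itself is invented):

```python
from fractions import Fraction

# Pascal-Fermat expected value: outcomes weighted by their probabilities.
# Illustrative gamble: roll a fair die, win $6 on a six, lose $1 otherwise.
outcomes = {6: Fraction(1, 6), -1: Fraction(5, 6)}
expected_value = sum(x * p for x, p in outcomes.items())
print(expected_value)   # 1/6
```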
B2) The Expected Utility Model
Nicholas Bernoulli (1728) proposed the expected utility hypothesis as an alternative to
this earlier, more restrictive theory of risk-bearing.
He demonstrated that individuals consider more than just expected value
(The St. Petersburg Paradox, see C2).
Gabriel Cramer and Daniel Bernoulli resolved the paradox by hypothesizing that each
individual possesses what we now call a
"von Neumann-Morgenstern" utility function U(x),
and evaluates gambles on the basis of their "expected utility" rather than their expected value.
As a theory of individual behavior, the expected utility model shares many of the
underlying assumptions of standard consumer theory:
(a) The objects of choice can be unambiguously and objectively described;
(b) Situations which ultimately imply the same set of availabilities will lead to
the same choice;
(c) The individual is able to perform the mathematical operations necessary to
determine the set of availabilities;
(d) Preferences are "transitive".
But the von Neumann-Morgenstern utility function is quite distinct from the ordinal
utility function of standard consumer theory: it is "cardinal".
Its shape determines risk attitudes, and its monotonicity reflects the property of
stochastic dominance preference, the equivalent of "more is better".
The key property of the hypothesis, encompassing all these assumptions, is
"linearity in the probabilities" (the independence axiom).
B3) Non-expected utility preferences (see also Part C)
The paradoxes shown in part C entail violations of linearity in the probabilities.
One of the earliest and best-known examples of systematic violation of the axiom
supporting expected utility theory is the Allais (1953/1979) paradox: indifference
curves are not parallel but rather "fan out".
The common ratio effect is a second class of systematic violation.
These violations of the linearity in the probabilities have led researchers to "generalize"
the expected utility model with non-linear functional forms for the individual
preference function: Edwards(1955), Kahneman & Tversky (1979), Karmarkar
(1978), Chew (1983), Fishburn (1983), Machina (1982), and Hey (1984).
Many of these "forms"
are flexible enough to exhibit the properties of stochastic dominance preference,
risk aversion/risk preference and fanning out,
have proven to be useful both theoretically and empirically.
Such forms allow for the modeling of preferences in a more general way than the
expected utility model hypothesis. But each of them require a different set of conditions
on its component functions.
Do these non-expected utility preferences require us to abandon the vast body of
theoretical results and intuition developed within the expected utility framework?
The answer is no. An alternative approach to analyzing non-expected utility preferences
does not adopt a "specific" non-linear form, but rather:
considers non-linear functions "in general",
and uses calculus to extend results from expected utility theory.
Researchers have shown how to apply such techniques to extend the expected utility
theory results to the case of non-expected utility preferences, so as to conduct new
and more general analyses of economic behavior under uncertainty.
But these models do not provide solutions to the following phenomena:
B4) The Preference Reversal Phenomena
The finding known as the "preference reversal phenomena" was first reported by
psychologists Slovic and Lichtenstein (1971).
Individuals first presented with a number of pairs of bets and asked to choose
one bet out of each pair show a systematic tendency to violate the predictions of the
expected utility model and of each of the non-expected utility models
(choosing the higher-certainty bet while valuing the other more highly).
The phenomenon has been found to persist (but in mitigated form) even when subjects
are allowed to engage in experimental market transactions involving gambles (Knez and
Smith, 1986), or when the experimenter can act as an arbitrageur and make money from
such reversals (Berg, Dickhaut and O'Brien, 1983).
An economist would explain these cyclic or intransitive preferences by the joint
hypothesis that:
each individual possesses a well-defined preference relation over lotteries, and
information about this relation can be gleaned from either direct choice
questions or valuation questions.
Psychologists would deny the premise of a common mechanism generating both
choice and valuation behavior. Rather, they view choice and valuation as distinct
processes, subject to possibly different influences. In other words, individuals
exhibit "response mode effects".
This issue is not new to economics, and economists have begun to develop and
analyze models of "non-transitive preferences" over lotteries. The leading example
is the "expected regret model", developed independently by Bell (1982) and Loomes
& Sugden (1982).
B5) The Expected Regret Model
In this model of pair-wise choice, the von Neumann-Morgenstern utility function
-U(x)- is replaced by a "regret/rejoice" function -r(x,y)-.
This function gives the level of satisfaction (or, if negative, dissatisfaction) the individual
would experience on receiving the outcome -x- when the alternative choice
would have yielded the outcome -y- (the function is assumed to satisfy r(x,y) = -r(y,x)).
The expected utility model is a special (reduced) form of this more general model, in which
r(x,y) = U(x) - U(y).
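A minimal sketch of pair-wise choice under the regret model; the utility function, the regret function's cubic "regret aversion" term, and the lotteries are all illustrative assumptions:

```python
import math

# Sketch of the expected regret model for pair-wise choice.
# The regret/rejoice function must be skew-symmetric: r(x, y) == -r(y, x).
# Both r and the lotteries below are illustrative assumptions.

def U(x: float) -> float:
    return math.log(x)

def r(x: float, y: float) -> float:
    d = U(x) - U(y)
    return d + 0.5 * d ** 3   # skew-symmetric; cubic term amplifies large regrets

# Two lotteries defined over the same equally likely states.
probs = [0.5, 0.5]
lottery_f = [100.0, 100.0]    # a sure thing
lottery_g = [50.0, 210.0]     # a gamble around it

# Choose f over g iff the expected regret/rejoice of f versus g is positive.
expected_r = sum(p * r(x, y) for p, x, y in zip(probs, lottery_f, lottery_g))
print("choose f" if expected_r > 0 else "choose g")
```

With r(x,y) = U(x) - U(y) the model collapses back to expected utility; the extra cubic term is what lets it accommodate intransitive pair-wise choices.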
B6) Framing Effects
What if it turns out that "framing", or what psychologists call "response mode
effects", is a real-world phenomenon of economic relevance?
For example, the method used to display unit-price information affects both
the level and the distribution of individual choices.
More generally, the answer to a question is strongly influenced by the way the
question is formulated.
What if, in particular, individuals' frames cannot always be observed?
Again, the answer is in the "tool box" of economic analysis.
If the frame of a decision problem can at least be independently observed, its treatment
can draw on the concept of "uninformative advertising".
This term is hard to define formally, as economic theorists (and experimental
economists) are still developing an alternative, more general, theoretical construction
for consumer theory.
In a few words, we can quantify the "frame" and treat it as an additional
independent variable in the (traditional) utility and/or demand function.
The standard results, like the Slutsky equation, need not be abandoned, but rather
reinterpreted as properties of demand functions "holding this new variable constant".
In cases when decision frames can be observed, framing effects could
presumably be modeled in an analogous manner.
We would first try to quantify, or at least categorize, frames.
The second step would be to study the effect of this new variable, and
conversely, to retest standard economic theories under the new variable.
The next step would be to ask "who determines the frame?".
If it is the firm, its effects can be incorporated into the firm's maximization problem.
It is more difficult if an individual chooses the frame (i.e. a reference
point) and this choice cannot be observed.
In other words, this assumes that individuals make choices as part of a "joint
maximization problem", with a component (the choice of frame or reference
point) that cannot be observed. In this case the "theory of induced preferences"
can help derive testable implications about the individual's choice (Milne, 1981,
and Machina, 1984).
None of this implies that we have already solved the problems found in
the framework of "choice under uncertainty", or that there is no need to adapt and, if
necessary, abandon existing models.
Rather, it reflects the view that when psychologists have collected enough evidence on
how these effects operate, economists will be able to respond accordingly.
B7) Is probability theory relevant?
All the evidence discussed dealt with individuals presented with explicit or "objective"
probabilities as part of their decision problems. The models shown were thus defined
over objective probability distributions.
The behavioral finance and psychology literatures now offer ample
evidence of individuals' systematic mistakes in revising probabilities.
Kahneman & Tversky (1973), Bar-Hillel (1974) and Grether (1980) have found
that "probability updating" systematically departs from Bayes' law, in the direction of:
underweighting prior information, and
overweighting the "representativeness" of current data.
Kahneman & Tversky (1971) termed this "the law of small numbers":
individuals overestimate the probability of drawing a
perfectly representative sample out of a heterogeneous population.
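The bias is easy to quantify in the simplest case: a fair coin's most "representative" small sample, exactly half heads, is in fact fairly unlikely:

```python
from math import comb

# "Law of small numbers": individuals overestimate how likely a small
# sample is to look perfectly representative. For a fair coin, the
# chance that 10 tosses give exactly 5 heads is under 25%.
n, k = 10, 5
p_exact_half = comb(n, k) / 2 ** n
print(f"P(exactly {k} heads in {n} tosses) = {p_exact_half:.3f}")
```

So more than three times out of four, even a perfectly fair coin produces a 10-toss sample that does not "look like" the population.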
Although all the evidence indicates that individuals asked to formulate
probabilities do not do so correctly, these findings may be rendered moot by
evidence suggesting that when individuals are
making decisions under uncertainty
but are "not" explicitly asked to form subjective probabilities, they might not
do so (or even act as if doing so) "at all": the Ellsberg paradox.
In the case of "complete" ignorance regarding probabilities, Arrow & Hurwicz
(1972), Maskin (1979) and others have presented axioms that imply principles
such as ranking options solely on the basis of their worst and/or best outcomes
(e.g. maximin, maximax), or on the unweighted average of their outcomes
(the "principle of insufficient reason").
Schmeidler (1986) and Segal (1987) developed generalizations of expected
utility theory that drop the standard additivity and/or compounding laws of
probability theory.
A well-known approach to the analysis of behavior under uncertainty that uses no
probabilities at all is the Arrow (1953/1964) "state-preference" model.
Uncertainty is represented there by a set of mutually exclusive and exhaustive
"states of nature".
Part C : Risk management
and decision paradoxes
C1) The three P's of risk management
Among the paradoxes in Part C, the Ellsberg paradox illustrates succinctly the importance
of all three P's of risk management:
how much one is willing to pay for each gamble (prices),
the odds of drawing red or black (probabilities),
which gamble to take and why (preferences).
C2) Risk & Uncertainty
The Ellsberg paradox also suggests that individuals have a preference regarding the
uncertainty of risk. This statement may seem circular (Roget's International
Thesaurus lists risk and uncertainty as synonyms), so it is better to recall Knight's (1921)
distinction between risk and uncertainty:
Risk is the kind of randomness that can be modeled by quantitative methods (e.g.,
mortality rates, casino gambling, equipment failure rates).
The remaining hazards, those that cannot be quantified, constitute uncertainty.
Knight used this distinction to explain the seemingly disproportionate profits that accrue
to entrepreneurs (they bear uncertainty which, according to Knight's theory, carries a
much greater reward than simply bearing risk).
But it also has significant implications for risk management.
C3) St. Petersburg Paradox
A fair coin is tossed until it comes up heads, at which point the individual is paid a prize
of $2^k, where -k- is the number of times the coin was tossed. Because the probability of
tossing heads for the first time on the kth flip is 1/2^k, the expected value of this gamble
is infinite.
Yet individuals are typically only willing to pay between $2 and $4 to play; hence the paradox.
Bernoulli (1738) resolved it by asserting that gamblers do not focus on the expected gain
of a wager, but, rather, on the expected logarithm of the gain (the value in use or the
gambler's expected utility).
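Bernoulli's log-utility resolution can be checked numerically: the gamble's expected log payoff converges, and its certainty equivalent works out to about $4, in line with what individuals typically offer (a sketch, with the infinite sum truncated at k = 59):

```python
import math

# Bernoulli's resolution of the St. Petersburg paradox: value the gamble
# by its expected log payoff. The prize is 2**k with probability 1/2**k,
# so although the expected value diverges, the expected log utility is
#   E[ln payoff] = sum_k (1/2**k) * ln(2**k) = 2 * ln 2.
expected_log = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, 60))

# Certainty equivalent: the sure amount yielding the same utility.
certainty_equivalent = math.exp(expected_log)   # about $4
print(certainty_equivalent)
```

A $4 certainty equivalent for a gamble of infinite expected value is the whole paradox, resolved, in one number.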
C4) Allais Paradox "Fanning Out"
Consider choosing between two alternatives, A1 and A2, where:
A1: a sure gain of $1,000,000
A2: $5,000,000 with probability 0.10
    $1,000,000 with probability 0.89
    $0 with probability 0.01
Now consider the following two alternatives, B1 and B2, where:
B1: $5,000,000 with probability 0.10
    $0 with probability 0.90
B2: $1,000,000 with probability 0.11
    $0 with probability 0.89
If, like most individuals presented with these two binary choices, you chose A1 and B1,
your choices are inconsistent with expected utility theory.
A preference for A1 over A2 implies that the expected utility of A1 is strictly larger
than that of A2.
A preference for B1 over B2 implies exactly the opposite inequality, so the two choices
contradict each other (the indifference curves are not parallel but rather "fan out").
To be consistent with expected utility theory,
A1 is preferred to A2, if and only if B2 is preferred to B1.
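The contradiction can be made explicit. Normalizing U($0) = 0 and writing u1 = U($1,000,000), u5 = U($5,000,000), a sketch that searches over arbitrary (randomly drawn) utility values never finds one choosing both A1 and B1:

```python
import random

# Under expected utility, with U(0) = 0, u1 = U(1M), u5 = U(5M):
#   A1 over A2  <=>  u1 > 0.10*u5 + 0.89*u1  <=>  0.11*u1 > 0.10*u5
#   B1 over B2  <=>  0.10*u5 > 0.11*u1
# The two inequalities are contradictory for any utility function.
random.seed(0)
for _ in range(10_000):
    u1 = random.uniform(0.0, 1.0)
    u5 = random.uniform(u1, 2.0)     # any increasing U has u5 >= u1
    prefers_A1 = u1 > 0.10 * u5 + 0.89 * u1
    prefers_B1 = 0.10 * u5 > 0.11 * u1
    assert not (prefers_A1 and prefers_B1)   # never both
print("no expected-utility maximizer chooses both A1 and B1")
```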
As seen below in C6, Kahneman and Tversky (1979) addressed the puzzle with an
alternative to expected utility theory: prospect theory.
C5) Common Ratio Effect
Consider the phenomenon involving pairs of prospects of the form:
C1: p chance of $X
    1-p chance of $0
C2: q chance of $Y
    1-q chance of $0
C3: rp chance of $X
    1-rp chance of $0
C4: rq chance of $Y
    1-rq chance of $0
where p > q, 0 < X < Y and -r- lies between 0 and 1.
Studies have found a tendency for choices to depart from the predictions of
expected utility theory, in the direction of preferring C1 and C4.
This again suggests that indifference curves fan out.
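The expected utility prediction behind this test is that multiplying both winning probabilities by the same r > 0 cannot reverse the ranking (normalizing U($0) = 0). A quick numerical sketch, with an illustrative utility function and illustrative parameter values:

```python
import math

# Under expected utility with U(0) = 0:
#   C1 over C2  <=>  p*U(X) > q*U(Y)
#   C3 over C4  <=>  r*p*U(X) > r*q*U(Y)
# For r > 0 these are the same inequality, so preferring C1 and C4
# together is inconsistent with expected utility.

def U(x: float) -> float:
    return math.sqrt(x)          # illustrative utility, U(0) = 0

p, q, X, Y, r = 0.8, 0.6, 100.0, 200.0, 0.25

eu_c1, eu_c2 = p * U(X), q * U(Y)
eu_c3, eu_c4 = r * p * U(X), r * q * U(Y)

# The ranking of the scaled pair always matches the unscaled pair.
print((eu_c1 > eu_c2) == (eu_c3 > eu_c4))   # True
```

The observed C1-and-C4 pattern therefore violates linearity in the probabilities, just as the Allais choices do.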
C6) Prospect Theory
Individuals focus more on "prospects" - gains and losses - than on total wealth, and the
"reference point" from which gains and losses are calculated can change over time.
Moreover, they view gains quite differently from losses.
They are risk averse when it comes to gains and risk seeking when it comes to losses.
For example, consider choosing between the two gambles below:
C1: a sure gain of $240,000
C2: $1,000,000 with probability 0.25
    $0 with probability 0.75
Despite the fact that C2 has a higher expected value than C1, most individuals seem to
gravitate toward the sure gain, a natural display of risk aversion.
But now consider choosing between the following two gambles:
D1: a sure loss of $750,000
D2: a loss of $1,000,000 with probability 0.75
    $0 with probability 0.25
In this case, most individuals choose D2, although it is clearly a riskier alternative than
D1; this choice implies a utility function that is "abnormally" convex over losses, because of
the "certain loss aversion effect".
The apparent asymmetry in preferences could disappear if the alternatives are
presented in a different way to gamblers:
C1 and D2: $240,000 with probability 0.25
           -$760,000 with probability 0.75
C2 and D1: $250,000 with probability 0.25
           -$750,000 with probability 0.75
It is now clear that C2 and D1 dominate C1 and D2; presented in this way, and without
reference to any auxiliary conditions or information, no rational individual would choose
C1 and D2 over C2 and D1.
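The dominance can be verified mechanically by combining the outcome distributions of the paired gambles (a small sketch; the probabilities and payoffs are those given above):

```python
import itertools

# Combining the prospect-theory gambles shows the dominance directly.
# C1 = sure +240,000; C2 = +1,000,000 w.p. 0.25, else 0.
# D1 = sure -750,000; D2 = -1,000,000 w.p. 0.75, else 0.

def combine(g1, g2):
    """Outcome distribution of taking two independent gambles,
    each given as a list of (probability, payoff) pairs."""
    return sorted((p1 * p2, x1 + x2)
                  for (p1, x1), (p2, x2) in itertools.product(g1, g2))

C1 = [(1.0, 240_000)]
C2 = [(0.25, 1_000_000), (0.75, 0)]
D1 = [(1.0, -750_000)]
D2 = [(0.75, -1_000_000), (0.25, 0)]

print(combine(C1, D2))   # [(0.25, 240000), (0.75, -760000)]
print(combine(C2, D1))   # [(0.25, 250000), (0.75, -750000)]
```

State by state, C2-and-D1 pays $10,000 more than C1-and-D2 with the same probabilities, which is the dominance the text asserts.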
One objection to this conclusion is that the two binary choices were offered separately,
not as a single combined decision.
C7) Ellsberg Paradox
Two statistically equivalent gambles seem to be viewed very differently by the typical
individual:
in gamble E1, you have to choose a color, red or black, and then draw a single
ball from an urn containing 100 balls, 50 red and 50 black. If you draw a ball of your
color, you receive a $10,000 prize; otherwise you receive nothing.
the terms of gamble E2 are identical except you draw a ball from a different urn,
one containing 100 red and black balls but in unknown proportion.
both gambles cost the same - say $5,000 - and you must choose one,
which would you choose?
For most of us, gamble E2 appears significantly less attractive than E1, despite the fact
that the probability of picking either color is identical in both gambles: 0.50.
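The 0.50 figure for the unknown urn can be checked: under a symmetric belief over the urn's composition (a uniform prior is used here purely for illustration), the chance of drawing your chosen color is exactly one half, the same as in E1:

```python
from fractions import Fraction

# In gamble E2 the urn's composition is unknown. Under a uniform prior
# over compositions (k red balls out of 100, for k = 0..100), the
# probability of drawing your chosen color is still exactly 1/2.
n = 100
prior = Fraction(1, n + 1)                 # uniform over the 101 compositions
p_red = sum(prior * Fraction(k, n) for k in range(n + 1))
print(p_red)   # 1/2
```

The same 1/2 results for any prior that is symmetric in red and black, so the aversion to E2 cannot come from the probabilities themselves.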
This is not to say that individuals who express a preference for E1 are irrational, but
rather that they must be incorporating other information (hypotheses, biases, or
heuristics) into the decision.
Whether or not it is rational to include such auxiliary considerations in one's decision-
making process depends, of course, on how relevant the material is to the specific
context in which the decision is to be made.
As no single decision rule can be optimal for all circumstances, it should come as no
surprise that learned responses that are nearly optimal in one context can be far from
optimal in another.
The Ellsberg paradox suggests individuals have a preference regarding the uncertainty
of risk (see above the definition of risk / uncertainty).
It also illustrates succinctly the importance of all three P's of risk management: how
much one is willing to pay for each gamble (prices), the odds of drawing red or black
(probabilities), and which gamble to take and why (preferences).
C8) Economic efficiency / adaptation / evolution
Economic systems allocate scarce resources to our multiple needs by mutating,
adapting, and evolving.
In the end, economic institutions and conventions are merely another set of
adaptations that evolution has given us.
References:
Andrew W. Lo, "The Three P's of Total Risk Management", 1999.
Mark J. Machina, "Choice Under Uncertainty: Problems Solved and Unsolved", 1989.