Utility Theory and its uses (maybe)
One often attempts to find some optimal strategy by varying certain parameters.
- Perhaps the problem is to find the optimal allocation of assets between stocks and bonds,
given the Mean and Volatility of each asset.
- Perhaps it's to decide the optimal rebalancing period
(6 months? 12 months? 18 months?), given the characteristics of the individual assets.
- Perhaps it's to find the optimal ...
>Can you just give us an example?
Sure. Suppose we want to find the optimal allocation of money and we have two choices,
namely two stocks whose
annual Gain Factors have averaged G1 and G2 respectively
(where, for a 12% return we put G = 1.12).
>Gain Factor? Explain, please.
I mean that, if you invested $1.00 in a stock which has a Gain Factor of 1.12 then you'd
expect to have $1.12 after one year.
Okay. Suppose:
- Stock #1 will give us G1 = $2.00 (after one year) with a
probability of p = 0.75 (meaning a 75% probability) ... and 25% of the time you'd get nothing.
- Stock #2 will give us G2 = $3.00 with a probability of q = 0.40
so 40% of the time you'd get $3.00 and 60% of the time you'd get nothing.
- What stock would you choose, to invest your $1.00?
>Uh ... I have no idea. Besides, it's ridiculous to get nothing and ...
Pay attention.
- If you invested in stock #1, the Expected Value of your $1.00, after one year, is
p G1 = 0.75 x 2.00 = $1.50
- If you invested in stock #2, the Expected Value of your $1.00 is
q G2 = 0.40 x 3.00 = $1.20
>Probability x Value? Is that how you do it?
Yes, it's the "Expected Value", meaning: if you played this game 1000 times and there was a
75% chance of winning $2.00, then you'd expect to win about 750 times, collecting 750 x $2.00, which gives an
average win of 750 x $2.00 / 1000 or 0.75 x $2.00 and that's ...
>That's Probability x Value. Okay, so I invest in stock #1, right?
If you do, then you're maximizing the
Expected Value of your portfolio. That seems a
reasonable thing to do, eh?
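If you'd rather let a computer do the "Probability x Value" arithmetic, here's a
minimal Python sketch (the name expected_value is just for illustration):

    # Expected Value = sum of Probability x Payoff over all possible outcomes
    def expected_value(outcomes):
        """outcomes: a list of (probability, payoff) pairs."""
        return sum(p * v for p, v in outcomes)

    stock1 = [(0.75, 2.00), (0.25, 0.00)]   # $2.00 with 75% probability, else nothing
    stock2 = [(0.40, 3.00), (0.60, 0.00)]   # $3.00 with 40% probability, else nothing

    print(expected_value(stock1))   # 1.50
    print(expected_value(stock2))   # 1.20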
However, let's consider another problem and see what you'd do, using
this "Maximize the Expected Value" ritual in order to
decide on a strategy:
You have a choice:
- You can receive $1,500 guaranteed - that's 100% probability.
- You can receive $4,000 with a probability of 25%.
Which do you choose?
>Let me do it:
Expected Value for choice #1 is $1,500, because it's guaranteed.
Expected Value for choice #2 is 25% x $4,000 or $1,000.
I choose #1.
Very good. Now here's another problem:
- You can receive $1,000 with probability of 95%.
- You can receive $3,000 with a probability of 40%.
Which do you choose?
>Expected Values are 0.95 x $1000 = $950 and
0.40 x $3,000 or $1,200, so I choose #2.
Really? Would you?
>Well, in real life, I'd actually choose #1 because it's
almost guaranteed, but the correct answer, I'm sure, is #2 because the
Expected Value is greater, eh?
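For the record, the same expected_value sketch grinds through both problems
(it's defined again here so the snippet runs on its own):

    def expected_value(outcomes):
        """outcomes: a list of (probability, payoff) pairs."""
        return sum(p * v for p, v in outcomes)

    print(expected_value([(1.00, 1500)]))             # 1500.0  guaranteed
    print(expected_value([(0.25, 4000), (0.75, 0)]))  # 1000.0
    print(expected_value([(0.95, 1000), (0.05, 0)]))  #  950.0
    print(expected_value([(0.40, 3000), (0.60, 0)]))  # 1200.0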
Okay, now for our last problem, one considered by one of the
Bernoulli clan:
We toss a coin n times. As soon as it comes up heads, the game ends.
If it comes up heads on the 1st toss, you get $2^1 = $2.00
If it comes up heads after 2 tosses, you get $2^2 = $4.00
If it comes up heads after 3 tosses, you get $2^3 = $8.00 etc. etc.
Remember. The payoff keeps doubling, but as soon as it comes up heads, the game stops.
What's the Expected Value of your winnings?
>Uh ... I give up.
- The 1-toss value is $2.00 and the probability of ending after 1 toss is 1/2
so the Expected Value after 1 toss is "Probability x Value" = (1/2)$2 = $1.00
- The 2-toss value is $4.00 and the probability of ending after 2 tosses is 1/4
so the Expected Value after 2 tosses is "Probability x Value" = (1/4)$4 = $1.00
- The 3-toss value is $8.00 and the probability of ending after 3 tosses is 1/8
so the Expected Value after 3 tosses is "Probability x Value" = (1/8)$8 = $1.00
>Okay, so the expected winnings is ... uh ... it's ...
It's the sum of all these Expected Values, namely $1.00 + $1.00 + $1.00 + etc.
>If I'm allowed n tosses, the Expected Value is $n, right?
Yes, so suppose you're allowed ten thousand tosses. Would you pay $10,000 for the privilege to
play this game, realizing that your expected winnings are $10,000?
>Are you kidding?
But if the game lasted forty tosses, you'd win $2^40 and that's over a trillion dollars!
>Forget it!
Okay. Would you pay $500?
>No way!
So what would you pay?
>I dunno. Maybe $25.
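By the way, a simulation shows why $25 isn't a crazy answer. Here's a rough Python
sketch (a Monte Carlo experiment, so your numbers will vary a bit): the average win
is driven by rare HUGE payoffs, while a typical game pays only a few dollars.

    import random

    def play_once():
        """Toss until heads; the payoff doubles with each toss: $2, $4, $8, ..."""
        payoff = 2.0
        while random.random() < 0.5:   # tails: the game continues, payoff doubles
            payoff *= 2.0
        return payoff

    wins = sorted(play_once() for _ in range(100_000))
    print("average win:", sum(wins) / len(wins))   # unstable: a few huge wins dominate
    print("median win: ", wins[len(wins) // 2])    # typically just $2.00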
Suppose we replace the payoffs of $2, $4, $8, etc. by their
Utility, namely U(2), U(4), U(8), etc.
so that the sum: Probability x Value is replaced by the sum:
Probability x Utility and ...
>What!?
Pay attention. We get (1/2)U(2) + (1/4)U(4) + (1/8)U(8) + ...
>What's that Utility guy?
We'll assume that U(x) = α log(x) with α = 20 so the sum ...
>20? Why 20?
You can pick your own number, but one often uses Utility Theory to compare two or
more possible choices so the actual numbers aren't as important as their ratios. Is the
utility of this greater than the utility of that? Anyway, using 20, the sum is:
20 { (1/2) log(2) + (1/2^2) log(2^2) + (1/2^3) log(2^3) + ... }
= 20 Σ (1/2^n) log(2^n) = 20 Σ n log(2) / 2^n = 20 (2 log 2)
= $27.73
so, would you pay $27.73?
>I assume you're talking log_e, the natural log, but yeah, sure. I'd pay $27.73 and, in fact, I'd go for $30 and maybe even ...
Well, then your personal bias is consistent with what Bernoulli said, in the 18th century.
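In case you'd like to check that $27.73, here's a quick Python sketch (a couple of
hundred terms is plenty, since the terms shrink like n/2^n):

    from math import log

    alpha = 20.0
    # sum of Probability x Utility: (1/2^n) * alpha * log(2^n) for n = 1, 2, 3, ...
    fair_price = sum(alpha * n * log(2) / 2**n for n in range(1, 200))
    print(round(fair_price, 2))   # 27.73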
Figure 1: the St. Petersberg game
Bernoulli said that real people wouldn't consider the
Expected Value of the payoffs but are more likely to attach to each payoff a value which is
the logarithm of that payoff, then average those: the Expected Utility. This idea of using some alternative to the
raw payoff (in Bernoulli's case, its logarithm) is the basis of
Utility Theory. This theory would attach a personalized bias
to the calculation of "Value". It was an idea which contrasted with the notion of, say,
Fermat who, in the 17th century, assumed a gambler
would choose the option which maximized the Expected Value.
>I get it! Forget statistical "Expected Value" and give me the logarithm!
The logarithm is just one example of a Utility Function.
You may have some other personalization, some other definition of optimal or best choice.
>Why St. Petersberg, in Figure 1?
That's the name of the problem: the St. Petersburg Paradox or, if you prefer, the
St. Petersberg Paradox. (A paradox because the
Expected Return is infinite yet no rational person would pay a fortune to play the game).
It was posed by Nicholas Bernoulli
and solved by his cousin Daniel Bernoulli who introduced this logarithmic
Utility function. Indeed, Daniel wanted a diminishing
marginal utility, a function which increased less and less rapidly (the graph is concave
DOWN, like the graph in Figure 2) such that the infinite series would converge to a
finite value. We could choose another Utility Function, but we'd want to guarantee a finite sum:
Σ p(x_n) U(x_n)
where p(x_n) is the probability of the outcome x_n,
and U(x_n) the Utility of x_n.
Figure 2: a concave DOWN Utility Function
For example, you could choose the square root: U(x) = SQRT(x) which is also concave down.
The amount you'd pay (to play the Petersburg Game) would then be:
(1/2)sqrt(2) + (1/2^2)sqrt(2^2) + (1/2^3)sqrt(2^3) + ...
which (as an infinite sum) adds to 1 + sqrt(2), about $2.41, so ...
>I'd pay $25 to $30. I already said that.
So, pick your own personalized Utility!
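The same check for the square-root Utility, if you're curious (Python again):

    from math import sqrt

    # sum of Probability x Utility: (1/2^n) * sqrt(2^n) = 2^(-n/2) for n = 1, 2, 3, ...
    fair_price = sum(sqrt(2**n) / 2**n for n in range(1, 200))
    print(round(fair_price, 2))   # 2.41, which is 1 + sqrt(2)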
Besides, we're trying to explain human behaviour as it relates to risk and return. One is
unwilling to pay a great sum for the chance to win a HUGE sum if the chances are slim.
Lotteries which pay millions don't charge hundreds of dollars. So, it's not the amount
of money that's important, it's our perception ... it's the Utility
we attach to the money. It's kind of like the dollar value of a car. As the dollar value goes
up, the quality goes up. You get what you pay for ... more expensive, better built ... but a
$100K car doesn't have twice the quality of a $50K car so the quality doesn't go up as fast as
the dollar value. We could attach a Quality value as a function of the dollar value,
say Q(x) where $x is the dollar value and maybe Q(x) = sqrt(x) so that ...
>I haven't the faintest idea what you're talking about.
Okay, let's consider the SQRT(x) Utility Function and how it relates to possible winnings.
Suppose we stand to win either $50 or $150.
Is a $150 winning three times more tempting than a $50 winning?
>Yes!
And would you attach the same importance to losing $50 (from $100 to $50) as to
winning $50 (from $100 to $150)?
>I'd rather win. I hate to lose. In fact ...
Okay. With this SQRT Utility, starting at $100, the utility drops from SQRT(100) = 10.00 to SQRT(50) = 7.07 in losing. That's a
drop of 2.93, but in winning the $50 the utility goes from 10.00 to SQRT(150) = 12.25, an increase of 2.25, so
this utility attaches a greater importance to losing than to winning and that's ...
Figure 3
Figure 4
>Right! I hate to lose. In fact ...
So, choosing an appropriate Utility Function reflects the importance that humans place on
money. We're more interested, NOT in the value of money, but in the value of its Utility.
>That's what I'm trying to tell you! I hate to lose. In fact ...
I understand. You're very risk averse, so pick another Utility Function, okay?
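To see that win/lose asymmetry in numbers, here's a small Python sketch using the
SQRT Utility and the $100 starting point from above:

    from math import sqrt

    U = sqrt   # our (risk-averse) Utility Function

    start = 100.0
    print(U(start) - U(start - 50))   # losing $50 costs 2.93 in Utility
    print(U(start + 50) - U(start))   # winning $50 gains only 2.25 in Utility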
>That Risk seeking curve? It's not concave down!
That's okay, so long as it doesn't increase too rapidly. We'll talk about that ... later.
In the meantime, we'll assume it's always increasing (meaning "more is always better").
Continued in Part II.