Every so often one runs across debates concerning "What's the best measure of risk?"
>And you'd like to enter the debate?
Why not?
Suppose we wanted a numerical measure of risk which was determined by stock returns.
What properties would you like this risk number to have?
How about these properties?
- If the returns were all doubled or tripled, then the risk should double or triple.
So, if all returns were positive, multiplying those positive returns by 2 or 3 would increase the risk.
- If 10% were added to all returns, then risk should be unchanged.
So, if my returns were always larger than yours by some positive constant (like 10%), our risks should be the same.
- Our numerical measure should assign a larger risk to one sequence of returns than to another.
[two charts comparing sample sequences of returns]
If you'd like risk to have these properties, then I'd recommend:
Risk = Standard Deviation
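Here's a minimal sketch (using numpy, with a made-up sequence of returns) checking that standard deviation does have the first two properties:

```python
import numpy as np

# A made-up sequence of annual returns (in percent), just for illustration.
returns = np.array([5.0, -3.0, 12.0, 8.0, -1.0])

risk = np.std(returns)

# Property 1: doubling every return doubles the risk number.
print(np.isclose(np.std(2 * returns), 2 * risk))   # True

# Property 2: adding 10% to every return leaves the risk number unchanged.
print(np.isclose(np.std(returns + 10.0), risk))    # True
```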
P.S. In the first chart, there is no uncertainty: the returns are 5%, 5%, 12%, repeated.
So if you wanted your definition to measure "uncertainty", you may not want Risk = Standard Deviation.
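For example, a quick check on that first chart's pattern (this uses numpy; the length of the repeated sequence is arbitrary):

```python
import numpy as np

# The first chart's pattern: 5%, 5%, 12%, repeated -- perfectly predictable.
returns = np.array([5.0, 5.0, 12.0] * 4)

# The standard deviation is well above zero, so Risk = Standard Deviation
# calls this sequence "risky" even though there is no uncertainty at all.
print(np.std(returns))   # roughly 3.3
```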