By Matt McCoy
Learn more about Matt on NerdWallet’s Ask an Advisor
As part of the standard package of modern portfolio theory (MPT) statistics, standard deviation is the go-to measure of total risk within the investment community. While widely used and accepted by investors, in my experience it remains a misunderstood and often misused metric. Let's first walk through what exactly standard deviation measures.
Standard deviation measures the amount of variance or dispersion around the average return over a stated period of time. One of the underlying assumptions is that returns are normally distributed about the mean (just remember that famous bell curve from your statistics class). A smaller standard deviation means that returns are tightly dispersed around the average; in other words, most of the realized returns were relatively close to the average return.
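As a rough sketch of the calculation, using a hypothetical series of monthly returns (the numbers below are illustrative, not from any real fund):

```python
import statistics

# Hypothetical monthly returns (as decimals) for an illustrative fund
returns = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04, 0.00, 0.02, -0.03, 0.01, 0.02, 0.03]

mean_return = statistics.mean(returns)
# Sample standard deviation: how widely the returns are dispersed around the mean
std_dev = statistics.stdev(returns)

print(f"Average monthly return: {mean_return:.4f}")
print(f"Standard deviation:     {std_dev:.4f}")
```

The smaller `std_dev` is, the closer most of the realized returns sit to `mean_return`.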
Consider the phrase "variance or dispersion around the average" for a moment. The dispersion being measured lies both above and below the average. Our widely accepted measure of total risk not only incorporates positive outcomes; it makes no distinction between positive and negative outcomes at all, registering only that a return differed from the average. I don't know about you, but I have yet to see a definition of risk that includes positive outcomes.
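This symmetry is easy to demonstrate. In the hypothetical sketch below, fund B's returns are fund A's mirrored around their shared average, so A's surprises are all on the upside and B's are all on the downside, yet standard deviation rates them identically:

```python
import statistics

# Hypothetical returns for fund A: steady small gains plus occasional large gains
fund_a = [0.01, 0.01, 0.01, 0.06, 0.01, 0.06]
mean_a = statistics.mean(fund_a)

# Fund B mirrors every return of fund A around the shared average, so its
# deviations are all on the downside instead of the upside
fund_b = [2 * mean_a - r for r in fund_a]

# Same mean, same standard deviation: the metric cannot tell them apart
print(statistics.mean(fund_a), statistics.stdev(fund_a))
print(statistics.mean(fund_b), statistics.stdev(fund_b))
```

Most investors would happily take fund A over fund B, but by this measure of "total risk" they are indistinguishable.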
Another challenge with using standard deviation to measure total risk is its use of the average return as a reference point. I have multiple issues with using averages to measure anything (thanks to Sam L. Savage's "The Flaw of Averages" for confirming this for me); however, I will limit this discussion to the implications for standard deviation. Assuming an investment's average return is positive, part of the dispersion below the average still consists of positive returns. While a return that falls below the mean but remains positive could be labeled underperformance, I would not consider it risk in absolute terms. Granted, if you are using average returns within your planning assumptions, failure to achieve the average could pose a risk to reaching your financial goals. But I still argue that a positive return is not, by itself, a risk.
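One common alternative that addresses both issues is downside deviation (semideviation), which penalizes only returns that fall below a chosen target. The sketch below, again using hypothetical numbers, contrasts a below-mean target, which still penalizes positive returns that merely lag the average, with a below-zero target, which counts only actual losses:

```python
import statistics

# Hypothetical monthly returns
returns = [0.04, 0.02, -0.01, 0.03, -0.04, 0.05, 0.01, 0.02]
mean_return = statistics.mean(returns)

def downside_deviation(rets, target):
    # Root-mean-square of shortfalls below `target`; returns at or above
    # the target contribute nothing, unlike standard deviation
    shortfalls = [min(r - target, 0.0) ** 2 for r in rets]
    return (sum(shortfalls) / len(rets)) ** 0.5

# Measured against the mean: positive-but-below-average returns are penalized
print(downside_deviation(returns, mean_return))
# Measured against zero: only actual losses count
print(downside_deviation(returns, 0.0))
```

The choice of target encodes a definition of "risk"; standard deviation quietly makes that choice for you by anchoring to the average.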
The use of historical (or realized) data also poses a challenge. What if you use the past five years of data to measure volatility and current economic conditions are significantly different? Is that a fair basis for estimating future volatility? Likely not, yet investors consistently use historical volatility in an attempt to estimate future volatility. So does this mean that standard deviation is a useless measure that should never be used for any purpose? Absolutely not. We just need to understand its limitations and use it correctly. As human beings, most of us avoid risk (the probability of losing something of value) as much as possible, but as investors we can make volatility our friend (just ask anyone who trades options).
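The sensitivity to the chosen window is easy to see. In this hypothetical sketch, a calm stretch of monthly returns is followed by a turbulent one, and the volatility estimate changes dramatically depending on which slice of history you measure:

```python
import statistics

# Hypothetical monthly returns: a calm regime followed by a turbulent one
calm = [0.01, 0.00, 0.01, 0.02, 0.01, 0.00]
turbulent = [0.06, -0.05, 0.08, -0.07, 0.04, -0.06]
history = calm + turbulent

# The trailing volatility estimate depends entirely on the lookback window
vol_early = statistics.stdev(history[:6])    # calm regime only
vol_recent = statistics.stdev(history[-6:])  # turbulent regime only
vol_full = statistics.stdev(history)         # both regimes blended

print(vol_early, vol_full, vol_recent)
```

Three defensible windows, three very different "volatilities": none of them is a forecast, and each reflects the regime it happened to sample.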
Every financial and statistical measure has underlying assumptions that come with it. Understanding these assumptions is the key to understanding the limitations of each measure. Many of these measures are not meant to be used on a standalone basis; they require the use of other measures in order to see the whole picture.
So who's to blame for the confusion? The misunderstanding and misuse of financial and statistical measures can be blamed on each of us who hold ourselves out as financial professionals. While industry insiders (hopefully) understand the difference between volatility and risk, these terms continue to be used interchangeably. The industry needs to do a better job of communicating that risk is not a single number. We cannot reliably compress general market risk, credit risk, geopolitical risk, liquidity risk, inflation risk and industry risk into one meaningful number. The next time you hear standard deviation cited as total risk, just remember: volatility is not necessarily risk.