Explaining Inaccuracy


Flyvbjerg, Holm, and Buhl (2002, 2004, 2005) and Flyvbjerg and COWI (2004) tested technical, psychological, and political-economic explanations for inaccuracy in forecasting. Technical explanations are the most common in the literature, explaining inaccuracy in terms of unreliable or outdated data and the use of inappropriate forecasting models (Vanston and Vanston 2004: 33). However, when such explanations are put to empirical test, they do not account well for the available data. First, if technical explanations were valid, one would expect the distribution of inaccuracies to be normal or near-normal with an average near zero. Actual distributions of inaccuracies are consistently and significantly non-normal, with averages that are significantly different from zero. Thus the problem is bias, not inaccuracy as such. Second, if imperfect data and models were the main explanations of inaccuracies, one would expect accuracy to improve over time, since in a professional setting errors and their sources would be recognized and addressed, for instance through referee processes at scholarly journals and similar critical expert reviews. Undoubtedly, substantial resources have been spent over several decades on improving data and forecasting models. Nevertheless, this has had no effect on the accuracy of forecasts, as demonstrated above. This indicates that something other than poor data and models is at play in generating inaccurate forecasts. This finding has been corroborated by interviews with forecasters (Flyvbjerg and COWI 2004; Flyvbjerg and Lovallo in progress; Wachs 1990).
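The statistical argument above can be made concrete with a short sketch. A minimal check, using hypothetical cost-overrun figures (the numbers below are illustrative, not Flyvbjerg's data), is to compute the mean, skewness, and a crude t-statistic of the forecast errors: under the technical explanation one would expect a mean near zero and a roughly symmetric spread, whereas bias shows up as a mean significantly above zero and a skewed distribution.

```python
import statistics

def bias_summary(errors):
    """Summarize a sample of forecast errors (actual minus forecast,
    as fractions of the forecast). Under the 'technical' explanation
    the mean should be near zero and the distribution near-symmetric."""
    n = len(errors)
    mean = statistics.fmean(errors)
    sd = statistics.stdev(errors)
    # Sample skewness: average third standardized moment.
    skew = sum(((e - mean) / sd) ** 3 for e in errors) / n
    # Crude t-statistic for H0: mean error = 0.
    t_stat = mean / (sd / n ** 0.5)
    return {"mean": mean, "skew": skew, "t": t_stat}

# Hypothetical cost-overrun fractions for ten projects (illustrative only):
overruns = [0.05, 0.10, 0.45, 0.20, 0.90, 0.15, 0.30, 0.60, -0.05, 0.25]
summary = bias_summary(overruns)
print(summary)  # a positive mean and positive skew point to bias, not noise
```

A sample like this, with a clearly positive mean and a long right tail, is the pattern the studies cited above report for actual project data: the errors are not random noise around zero but systematic underestimation.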

Psychological and political explanations better account for inaccurate forecasts. Psychological explanations account for inaccuracy in terms of optimism bias, that is, a cognitive predisposition found in most people to judge future events in a more positive light than is warranted by actual experience. Political explanations, on the other hand, explain inaccuracy in terms of strategic misrepresentation. Here, when forecasting the outcomes of projects, forecasters and managers deliberately and strategically overestimate benefits and underestimate costs in order to increase the likelihood that it is their projects, and not the competition's, that gain approval and funding. Strategic misrepresentation can be traced to political and organizational pressures, for instance competition for scarce funds or jockeying for position. Optimism bias and strategic misrepresentation are both forms of deception, but where the latter is intentional, i.e., lying, the former is not: optimism bias is self-deception. Although the two types of explanation are different, the result is the same: inaccurate forecasts and inflated benefit-cost ratios. However, the cures for optimism bias are different from the cures for strategic misrepresentation, as we will see below.

Explanations of inaccuracy in terms of optimism bias have been developed by Kahneman and Tversky (1979a) and Lovallo and Kahneman (2003). Explanations in terms of strategic misrepresentation have been set forth by Wachs (1989, 1990) and Flyvbjerg, Holm, and Buhl (2002, 2005). As illustrated schematically in Figure 1, explanations in terms of optimism bias have their relative merit in situations where political and organizational pressures are absent or low, whereas such explanations hold less power in situations where political pressures are high. Conversely, explanations in terms of strategic misrepresentation have their relative merit where political and organizational pressures are high, while they become immaterial when such pressures are not present. Thus, rather than compete, the two types of explanation complement each other: one is strong where the other is weak, and both explanations are necessary to understand the phenomenon at hand, the pervasiveness of inaccuracy in forecasting, and how to curb it.

In what follows, a forecasting method called "reference class forecasting" is presented, which bypasses human bias, including optimism bias and strategic misrepresentation, by cutting directly to outcomes. In experimental research carried out by Daniel Kahneman and others, this method has been demonstrated to be more accurate than conventional forecasting methods (Kahneman and Tversky 1979a, 1979b; Kahneman 1994; Lovallo and Kahneman 2003). First, the theoretical and methodological foundations of reference class forecasting are explained; then the first instance of reference class forecasting in project management is presented.
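The core mechanics of the method can be sketched briefly. A minimal illustration, assuming a hypothetical reference class of past projects and invented overrun figures (not actual data from the studies cited above), is to place the new project in a class of comparable completed projects and read the required budget uplift from that class's empirical distribution of overruns, rather than adjusting the project's own "inside view" estimate:

```python
def required_uplift(reference_overruns, acceptable_risk):
    """Reference class forecasting sketch: take the 'outside view' by
    reading the uplift from the empirical distribution of overruns in
    a class of comparable past projects.

    reference_overruns: past cost overruns as fractions of budget.
    acceptable_risk: acceptable probability that the uplifted budget
    is still exceeded (e.g. 0.2 for a 20% residual risk).
    Returns the uplift at the corresponding empirical quantile
    (nearest-rank style).
    """
    ordered = sorted(reference_overruns)
    k = min(len(ordered) - 1, int((1 - acceptable_risk) * len(ordered)))
    return ordered[k]

# Hypothetical reference class of ten past projects (illustrative only):
past_overruns = [0.10, 0.15, 0.20, 0.25, 0.30, 0.40, 0.45, 0.60, 0.80, 1.10]
uplift = required_uplift(past_overruns, acceptable_risk=0.2)
budget = 100.0
print(f"uplift {uplift:.0%} -> budget {budget * (1 + uplift):.0f}")
# -> uplift 80% -> budget 180
```

Because the uplift comes from the outcomes of completed projects rather than from the forecaster's own estimate, it is insensitive to both self-deception and deliberate misrepresentation in that estimate, which is precisely the point of the method.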

