Scientists have argued in favor of simplicity since at least the fourth century B.C. Perhaps the most familiar expression of the importance of simplicity for science is the 14th-century formulation known as Occam’s razor, the principle that the simplest adequate explanation should be preferred.

The preference for simplicity in science has endured. Indeed “since [Occam’s] time, it has been accepted as a methodological rule in many sciences to try to develop simple models,” researchers write in Simplicity, Inference and Modelling: Keeping it Sophisticatedly Simple.

We were unable to find an operational definition of simplicity in forecasting, so we developed one. My co-authors and I argue in a recent article in the Journal of Business Research that simplicity requires that (1) the method, (2) the representation of cumulative knowledge, (3) the relationships in the models, and (4) the relationships among models, forecasts, and decisions all be sufficiently uncomplicated for decision-makers to understand them easily.

Does violating Occam’s razor impose a cost?

We tested the value of Occam’s razor in forecasting by reviewing published studies that compared forecast accuracy. Our search found 32 papers containing a total of 97 comparisons of forecasts from simple and complex methods. We attempted to contact all living authors of the papers we cited to check that we had interpreted their findings correctly, and we revised our paper where appropriate. We also asked those authors and other forecasting experts for evidence that would challenge Occam’s razor.

We found that none of the papers provided a balance of evidence that complex forecasting procedures improved accuracy. On the contrary, complexity increased forecast errors by 27 percent on average in the 25 papers that included quantitative comparisons.

Here is an example of one of the studies, recently published in a special issue on “Simple versus Complex Forecasting” in the Journal of Business Research. In this study, the complex method was multiple regression analysis, probably the most widely used method for forecasting with causal models. Regression is complex in that it uses statistical analysis to estimate the optimal weights for the predictor variables in a given sample and then applies those weights when forecasting new data. A simple benchmark is to assign equal weights to all of the variables in a model, an approach with a long history in forecasting and decision-making that can be traced back at least to Benjamin Franklin.

The study’s author, Graefe, compared the relative performance of regression and equal weights by analyzing out-of-sample forecasts from nine established models for forecasting U.S. presidential elections. He found that the simple equal-weights versions of the models reduced error by 5 percent on average. Furthermore, the equal-weights approach makes it possible to combine all 27 variables used across the nine models into a single model. The resulting index model reduced the error of the typical model by 48 percent.
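To make the contrast concrete, here is a minimal sketch in Python of the two approaches: multiple regression that estimates weights from a small sample, versus an equal-weights index built from standardized predictors. It uses made-up synthetic data rather than Graefe’s election models, and the sample sizes, noise level, and variable names are purely illustrative assumptions, not anything from the study.

```python
import numpy as np

# Illustrative sketch (synthetic data, not Graefe's data or code): compare
# multiple regression against an equal-weights index when the estimation
# sample is small, the setting where fitted weights tend to overfit.
rng = np.random.default_rng(0)
n_train, n_test, k = 15, 200, 5
true_w = np.ones(k)                      # assume the predictors matter about equally
X = rng.normal(size=(n_train + n_test, k))
y = X @ true_w + rng.normal(scale=3.0, size=n_train + n_test)
X_tr, X_te = X[:n_train], X[n_train:]
y_tr, y_te = y[:n_train], y[n_train:]

# Complex method: multiple regression estimates "optimal" weights from the sample.
A_tr = np.column_stack([np.ones(n_train), X_tr])
beta, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
pred_reg = np.column_stack([np.ones(n_test), X_te]) @ beta

# Simple method: standardize each predictor, give every one the same weight
# (average them into an index), then rescale the index with one bivariate fit.
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
index_tr = ((X_tr - mu) / sd).mean(axis=1)
index_te = ((X_te - mu) / sd).mean(axis=1)
slope, intercept = np.polyfit(index_tr, y_tr, 1)
pred_eq = intercept + slope * index_te

def mae(pred):
    return float(np.mean(np.abs(pred - y_te)))

print(f"multiple regression MAE: {mae(pred_reg):.2f}")
print(f"equal-weights index MAE: {mae(pred_eq):.2f}")
```

With a small, noisy sample, the regression’s fitted weights pick up chance patterns in the estimation data, while the equal-weights index cannot overfit in that way, which is one reason simple equal weights can forecast more accurately out of sample.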

While I expected that evidence-based forecasting, like other scientific endeavors, would benefit from adhering to the simplicity requirement of Occam’s razor, I was amazed by the consistency of the findings and by the size of the effect.

Why is complexity popular in forecasting?

First, it seems logical to many people that complex problems demand complex solutions. As a consequence, people are reluctant to accept evidence that a simple method will provide the best forecast for their situation. For example, few people would believe a forecast of no change, even when it is the most accurate forecast that can be made.

Second, complexity may be popular because of incentives. Academics are rewarded for publishing in highly ranked journals, which favor complexity. Forecasters are rewarded for performing the “rain dance” of complexity, even though clients do not understand it. Finally, clients themselves can benefit, because complex methods allow skillful analysts to produce support for whatever proposal the client desires.

What are the implications for forecasters and clients?

Guidelines for simple forecasting are available at simple-forecasting.com in the form of the Forecasting Simplicity Questionnaire checklist. Clients who have no knowledge of forecasting methods can use the checklist to assess whether they should reject a forecast because the procedure that produced it was too complex. The checklist is quick and easy to complete.

Fortunately, all 22 of the validated forecasting methods summarized in the Selection Tree for Forecasting Methods are simple. Clients can therefore specify that only evidence-based forecasting methods be used.

To date, violations of Occam’s razor have all been harmful. Do not assume that “things are different now.” Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures.