The Absolute Best Way to Measure Forecast Accuracy


What makes a good forecast?  Of course, a good forecast is an accurate forecast.  Today, I’m going to talk about the absolute best metric to use to measure forecast accuracy.


Let’s start with a sample forecast. The following table shows the forecast and actuals for customer traffic at a small-box, specialty retail store. (You could also imagine it representing foot traffic in a department within a larger store.) Is this a good or a bad forecast?

|          | Sun | Mon | Tue | Wed | Thu | Fri | Sat | Total |
|----------|-----|-----|-----|-----|-----|-----|-----|-------|
| Forecast | 81  | 54  | 61  | 68  | 92  | 105 | 121 | 582   |
| Actual   | 78  | 62  | 64  | 72  | 84  | 124 | 98  | 582   |


Certainly, the weekly forecast is good. After all, the forecast says that 582 customers would visit the store, and by the end of the week, 582 customers did visit the store.

The problem is the daily forecasts. There are some big swings, particularly toward the end of the week, that cause labor to be misaligned with demand. Since we’re trying to align labor to demand, understanding these swings – these forecast errors – is important.

It’s easy to look at this forecast and spot the problems. However, it’s hard to do this for more than a few stores for more than a few weeks.

To overcome that challenge, you’ll want to use a metric that summarizes forecast accuracy. This not only allows you to look at many data points; it also allows you to compare forecasts. That comparison is useful when you want to determine whether one forecasting method is better than another, whether the forecast produced by the workforce management system is better than the one provided by finance, or whether forecasts are getting more or less accurate over time.

I frequently see retailers use a simple calculation to measure forecast accuracy. It’s formally referred to as “Mean Percentage Error,” or MPE, though most people know it simply by its formula. It is calculated as follows:

MPE = ((Actual – Forecast) / Actual) x 100


Applying this calculation to Sunday in our table above, we quickly find that the error for that day is –3.8 percent.

MPE = ((78 – 81) / 78) x 100 = –3.8

This means that the actual results were 3.8 percent less than what was forecast.
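If you track forecasts in a spreadsheet export or a script, the calculation is a one-liner. Here’s a minimal sketch in Python (the `mpe` helper name is my own, not from any particular library):

```python
def mpe(actual, forecast):
    """Mean Percentage Error for a single period, in percent.

    Negative values mean actuals came in below the forecast.
    """
    return (actual - forecast) / actual * 100

# Sunday from the table above: forecast 81, actual 78
print(round(mpe(78, 81), 1))  # → -3.8
```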


The benefits of MPE are that it is easy to calculate and the results are easily understood. Statisticians and math-heads like to throw around complex ways of calculating forecast accuracy that are intimidating by name and produce results that are not intuitively understood (Root Mean Square Error, anyone?).

The problem is that when you start to summarize MPE across multiple forecasts, the aggregate value doesn’t represent the error rates of the individual forecasts. Consider the following table:

|          | Sun  | Mon  | Tue | Wed | Thu  | Fri  | Sat   | Total/Avg |
|----------|------|------|-----|-----|------|------|-------|-----------|
| Forecast | 81   | 54   | 61  | 68  | 92   | 105  | 121   | 582       |
| Actual   | 78   | 62   | 64  | 72  | 84   | 124  | 98    | 582       |
| +/-      | -3   | 8    | 3   | 4   | -8   | 19   | -23   | 0         |
| MPE      | -3.8 | 12.9 | 4.7 | 5.6 | -9.5 | 15.3 | -23.5 | 0.2       |


How accurate were all of the forecasts for the week? By averaging each day’s MPE, I get 0.2 percent. Hmmm…

Does 0.2 percent accurately represent last week’s error rate? No, absolutely not. The most accurate forecast was on Sunday at –3.8 percent, while the worst was on Saturday at –23.5 percent!
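You can see the cancellation directly by computing the daily MPEs and averaging them. A quick sketch using the week’s numbers from the table above (the average lands within a fraction of a percent of zero despite the large daily misses):

```python
forecast = [81, 54, 61, 68, 92, 105, 121]  # Sun..Sat
actual   = [78, 62, 64, 72, 84, 124, 98]

# Signed daily errors as a percentage of actuals (MPE)
daily_mpe = [(a - f) / a * 100 for f, a in zip(forecast, actual)]

# Positive and negative days offset each other, so the
# average sits near zero even though single days miss by 20+ points
avg_mpe = sum(daily_mpe) / len(daily_mpe)
print(round(avg_mpe, 1))  # → 0.2 (essentially zero)
```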


The problem is that the negative and positive values cancel each other out when averaged.  Fortunately, there is an easy way to fix the problem by using “Mean Absolute Percentage Error”, or MAPE, which is calculated as:


MAPE = (Absolute Value(Actual – Forecast) / Actual) x 100


MAPE is remarkably similar to MPE, with one big exception: you take the absolute value of the difference between the actual and the forecast. Let’s see how the calculation works for Sunday:

MAPE = (Absolute Value(78 – 81) / 78) x 100 = 3.8

As you can see, the absolute value removes the negative sign. This allows us to summarize multiple values and get a better sense of the true error rate of our forecasts:

|          | Sun  | Mon  | Tue | Wed | Thu  | Fri  | Sat   | Total/Avg |
|----------|------|------|-----|-----|------|------|-------|-----------|
| Forecast | 81   | 54   | 61  | 68  | 92   | 105  | 121   | 582       |
| Actual   | 78   | 62   | 64  | 72  | 84   | 124  | 98    | 582       |
| +/-      | -3   | 8    | 3   | 4   | -8   | 19   | -23   | 0         |
| MPE      | -3.8 | 12.9 | 4.7 | 5.6 | -9.5 | 15.3 | -23.5 | 0.2       |
| MAPE     | 3.8  | 12.9 | 4.7 | 5.6 | 9.5  | 15.3 | 23.5  | 10.8      |


As you can see, the aggregate value of MAPE is 10.8 percent. This is a much more representative measure of our overall forecast quality than the 0.2 percent that we got from MPE.
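In code, MAPE changes only one thing relative to the MPE sketch earlier: wrap the numerator in `abs()`. Using the same week of data:

```python
forecast = [81, 54, 61, 68, 92, 105, 121]  # Sun..Sat
actual   = [78, 62, 64, 72, 84, 124, 98]

# Unsigned daily errors as a percentage of actuals (MAPE)
daily_mape = [abs(a - f) / a * 100 for f, a in zip(forecast, actual)]

# With no cancellation, the average reflects the true size of the misses
avg_mape = sum(daily_mape) / len(daily_mape)
print(round(avg_mape, 1))  # → 10.8
```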


MAPE delivers the same benefits as MPE (easy to calculate, easy to understand), plus you get a better representation of the true forecast error.

Some argue that by eliminating the negative values from the daily errors, we lose sight of whether we’re over- or under-forecasting. The question is: does it really matter?

When it comes to labor forecasting, being above actuals means that you’re using too much labor and wasting payroll. Being below actuals means that you’re missing opportunity and adversely affecting the customer experience. Either way, it’s an error you want to measure and reduce.

In my next post in this series, I’ll give you three rules for measuring forecast accuracy. Then, we’ll start talking about how to improve forecast accuracy.

This post is part of the Axsium Retail Forecasting Playbook, a series of articles designed to give retailers insight and techniques into forecasting as it relates to the weekly labor scheduling process.  For the introduction to the series and other posts in the series, please click here.