Demand planners transitioning to new information technology (IT) solutions must avoid software implementations that yield little or no improvement in forecast accuracy, inventory control, or cost savings. While many experts advocate the application of quantitative analysis and statistical models, the shift to this methodology, although logical and largely automated, can be difficult to execute.
One transition point centers on modeling software and whether the time savings it offers justify what may be less accurate output. This dilemma is an effect of IT tools being one-size-fits-all, and of their resulting inability to address specific stockkeeping unit (SKU) conditions and parameters. The plan for statistical models must focus more on using software proficiently and less on the pure accuracy of information.
One company’s progress
I was working with a client on portfolio realignment, which required choosing appropriate SKU-specific statistical models and working with IT solutions that grouped similar SKUs by model. At the start of this project, my client had implemented demand-planning software that used a three-month moving average at the SKU level. The implementation team viewed this as an adequate solution because, by volume, this type of forecasting produced the best model accuracy.
Forecast accuracy had improved but was still only in the 50 percent range, and it was not enhancing inventory position or customer service. Everyone involved realized more modifications were required; however, changing the models consistently inhibited the project, as the IT tool could only steer corrections based on the accuracy of the moving-average model.
The first step we took was to determine which SKUs would benefit from a purely statistical approach. We used ABC analysis, and the team also took into consideration which SKUs had the most volume, as well as seasonal and other trends. Product life cycle, customer behavior, and marketing focus also were factored in.
This led to the creation of clear parameters based on behavior and volume. A variable-model SKU was defined as one reviewed monthly, regardless of other parameters, with the intent of applying the best available model to it. The client could review the remaining SKUs less frequently using other methods, with the aggregate accuracy of each group determined monthly.
The next step was to segment the portfolio into mature products and those with growth potential, and into single-customer and multi-customer products. Clearly, a SKU with a single customer would benefit from that customer's information, while multi-customer products needed software-based data. And while mature products gain from straight moving averages, growth and decline SKUs require more sophisticated analysis. Seasonal models were determined from past history and were largely intuitive. Some effort was invested in growth B and C items as well.
The models used ranged from three-to-six-month moving averages, trend, seasonal-factored, trend-seasonal, and several exponential smoothing models. SKUs chosen for trend or seasonal models were tested against the software models to ensure they met necessary criteria.
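Two of the candidate model families named above, a moving average and simple exponential smoothing, can be sketched as below. The function names, the default window, and the smoothing constant are my own hypothetical choices, not values from the article.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    return sum(history[-window:]) / window

def exponential_smoothing_forecast(history, alpha=0.2):
    """Simple exponential smoothing: each new observation updates a running
    level; alpha (illustrative here) weights recent demand against history."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level
```

Trend and trend-seasonal variants extend the same idea with additional smoothed components, which is why they suit growth and decline SKUs better than a flat average.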
The third step was to add a signal-tracking system to assess error in the models. This, combined with the initial portfolio reorganization, decreased the review workload while yielding better results. It also was a good indicator of when a model needed to be changed or if the initial plan was in error. This was all done at the SKU level.
The signal-tracking system was standard bell curve evaluation, determined by tracking the overall cumulative error and dividing by the mean absolute deviation to compare where the error fit in the normal distribution of error. Those SKUs with error outside normal distribution were reviewed regardless of where they fit in the plan. By using the two review methods simultaneously, the quality of the statistical information was vastly improved and the need for constant review minimized.
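The calculation the author describes, cumulative error divided by mean absolute deviation, is the standard tracking signal and can be sketched like this. The review threshold of roughly plus or minus 4 in the comment is a common textbook rule of thumb, not necessarily the client's limit.

```python
def tracking_signal(actuals, forecasts):
    """Cumulative forecast error divided by mean absolute deviation (MAD).
    Values beyond roughly +/-4 MADs typically flag a biased model that
    should be reviewed regardless of its place in the plan."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    mad = sum(abs(e) for e in errors) / len(errors)
    if mad == 0:
        return 0.0  # perfect forecasts: no bias to report
    return sum(errors) / mad
```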
Fortunately, the client’s software contained a model simulation tool, which enabled the parameters of all available models to be viewed and evaluated, making model selection easier. The same evaluation can be done in Excel; it is more time consuming, but worth the effort.
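Whether in a simulation tool or a spreadsheet, the evaluation amounts to replaying each candidate model over recent history and keeping the one with the lowest error. A minimal sketch, in which the `pick_best_model` helper, the candidate functions, and the three-period holdout are all assumptions of mine:

```python
def pick_best_model(history, models, holdout=3):
    """Score each candidate on one-step-ahead forecasts over the last
    `holdout` periods and return the name with the lowest mean absolute error.
    models: dict of name -> function(history_prefix) -> forecast."""
    scores = {}
    for name, forecast_fn in models.items():
        errors = []
        for i in range(len(history) - holdout, len(history)):
            errors.append(abs(history[i] - forecast_fn(history[:i])))
        scores[name] = sum(errors) / holdout
    return min(scores, key=scores.get)
```

On a trending SKU, for example, a naive last-value forecast will typically beat a flat three-month average in this comparison, mirroring the article's point that growth SKUs need more than straight moving averages.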
My client was now able to concentrate on the SKUs with errors, rather than wait until poor customer service revealed a problem. Of course, as this was implemented, the original model assignments changed. As model accuracy increased, customers realized that collaborating on forecasts led to better customer service and fewer problems. Planners and operations personnel also found it easier to execute plans. As confidence in the information grew, work became easier, and employees provided valuable feedback.
The integration of statistics into forecasting also prepared the client for further software implementations, as well as for collaborative efforts with customers and suppliers. Collaborative planning, forecasting, and replenishment (CPFR) was also incorporated into the process.
Meanwhile, the availability of validated statistical information drove improvements in information exchange with financial stakeholders and promoted the use of one set of numbers. With validated volume numbers, procedures could be streamlined. Eventually, the same type of conversation arose between finance professionals and demand planners, which led to better data.
This methodology does not conflict with any other demand processes already in place; rather, it complements them and improves data from all sources. Our experiences using it were so successful that I have applied it to my work in forecasting ever since, with similarly excellent results.
Greg Gorbos, CPIM, is a demand planner at BASF; however, at the time of this implementation, he was an independent consultant. Prior to that, Gorbos worked for ExxonMobil. He may be contacted at firstname.lastname@example.org.