Over the years, forecasting demand has become paradoxically harder, not easier. Despite the rise of advanced analytics and AI, many supply chain professionals are grappling with an uncomfortable reality: a growing portion of products have become virtually unforecastable.
A closer look at product portfolios across industries tells the story. Long gone are the days of a simple, stable SKU base. Today, the average supply chain is characterized by sprawling assortments spread across ever more geographies, sales channels, and customer segments. The result? Demand has become increasingly fragmented.
The forecastability of products can be measured. Although far from perfect, a commonly used metric is the coefficient of variation (COV): the standard deviation of demand divided by its mean. A low COV indicates stable, predictable demand, while a high COV suggests volatility.
| Coefficient of Variation (COV) | Forecastability |
| --- | --- |
| 0.0 – 0.3 | High |
| 0.3 – 0.7 | Somewhat forecastable |
| > 0.7 | Low / Unforecastable |
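To make this concrete, here is a minimal sketch in Python (pandas) of how COV could be computed per SKU and mapped to the bands above. The column names `sku` and `demand`, and the example numbers, are illustrative assumptions, not a prescribed implementation:

```python
import pandas as pd

def cov_per_sku(df: pd.DataFrame) -> pd.DataFrame:
    """Compute the coefficient of variation (std / mean) of demand per SKU."""
    stats = df.groupby("sku")["demand"].agg(["mean", "std"])
    stats["cov"] = stats["std"] / stats["mean"]
    # Map COV onto the forecastability bands from the table above
    stats["forecastability"] = pd.cut(
        stats["cov"],
        bins=[0.0, 0.3, 0.7, float("inf")],
        labels=["High", "Somewhat forecastable", "Low / Unforecastable"],
    )
    return stats

# Illustrative weekly demand history: SKU A is stable, SKU B is erratic
history = pd.DataFrame({
    "sku": ["A"] * 6 + ["B"] * 6,
    "demand": [100, 105, 98, 102, 99, 101, 5, 0, 12, 0, 30, 2],
})
print(cov_per_sku(history))
```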
With rising assortment complexity, more SKUs now fall into the high-COV zones. It’s not necessarily that customer demand is less predictable, but rather that it’s spread too thin across too many nodes: SKUs, customers, locations, and channels.
Several structural shifts are fueling this trend: expanding assortments, new sales channels and geographies, and ever finer segmentation of customers and locations.
This leads to a scenario where many SKUs have low volume and erratic demand patterns. Trying to forecast them individually often amounts to guesswork.
Facing this challenge, companies naturally reach for more advanced models. Yet extra modeling power only goes so far when the data is inherently noisy.
For products with a high noise-to-signal ratio, even the most sophisticated model will struggle. You can't forecast chaos.
Rather than applying more complex algorithms, greater gains often lie in rethinking the forecasting process itself. And one of the most important design choices is the level of aggregation at which forecasts are created.
In many organizations, the default is still SKU-level forecasting, or worse: SKU-customer forecasting. While intuitive (this is the level at which decisions are made), it is often where the demand signal is weakest and noisiest. Forecasting here yields limited accuracy and creates unnecessary firefighting downstream.
To improve forecast quality, companies can explore forecasting at higher levels of aggregation, followed by proration down to the SKU level.
Aggregating demand across SKUs, customers, or regions boosts the signal by smoothing out volatility. It’s like looking at the tide rather than individual waves: you get a clearer view of where things are heading.
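As a rough illustration of this effect, the sketch below simulates a set of erratic, low-volume SKUs (with made-up Poisson noise, not real data) and compares the SKU-level COV with the COV of the aggregated category total:

```python
import numpy as np

rng = np.random.default_rng(42)

def cov(series: np.ndarray) -> float:
    """Coefficient of variation: standard deviation divided by the mean."""
    return series.std(ddof=1) / series.mean()

# Simulate 20 low-volume SKUs with erratic weekly demand
n_skus, n_weeks = 20, 52
sku_demand = rng.poisson(lam=1.0, size=(n_skus, n_weeks))

# COV per individual SKU (noisy) versus COV of the category total (smoother)
sku_covs = [cov(sku_demand[i]) for i in range(n_skus)]
category_total = sku_demand.sum(axis=0)

print(f"Average SKU-level COV: {np.mean(sku_covs):.2f}")  # typically close to 1.0: unforecastable band
print(f"Category-level COV:    {cov(category_total):.2f}")  # typically around 0.2: high-forecastability band
```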
However, there's a trade-off.
Hierarchical forecasting is often a balancing act: capturing detailed individual behavior at a granular level versus managing noise through aggregation. Forecasts at the highest levels of aggregation tend to be unreliable guides for individual items, because little information on individual behavior survives the aggregation. Conversely, forecasts at the lowest levels are often inaccurate, because the disaggregated time series fail to reflect the commonality across individual demand patterns.
Once you forecast at a higher level, you must distribute that forecast downward. The simplest approach? Use historical ratios.
Example:
If SKU A has historically accounted for 60% of its product family's volume, it simply receives 60% of the family-level forecast.
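In code, top-down proration with static historical ratios could look like the minimal sketch below; the SKU names and volumes are made up for illustration:

```python
import pandas as pd

# Historical SKU volumes within the product family (illustrative numbers)
history = pd.Series({"SKU-A": 300, "SKU-B": 150, "SKU-C": 50}, name="units_last_year")
historical_ratios = history / history.sum()   # A: 0.6, B: 0.3, C: 0.1

# Forecast produced at family level for the next period
family_forecast = 1_000

# Prorate the family forecast down to SKU level using the static historical mix
sku_forecast = (historical_ratios * family_forecast).round()
print(sku_forecast)
# SKU-A    600.0
# SKU-B    300.0
# SKU-C    100.0
```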
But beware: when the mix is dynamic due to seasonality, promotions, or product introductions, simple ratios can mislead.
This is where more advanced hierarchical forecasting techniques come into play.
1. Dynamic Ratio Modeling
This method updates SKU-level allocation ratios regularly based on short-term trends or event calendars. It’s particularly useful for categories with fast-moving or seasonal assortments, such as fashion, food, or promotional items.
Example:
Instead of using last year’s Easter mix for chocolate sales, dynamic ratio modeling adjusts based on recent sell-in patterns across SKUs.
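One possible sketch of this approach refreshes the allocation ratios from a recent rolling window of sell-in history instead of last year's mix. The SKU names, window length, and data below are illustrative assumptions:

```python
import pandas as pd

def dynamic_ratios(history: pd.DataFrame, window_weeks: int = 8) -> pd.Series:
    """Allocation ratios based on a recent rolling window rather than last year's mix."""
    recent = history.sort_values("week").groupby("sku").tail(window_weeks)
    shares = recent.groupby("sku")["demand"].sum()
    return shares / shares.sum()

# Illustrative weekly sell-in history with columns "sku", "week", "demand"
history = pd.DataFrame({
    "sku":    ["Egg-S", "Egg-L"] * 10,
    "week":   sorted(list(range(1, 11)) * 2),
    "demand": [40, 60, 42, 58, 50, 50, 55, 45, 70, 30,
               75, 25, 80, 20, 82, 18, 85, 15, 88, 12],
})

ratios = dynamic_ratios(history, window_weeks=4)
category_forecast = 10_000
print((ratios * category_forecast).round())  # recent mix shifts the allocation towards Egg-S
```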
2. Bayesian Shrinkage
A statistical approach that “borrows strength” from higher levels in the hierarchy. Instead of treating each SKU as an island, it allows forecasts to be influenced by group-level behavior. The forecast is then partly determined by the data at group level and partly by the data at SKU level.
Example:
A newly launched SKU with only two months of history is forecasted based on both its early signals and the behavior of similar SKUs in its category.
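The sketch below shows the underlying idea in its simplest form: each SKU's own average demand is shrunk towards the category average in proportion to how much history it has. The shrinkage strength `k`, the column names, and the data are illustrative assumptions; real implementations are typically full Bayesian hierarchical models rather than this simple blend:

```python
import pandas as pd

def shrunken_forecast(history: pd.DataFrame, k: float = 6.0) -> pd.Series:
    """
    Blend each SKU's own average demand with its category average.
    SKUs with little history lean on the category; SKUs with long
    histories rely mostly on their own data.
    """
    sku_stats = history.groupby("sku")["demand"].agg(n="count", sku_mean="mean")
    category_mean = history["demand"].mean()
    weight = sku_stats["n"] / (sku_stats["n"] + k)   # more history -> more weight on the SKU itself
    return weight * sku_stats["sku_mean"] + (1 - weight) * category_mean

# A new SKU with two months of history next to established SKUs in the same category
history = pd.DataFrame({
    "sku":    ["New"] * 2 + ["Est-1"] * 24 + ["Est-2"] * 24,
    "demand": [10, 14] + [100] * 24 + [90] * 24,
})
print(shrunken_forecast(history).round(1))
# "New" is pulled strongly towards the category average instead of relying on its own two noisy points
```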
The temptation to forecast everything at the SKU level is strong, but often misguided. By recognizing that data quality varies across the hierarchy, supply chain teams can design smarter forecasting processes that amplify signal and reduce noise. In short, hierarchical forecasting, applied in the right situations, can improve forecast accuracy by forecasting less, not more, and results in a forecasting process with less complexity and less workload.
Hierarchical forecasting is just one of many areas that are crucial in the design of the forecasting process. Do you feel the design of your forecasting process needs to be updated? Reach out to me or my colleagues at Involvation.