Want to reduce churn?  Don’t model churn.

At least, don’t model churn as your only step.

A very common approach to churn reduction is to build predictive models intended to score the likelihood that a customer will churn within a given time period.  The approach is so common that there are libraries of code available online for the purpose, and ChatGPT can provide Python code for the task.

But is it the right thing to do? 

We have been helping marketers reduce churn for over 20 years.  In our experience, predicting churn is a good first step, but you cannot stop there.

You actually want to predict not just churn risk, but the probability you can save the relationship, given your history of communication and offers used for this purpose. 

The highest churn risk scores may be pointing you to customers who are more than halfway out the door anyway.  Not only will your communications and offers fail to dissuade them (it's too late), they may actually make things worse.  There is a "wake up" effect we have seen, where at-risk customers realize, for example, that their contract is up for renewal and it's time to shop around, not jump at your first offer.

Think recovery, not just risk.

Next, think about the value of the relationship.  What is the predicted lifetime value in the coming years, after your planned intervention?  How long will they stay with you, what revenues would you predict over that period, and at what profit margins?  Knowing this number gives you the starting point for a key question: what are you prepared to spend to save this relationship? 

Extrapolate that question across all risk-save scores and you get a sense of the envelope of budget you need to reduce churn, as well as a sense of how much of a dent you can make in the churn rate.
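To make the risk-save-value logic concrete, here is a minimal sketch, assuming a simple expected-value rule: spend no more per customer than the profit a successful save would preserve. All names, probabilities, and figures here are illustrative assumptions, not numbers from our client work.

```python
# Hypothetical sketch of the budget arithmetic described above.

def max_retention_spend(p_churn, p_save, ltv, margin=0.30):
    """Ceiling on what an intervention is worth for one customer:
    expected profit preserved = P(churn) * P(save | intervention)
    * predicted post-intervention lifetime revenue * profit margin."""
    return p_churn * p_save * ltv * margin

customers = [
    {"id": 1, "p_churn": 0.80, "p_save": 0.10, "ltv": 2000},  # halfway out the door
    {"id": 2, "p_churn": 0.40, "p_save": 0.60, "ltv": 2000},  # recoverable
]

# Per-customer ceilings: the high-risk but hard-to-save customer is
# worth less spend than the moderate-risk, saveable one.
ceilings = {c["id"]: max_retention_spend(c["p_churn"], c["p_save"], c["ltv"])
            for c in customers}

# Summing across the base gives the overall budget envelope.
budget_envelope = sum(ceilings.values())
```

Note how the customer with the higher churn risk gets the smaller ceiling once save probability is factored in, which is exactly the "risk alone misleads" point above.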

Another way to think about churn reduction is as a problem in optimizing individual contacts: what offer, message, channel and timing.  This is where a product like our 1:1 Optimization system can help direct traffic and make both planning and execution easier.

Finally, you probably don’t have a churn problem (what??).  You probably have several.  Customers don’t all leave for the same reason, and a tactic designed to address one irritant may be completely irrelevant, even counterproductive, for another. 

Consider an example from wireless services.  What if a customer is irritated about billing problems, not resolved with your customer service team, repeatedly?  Consider another customer who is irritated about dropped calls.  What can you do to counter these two problems?  Would the same tactic work for both?  Would a tactic designed to work well for one be as effective for the other?

You should think about a modelling solution that provides risk scores, save scores, and what we call Action Codes.  Action Codes combine an evaluation of why a customer is at risk with an action you choose to suppress that risk.  All three scores can be funnelled to the customer-facing systems you will use to deliver messages, whether that is an all-purpose CRM system, a channel-focused system such as one for email, or something else.

Put it all together and you have the answers you need to plan effective churn programs and translate strategies into individual action plans.

Best of luck and if we can help, don’t hesitate to contact us.

Extending MMM to cut out-of-stocks: A retail success story

This post appeared originally on the Future Factory blog.

Sales figures often fail to accurately reflect the true demand observed by retailers. While sales data provides insights into consumer purchasing patterns, it does not capture the full picture of consumer behaviour. There are several reasons for this discrepancy. Firstly, sales figures only consider completed transactions and do not account for potential sales lost due to stockouts, pricing issues, or poor customer service. These missed opportunities significantly impact retailers’ understanding of the actual demand for their products.

Furthermore, external factors unrelated to consumer demand can influence sales figures. Seasonal fluctuations, promotional activities, and sudden market trends can artificially inflate or deflate sales numbers, leading to an inaccurate representation of true demand. For example, a retailer may experience a sales surge during a specific holiday period, but it may not indicate a sustained increase in demand throughout the year. Relying solely on sales figures can result in misinterpretation and a misalignment with customers’ genuine needs and preferences.

When retailers face uncertain demand, they can utilize observed sales to update their demand estimates. However, this learning process is constrained by the inventory they carry. When demand exceeds inventory, such as during an out-of-stock event, retailers generally cannot directly observe the actual demand. Accurate demand forecasting becomes crucial not only to reduce out-of-stocks (OOS) but also to minimize overstocking (OS). This use case focuses on a food retailer operating through multiple franchisees, aiming to provide customers with restaurant-quality foods, great value, and unique service. Their product lineup showcases top-quality, exclusive, flash-frozen items that are not available in other food retailers.

To estimate out-of-stocks (OOS), it is necessary to know both the inventory levels at the store and the demand experienced by the store. In this case, the supplied data includes end-of-day inventory on hand, as reported by the stores to head office. Some records were blank or contained negative values; eliminating these cut approximately 1% of the records from the analysis. Figure 1 presents the data fields used in the predictive models:

Figure 1. Data Fields used in the Predictive Model

The newly developed daily model has shown impressive accuracy in predicting sales, with an average percentage error of just 2% at the day and store level. This level of precision is a significant improvement compared to the forecasts generated by the retailer itself. To provide context and demonstrate the superiority of the new model, Figure 2 presents a comparative analysis of the accuracy between the two forecasting approaches. Both forecasts were generated six weeks prior to the commencement of the promotion period and flyer distribution.

By evaluating the performance of the new model against the retailer’s forecasts, it becomes evident that the new model offers a more reliable and robust prediction of sales. The new model was more accurate for every Price Look Up (PLU), an identifier equivalent to a SKU, in the test: on average, a mean absolute percentage error weighted by store sales (WMAPE) of 7.5% versus 32% for the retailer’s forecast. The low average percentage error signifies that the new model is consistently close to the actual sales figures, enabling the retailer to make more informed decisions regarding inventory management, resource allocation, and overall business strategy. This level of accuracy is particularly valuable when preparing for promotional activities and flyer drops, as it allows the retailer to anticipate customer demand and ensure optimal stock availability to meet the expected surge in sales.
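For reference, here is a minimal sketch of the WMAPE metric; the data is illustrative, and the actual calculation used for the comparison may differ in detail.

```python
# WMAPE: absolute forecast errors summed, then divided by total actual
# sales, so larger stores and days carry more weight than in plain MAPE.

def wmape(actual, forecast):
    abs_err = sum(abs(a - f) for a, f in zip(actual, forecast))
    return abs_err / sum(actual)

actual = [100, 50, 10]      # illustrative store-day sales
forecast = [95, 55, 12]
score = wmape(actual, forecast)   # (5 + 5 + 2) / 160 = 0.075, i.e. 7.5%
```

The sales weighting matters: an error of 2 units on a 10-unit day counts the same as 2 units on a 100-unit day in absolute terms, rather than the 20% vs 2% a plain MAPE would report.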

The utilization of the new model, with its significantly improved accuracy, empowers the retailer to proactively respond to market dynamics and enhance operational efficiency. By having access to reliable sales forecasts well in advance, the retailer can make timely adjustments to their supply chain, optimize pricing strategies, and allocate resources effectively to maximize profitability during the promotion period. The superior accuracy of the new model instills confidence in the retailer’s decision-making process, ultimately leading to better customer satisfaction, increased sales, and a stronger competitive position in the market.

Figure 2. Forecast Accuracy Comparison

The next step is to estimate the OOS losses. For this analysis, only the days on which the PLU is on promotion (discount greater than zero) are used. As a conservative approach, the predictive model was first used to group store-days into deciles based on their expected sales; as far as the model is concerned, sales for store-days within a decile should be virtually the same. The stores were then divided into two groups: a high inventory group, with inventory on hand at much higher levels than the predicted sales, and a second group in which inventory on hand is much closer to, or even less than, predicted sales. The difference in actual, observed sales between the two groups is then taken as the effect of having higher inventory levels on hand. In other words, where stores order much more inventory than they are likely to need, we expect sales to be higher than where they have little excess inventory on hand or run out of stock.
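The grouping-and-differencing procedure can be sketched as follows. The column names and the high-inventory threshold (1.5× predicted sales) are illustrative assumptions, not the retailer's actual schema or cutoff.

```python
import pandas as pd

# df: one row per promoted store-day for a given PLU, with columns
# predicted_sales, inventory_on_hand, actual_sales, price.

def oos_loss(df):
    df = df.copy()
    # 1. Group store-days into deciles of model-predicted sales, so rows
    #    within a decile are expected to sell about the same amount.
    df["decile"] = pd.qcut(df["predicted_sales"], 10,
                           labels=False, duplicates="drop")
    # 2. Split each decile: "high inventory" means stock on hand is
    #    comfortably above the prediction (threshold is an assumption).
    df["hi_inv"] = df["inventory_on_hand"] > 1.5 * df["predicted_sales"]
    # 3. Within each decile, attribute the gap in mean actual sales
    #    between the two groups to having enough stock on hand.
    by = df.groupby(["decile", "hi_inv"])["actual_sales"].mean().unstack()
    gap = (by[True] - by[False]).clip(lower=0)
    # 4. Scale the per-day gap by the number of low-inventory store-days
    #    and the day's price to estimate the dollar loss.
    low = df[~df["hi_inv"]]
    counts = low.groupby("decile").size()
    price = low.groupby("decile")["price"].mean()
    return float((gap * counts * price).sum())
```

Dividing the returned dollar loss by annual actual sales for the PLU yields the percentage figures of the kind shown in Table 2.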

In Table 1, we show a set of store-days for PLU 30, Italian Meatballs.  The model predicts sales for these stores will be very similar: 9.7 and 9.6 units per day.  However, the Hi Inventory group has, on average, 7.7 units on hand before the start of the current sales day (i.e., as of the previous evening), while the Low Inventory group has only 1.5 units.

Table 1

In theory, the only difference between the two store-day groups is inventory. In all other respects, they should generate the same sales, but they don’t. The Hi Inventory group generates 10.1 units per day, whereas the Low Inventory group generates only 7.8, a gap of 2.3 units per day. This difference is aggregated across all store-days and multiplied by the sales price of the day to arrive at an estimate of dollar loss. When we divide this dollar loss by actual annual sales, we arrive at the percentage figures for 5 specific PLUs presented in Table 2 below:

Table 2

Table 2 provides valuable insights that can help estimate the financial impact of out-of-stock (OOS) losses for retailers. It is important to note that reducing OOS not only increases sales but also brings higher percentage benefits to franchisees compared to the retailer. This is due to the distribution of margins, where franchisees retain the majority of sales profits. Therefore, mitigating OOS not only reduces losses but also lowers carrying costs for franchisees, which becomes increasingly significant with rising interest rates.

For instance, based on the figures provided by the retailer, their annual revenue is approximately $400 million. Using the average OOS value from Table 2, we can estimate the annual revenue lost due to stockouts to be around $19.2 million. Considering an overall 30% margin, with the corporate share at 3%, the estimated losses due to OOS amount to $5.8 million in margin for the entire system, with the retailer experiencing a loss of $576,000. However, the more accurate model demonstrates the potential to avoid most of these losses through improved inventory management.
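The figures in this paragraph can be re-derived directly; the 4.8% average OOS rate below is the value implied by $19.2M lost on $400M of revenue, since Table 2 itself is not reproduced here.

```python
# Re-deriving the loss figures stated above from the article's inputs.
annual_revenue = 400e6        # retailer's stated annual revenue
avg_oos_rate = 0.048          # average OOS loss rate implied by Table 2
system_margin = 0.30          # overall margin across the system
corporate_share = 0.03        # the retailer's (corporate) slice of margin

lost_revenue = annual_revenue * avg_oos_rate            # ~$19.2M per year
system_margin_loss = lost_revenue * system_margin       # ~$5.8M for the system
retailer_margin_loss = lost_revenue * corporate_share   # ~$576,000 for corporate
```

The spread between the 30% system margin and the 3% corporate share is what makes OOS reduction proportionally more valuable to the franchisees than to the retailer.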

In summary, accurately forecasting consumer demand is of paramount importance for retailers, particularly when dealing with the challenge of out-of-stock (OOS) sales. While sales figures provide valuable insights, they often fail to capture the true demand observed by retailers due to various factors, such as missed sales opportunities and external influences. By utilizing advanced analytics and predictive models, retailers can gain a deeper understanding of customer behavior, integrate marketing efforts with inventory management, and make informed decisions to reduce OOS occurrences.

The financial impact of OOS losses can be substantial, with revenue losses and margin erosion affecting both the retailer and franchisees. Estimating the magnitude of these losses helps highlight the urgency of addressing OOS through effective inventory management strategies. By leveraging accurate demand forecasting models, retailers can optimize their supply chain operations, improve inventory replenishment processes, and ultimately reduce OOS occurrences.

Furthermore, accurate demand forecasting not only minimizes revenue losses but also leads to improved customer satisfaction and reduced overstock. By aligning inventory levels with actual demand, retailers can ensure product availability, avoid missed sales opportunities, and deliver enhanced customer experiences. Moreover, as interest rates rise, the impact of reducing carrying costs and avoiding revenue losses due to OOS becomes increasingly significant for both the retailer and franchisees.

In conclusion, accurately forecasting demand and effectively managing inventory are vital for retailers to mitigate the financial and operational impact of out-of-stock sales. By embracing advanced analytics and developing robust predictive models, retailers can align their marketing efforts with inventory management, reduce revenue losses, optimize profitability, and enhance customer satisfaction. Emphasizing the importance of accurate demand forecasting enables retailers to proactively respond to market dynamics, capitalize on sales opportunities, and drive sustainable growth in today’s competitive retail landscape.

About the authors

Dr. Murat Kristal is an Associate Professor of Operations Management at the Schulich School of Business at York University in Toronto, Canada. He received his Ph.D. from the University of North Carolina at Chapel Hill in 2005. Dr. Kristal is the founding director of the Master of Business Analytics and Master of Management in Artificial Intelligence programs at Schulich. Currently he serves as the director of the MBA in Technology Leadership program at Schulich. In 2016, he was named one of the Top 40 Professors under 40. In 2020, he was the recipient of the Award of Excellence from the Minister of Colleges and Universities, Ontario, Canada. He has published in top Operations Management journals throughout his career, and he works with various companies in North America and Europe to help them achieve their analytics, AI, and digital transformation goals.

David Beaton is the co-founder of both Navigation ME, a company that helps marketing clients become more effective through the development of customized advanced analytics (predictive models and optimization algorithms), and Crater Lake & Company, a company dedicated to helping marketing clients make sense of it all by integrating advanced analytics, innovative research tools, and creative and media planning disciplines. Navigation ME has a strong track record of lifting business performance across B2B and B2C sectors and over a wide variety of brands and industries. The team has also developed successful innovations in the application of marketing analytics to solve problems in measurement, attribution and optimization. Navigation ME clients have used the work to improve results in customer acquisition, upsell and cross-sell, and retention. The company has developed a suite of tools that measures the effect of complex, dynamic omnichannel marketing campaigns and optimizes budget allocations for better results. This approach is both holistic and rigorously validated, making it a reliable choice for improving business performance. David completed his MBA at the Rotman School of Management at the University of Toronto.

There’s a better way to evaluate campaigns (including digital) than traditional MMM

This post appeared originally on the Crater Lake blog.

For many years, marketers have yearned for data to help them decide how much to spend, and where: data that would give them answers to questions such as:

How much should I spend on advertising?

How do I best allocate this budget to channels, content, markets and over time?

What is my real ROI and lift in sales volumes?

More than half a century ago, statistical modeling methods began to be applied to answer questions like these. Over the years, as marketing began to broaden its remit beyond media advertising, these methods began to incorporate other elements of the marketing mix. As a result, the term MMM evolved from Media Mix Modeling to Marketing Mix Modeling. Today, MMM is often, and fairly, criticized for leaving marketers grappling for answers that the techniques themselves are just not suited to deliver.

  • How does MMM help with the details of planning and buying online media?
  • Can MMM measure the long term effect of advertising?
  • Can MMM help us understand the effects of creative?
  • Can MMM incorporate the impact of customer experience?

In short, traditional MMM is too limited in its capabilities to be useful for today’s marketer. To try to tackle these new requirements, Crater Lake has chosen to use a wheel as a metaphor. Each spoke on the wheel represents a channel. In the hub, all channels come together.

If we want to build a model that measures the impact of all channels, we are operating in the hub. In the hub, we need to describe each channel using variables that ALL channels have in common.

But this inevitably means some of the data available to an analyst in a spoke is not used.

In the Crater Lake approach, our omnichannel model (hub) uses a unit of analysis that is a combination of geography (small areas) and time (usually day).

In a spoke, the unit of analysis may be different. Consider an email targeting and measurement model: it would use a unit of analysis that is an email address combined with day.

This means that while the “hub” model can certainly be used to measure cross-channel effects such as total lift and to allocate budget between channels, its application within a channel (in a spoke) is limited to the data available.

We compensate by building complementary models for each major spoke, and then solve for optimization across the entire system. When we want to integrate the hub learning with a spoke, we push hub data back out into the spoke model. This means a spoke model (representing a single channel) can incorporate data about the presence and effect of other channels.
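As a sketch of what pushing hub data into a spoke can look like mechanically, assume hub effects keyed by small area and day, and an email spoke keyed by address and day. All field names and values are hypothetical, not our production schema.

```python
import pandas as pd

# Hub output: estimated other-channel pressure per (geo, day).
hub_effects = pd.DataFrame({
    "geo": ["A", "A", "B"],
    "day": ["2024-01-01", "2024-01-02", "2024-01-01"],
    "tv_pressure": [0.8, 0.5, 0.2],   # hub-estimated TV effect in the area
})

# Spoke data: one row per email contact.
email_spoke = pd.DataFrame({
    "email": ["x@example.com", "y@example.com"],
    "geo": ["A", "B"],
    "day": ["2024-01-01", "2024-01-01"],
    "opened": [1, 0],
})

# Each email record now carries the other-channel context estimated by
# the hub, so the spoke model can account for it when it trains.
enriched = email_spoke.merge(hub_effects, on=["geo", "day"], how="left")
```

The join key (geography plus day) is what lets the two units of analysis meet, even though the spoke's native unit is the individual address.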

Our aim is threefold:

  1. to learn what we can from the totality of the effort;
  2. to push that learning into the data available on each individual channel, improving how we optimize effort within that channel;
  3. to optimize across all channels.

There are many individual hurdles we need to overcome, given the huge quantity of data available on some channels and the comparative paucity out there on others. We believe that there really isn’t any point complaining about the lack of data, or its inconsistency. Over time the situation will improve but for marketers the time is now, and the need is immediate.

We work on the basis that 90% right and on time beats 100% right but too late every time.

We work with what we’ve got, and what we can get.

Making Sense of It All: The Clear Performance Narrative

This post appeared originally on the Crater Lake website.

How many times has the boardroom or hallway conversation at your company contained a variation on the phrase “we are just not on the same page”?  It could be referring to a difference of opinion or insight between marketing teams, or between marketing and the C suite.  Or it could be a deep debate about strategy or overcoming challenges.  The problem is, as one exec puts it, that most marketers now enjoy (endure?) a glut of information.  What is missing is a narrative that pulls the pieces together and makes sense of it all.

That’s why we set out to create the Clear Performance Narrative, one of the key deliverables we provide to Crater Lake & Co clients.

What we quickly discovered is that, occasionally, the problem with creating a Clear Narrative is that some pieces of crucial data/insight are missing, but you only discover that as you knit the story together. The story needs more signal, and less noise.

To qualify as “Clear” the narrative must not only answer why we are getting the performance we are, but also, what we can do next to improve the outlook for the business.

We must sort out cause and effect, and quantify the impact and economic value of all marketing activities.

We must be able to validate our analytics, to provide evidence to all marketing disciplines and the C suite that the narrative we build around our insights is reliable.

We must be able to quantify incremental lift in a holistic way, measuring advertising alongside other drivers of the business, whether those drivers are controlled by marketing or not.

We must be able to balance short term and long term, which also means balancing risk and reward. We have the tools (or some of us have), but applying them is challenging for leadership without context. As one wag put it “facts tell, stories sell”.

A clear performance narrative is one readable by any marketing discipline; indeed, every discipline should see themselves in it, and find and endorse their place in the story.

We must be able to build a clear picture of our customers and prospects in ways that are not just analytical, but empathetic. The narrative is not just dry numbers but human touch and feel.

Think of the narrative as the story of your brand in motion. When done right, we can clearly see where we have been, where we need to go, and how to get there. The very best stories invite the listener and reader to dig in, get invested in the outcome and root for the heroes. Only in this narrative, the heroes are not mythological beings, but our team members, their ideas, and their work.

The story is one of progress towards a goal, one that all marketers endorse and believe in. There are obstacles to be overcome, opposition to be defeated or outmaneuvered, challenges emerging that were previously unseen. How will the story turn out? It very much depends on the characters that write it.

So why the emphasis on story, or narrative?  Aren’t we just talking about yet another 70-page PowerPoint deck?

No, because this is NOT another 70-page PowerPoint deck.  We have enough of those.

As Jonathan Haidt (social psychologist and Professor at NYU’s Stern School of Business) put it “The human mind is a story processor, not a logic processor.”

Not a surprise to advertisers, who of course do this every day in their creative work. So now we turn the techniques of story-telling to the task of synthesizing the work of advanced analytics, of research, and of marketing and business leadership.

Making sense of it all: it should be an exciting journey.

-The Partners of Crater Lake & Company