It’s incremental lift: the sales your company generates every year due to all forms of marketing communications.
Or put another way, the sales your company would lose if you stopped marcom.
Based on Navigation’s experience in measuring this, for some companies, that number could be over 50% of sales.
Think about the importance to your company of marketing in that context.
But to be credible, your source must be a model that meets or exceeds best practices. It must:
Be developed around measures the C-suite finds meaningful, such as sales, and NOT around vanity metrics that have no clear relationship to business success. The C-suite must see clearly how the model supports their goals.
Include data on all, or at least the most important, factors that shape business outcomes: certainly paid media, but owned and earned as well, plus factors that may shape outcomes even if not under a CMO’s control, such as weather or economic changes.
Be accurate, with a proven ability to explain over 90% of the variation in outcome measures over the most recent three-year period.
Be actionable, so you can demonstrate what to do to improve outcomes, or what would happen if a bad decision were taken.
Be tested in market and shown to produce recommendations that lead to demonstrably better results.
Imagine your relationship with the C-suite with this knowledge in place:
Imagine a C-suite that respects the practice of marketing instead of disparaging it or treating it as an expense taken reluctantly.
Imagine a C-suite that is looking to help expand the impact of marketing, instead of reflexively cutting marketing budgets in hard times.
Imagine the effect such knowledge has on your own team, and the pride they take in knowing how important their work is to the health of their company.
Imagine the confidence that comes with making a clear contribution and being appreciated for it.
Imagine how much easier budget approvals would be with the full team aligned behind this metric.
Imagine being a valued C-suite team member and a driver of growth.
Imagine measuring incremental sales properly, socializing that finding with the C-suite, and then updating it regularly.
The marketing director approached me after I had presented our work in multi-channel ad effect measurement at a large telco. I had not met him before, and he worked for a different business unit from the one that had hired us.
He was almost embarrassed to ask the questions that were clearly bothering him. We sat down over coffee and he told me his story.
He was responsible for direct and digital marketing to the telco’s existing customers, selling a service to add to the one(s) they already had. His channels were email and addressed direct mail. Since his budget was limited, he had asked the in-house modelling group to help him target his campaign: who should get both an email and a direct mail piece? What he showed me was a report of new-service activation rates by model decile, produced after the last campaign had finished.
Before we consider what he and I were looking at, let’s start with what a good targeting model should deliver in a case like this. Such a model should use data about customers: descriptive attributes and their history of purchases, cancellations, billing and customer service interactions. It should use the communication history for each individual customer: what channel was used to contact them, when, for which product, with what offer and messaging, and any responses to past campaigns. In the case shown below, this particular model also uses neighbourhood demographics.
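As a rough illustration, those feature groups might be organized like this in code. This is a minimal Python sketch; every field name is hypothetical, not drawn from the telco’s actual schema.

```python
# A hypothetical feature schema for a cross-sell targeting model.
# All field names are illustrative, not the telco's actual data model.
from dataclasses import dataclass
from datetime import date

@dataclass
class CustomerFeatures:
    # Customer description and transaction history
    tenure_years: float
    products_held: list[str]
    cancellations_12m: int
    billing_disputes_12m: int
    service_calls_12m: int
    # Communication history
    last_contact_channel: str          # "email" or "direct_mail"
    last_contact_date: date | None
    last_offer_code: str | None
    responded_to_last_campaign: bool
    # Neighbourhood demographics (used by this particular model)
    nbhd_median_income: float
    nbhd_pct_renters: float
```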
Consider the following post-campaign report, from a telco client of ours, also for a customer cross-sell campaign. Predicted scores are the probability of activation (the customer orders and installs the product) for the campaign period (here, the 4 weeks after first contact). These scores were generated about 12 weeks before the campaign was launched, enough time to prepare the mailers. Actual scores are recorded for the 4-week campaign window, against both non-communicated and non-targeted control groups (our take on control group design for CRM will be covered in a future blog post).
There are three features of this report we should pay attention to (each decile figure is the average score of the customers in it); a sketch of how such a report is assembled follows this list:
Each decile shows predicted scores that are lower than the decile before; think of this as a “ski slope” shape.
There is excellent differentiation between predicted scores for the best decile vs the average (here, 3.1 times) or vs the lowest decile (here, 15.3 times). The more differentiation, the more opportunity we have to create business value.
There is a close correspondence between predicted and actual by decile. The model does not over- or under-predict for any given decile. This level of accuracy allows us to invest with confidence.
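For readers who want the mechanics, here is a minimal Python sketch of how such a decile report is assembled, using simulated scores rather than client data: rank customers by predicted activation probability, cut the ranked list into ten equal groups, and compare average predicted with average actual per group.

```python
# Build a post-campaign decile report from (predicted, actual) pairs.
# Scores here are simulated; a real report would use campaign logs.
import random

random.seed(42)

# Hypothetical scored base: (predicted_probability, activated_flag)
customers = []
for _ in range(100_000):
    p = random.betavariate(2, 60)              # skewed scores, mean ~3%
    customers.append((p, random.random() < p))

customers.sort(key=lambda c: c[0], reverse=True)   # best scores first
decile_size = len(customers) // 10

for d in range(10):
    chunk = customers[d * decile_size:(d + 1) * decile_size]
    pred = sum(p for p, _ in chunk) / len(chunk)
    actual = sum(a for _, a in chunk) / len(chunk)
    print(f"Decile {d + 1:2d}: predicted {pred:6.2%}   actual {actual:6.2%}")
```

Note that in this simulation actual tracks predicted by construction; a real model has to earn that correspondence in market.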
To the predictions we can now add the cost of contact (here, DM + EM = $3.00) and value of activation (here, $75.00), which gives this picture, for a base of 5 million customers (outcomes assume we contact all customers in each decile):
For planning purposes, we would select only the top 7 deciles in order to maximize profit.
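The arithmetic behind that cut-off is simple: a decile is worth contacting when its expected value per contact (activation rate times $75.00) exceeds the $3.00 cost of contact, i.e. when the activation rate clears the break-even rate of $3.00 / $75.00 = 4%. A sketch, with illustrative activation rates rather than the client’s figures:

```python
# Decile-level contact economics. Profit per contact is
# activation_rate * VALUE_PER_ACTIVATION - COST_PER_CONTACT, so the
# break-even activation rate is 3.00 / 75.00 = 4%.
# Activation rates below are illustrative, not the client figures.
COST_PER_CONTACT = 3.00        # DM + EM, dollars
VALUE_PER_ACTIVATION = 75.00   # dollars per installed activation
BASE = 5_000_000               # customers, split evenly into 10 deciles

rates = [0.120, 0.085, 0.066, 0.056, 0.049, 0.044, 0.041,
         0.033, 0.021, 0.008]  # best decile first

total = 0.0
for d, rate in enumerate(rates, start=1):
    n = BASE // 10
    profit = n * (rate * VALUE_PER_ACTIVATION - COST_PER_CONTACT)
    decision = "contact" if profit > 0 else "skip"
    print(f"Decile {d:2d}: rate {rate:5.1%}  profit ${profit:>12,.0f}  {decision}")
    if profit > 0:
        total += profit
print(f"Total profit from contacted deciles: ${total:,.0f}")
```

With these illustrative rates, exactly the top 7 deciles clear break-even, which is why the plan stops there.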
(For a future blog post: how we can use this as a starting point to create even more value)
Now, let’s consider the picture our director of marketing was looking at. I mocked up the report shown here from memory.
As you can see, the best decile for activation rate is NOT the first…it is the fourth. The first, which should be the best, is the 6th worst. The 7th has good performance…who would have thought? In short, instead of seeing the “ski slope” shape we see with a good model, this report looks like a hockey player’s teeth after the playoffs.
You cannot make good targeting decisions using this model. Why was it performing this way?
I went next to see a senior member of the modelling team, and we quickly agreed on the cause: the model had no communication variables at all. Instead of answering the question, “What would be the predicted activation rate if I contacted a given customer through a given channel?”, the model the marketing director was given produced scores that had no relationship to the outcomes he wanted.
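To make the design point concrete, here is a hedged sketch on synthetic data (all feature names and effect sizes are invented): when the contact indicator is a model feature, you can score the same customers under “contact” and “no contact” scenarios, which is precisely the comparison a campaign planner needs.

```python
# Why communication variables matter: a targeting model with a contact
# indicator can be scored under contact / no-contact scenarios.
# Synthetic data; feature names and effect sizes are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

tenure = rng.uniform(0, 10, n)         # years as a customer
contacted = rng.integers(0, 2, n)      # 1 = received DM + email last time
# Hidden truth for the simulation: contact lifts activation odds
logit = -4.0 + 0.15 * tenure + 1.2 * contacted
activated = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([tenure, contacted]),
                                 activated)

# Score everyone under both scenarios
p_no = model.predict_proba(np.column_stack([tenure, np.zeros(n)]))[:, 1]
p_yes = model.predict_proba(np.column_stack([tenure, np.ones(n)]))[:, 1]

print(f"Mean predicted activation if contacted:     {p_yes.mean():.2%}")
print(f"Mean predicted activation if not contacted: {p_no.mean():.2%}")
print(f"Mean incremental lift from contact:         {(p_yes - p_no).mean():.2%}")
```

The telco’s model, lacking any such variable, could only rank customers on general propensity, not on likely response to the campaign.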
Why was it designed this way? Because the modelling group felt that their scores reflected a prioritization of customers for whom a sale of this type would be most likely to improve retention and generate long-term cash flows. While this is useful as a way to study customer behaviour, it is useless when it comes to designing a marcom campaign, which was their brief. The modelling group felt that the director of marketing had to “learn” how to sell to the top deciles. How? That question went unanswered.
I often say that in the analytics business there is a substantial knowledge gap between the people building models and the decision-makers using them. This gap should be closed with clear, transparent communication. In this case, that didn’t happen: the modelling group pushed a solution that fit their preferences, even knowing it was not what was asked for.
In cases like these, such models are more likely to get their marcom users fired than promoted. And that is a failure for all involved.
Robert Jones, like many of us, trusted his car’s sat-nav system to help guide him to his destination. But on one unfortunate day, it almost cost him his life.
On that day, Mr Jones followed sat-nav instructions so closely he dismissed the evidence of his own eyes. The narrow, steep path he was driving on was clearly unsuitable for a car, but the system told him it was a road. He almost plunged his car off a 100 ft cliff as a result[1].
That same year, the US National Highway Traffic Safety Administration estimated that GPS systems caused over 200,000 accidents every year in the United States[2].
Can we really trust the data and models that we use to navigate our lives? How do we know when we can, and cannot?
Marketers live in a complex world, and seek to create change in that world. In doing so, they grapple with problems of targeting, segmentation, evaluation, optimization and diagnostics. Increasingly, they turn to sophisticated analytical technology to disentangle data in search of reliable cause-and-effect quantification.
Are those models leading marketers to good outcomes reliably and consistently, or leading them off the proverbial cliff?
There is an imbalance of knowledge between those building advanced analytics systems and the marketing decision-makers who use them. The former understand the math and the data, but may not fully grasp the risks. The latter assume the risks of using models without, in many cases, having the quantitative skills to assess those risks themselves. Some don’t even know they are taking on risk, or how much. How can marketers and advertisers protect themselves from the downsides some forms of advanced analytics expose them to?
One should not need a Ph.D. in math or stats to use an MMM or CRM model as a client. But one should be able to do a basic check of the work, enough to spot weaknesses and avoidable risks.
There are three checks I recommend to my clients:
Data: what data is used to build the model?
Accuracy: how accurate is the model?
Actionability: how easily and effectively can the model’s prescriptive analytics be implemented?
Let’s do a high-level review of each, in the context of a Marketing Mix Model use case.
Data: what data is used to build the model?
Before building an MMM, you should satisfy yourself that the right data goes into its construction, not just the easy data, that is, the data most readily available. Ask yourself what factors cause changes in the outcomes you are interested in. For an MMM, these would certainly include paid media, but owned and earned media as well. And you should include data on factors that may be out of a marketer’s control but that strongly affect outcomes, such as weather or local economic conditions. Take a holistic approach to understanding what drives outcomes. Not all factors tested in the model may prove significant, but taken together they are more likely to produce a model you can trust than if major drivers are ignored.
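As a sketch of what that holistic driver set looks like in practice, here is an illustrative MMM design matrix in Python. The driver names are hypothetical, and a production MMM would also apply adstock (carryover) and diminishing-returns transforms before fitting.

```python
# An illustrative MMM driver set: paid, owned and earned media plus
# external factors outside the CMO's control. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
weeks = 156                               # three years of weekly data

drivers = {
    "tv_grps":           rng.gamma(2.0, 50, weeks),   # paid media
    "paid_search_spend": rng.gamma(2.0, 30, weeks),   # paid media
    "email_sends":       rng.gamma(2.0, 20, weeks),   # owned media
    "pr_mentions":       rng.gamma(1.5, 10, weeks),   # earned media
    "temperature_c":     rng.normal(12, 9, weeks),    # weather
    "consumer_conf":     rng.normal(100, 5, weeks),   # economy
}

X = np.column_stack([np.ones(weeks)] + list(drivers.values()))
true_beta = np.array([500.0, 1.2, 2.0, 0.8, 3.0, 4.0, 1.5])
sales = X @ true_beta + rng.normal(0, 40, weeks)      # simulated outcome

# Fit by ordinary least squares and recover each driver's contribution
beta_hat, *_ = np.linalg.lstsq(X, sales, rcond=None)
for name, b in zip(["intercept"] + list(drivers), beta_hat):
    print(f"{name:>18}: {b:8.2f}")
```

Drop any major driver from X and its effect is pushed into whatever correlated columns remain, which is exactly the mis-attribution risk described next.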
Today, many advertisers rely on digital attribution systems that break this rule by focusing on digital media data only. Those results should be treated with great caution.
Accuracy: how accurate is the model?
If a model can reproduce the time-series outcome data on your business with a high degree of accuracy, we believe it is fairly quantifying the cause-and-effect relationships that shape those outcomes. But how accurate does it need to be?
Consider this case. This brand does $1.2 billion in sales per year; you see three years here in time series. As you can see, the predicted sales numbers from our model almost match the actuals. We explain 95% of the variation in sales over this time: not just the broad trend of the business but the monthly nuance.
In our work, we advocate accuracy rates at or over 90% over the calibration period, validated by an out-of-period test of fit.
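Here is a minimal sketch of that discipline on synthetic data: compute the percent of variation explained (R²) over the calibration window, then re-check it on a later holdout window the fit never saw.

```python
# In-period vs out-of-period accuracy check for a fitted model.
# Synthetic weekly sales with one media driver plus seasonality.
import numpy as np

rng = np.random.default_rng(7)
weeks = 156
media = rng.gamma(2.0, 50, weeks)                        # illustrative driver
season = 100 * np.sin(2 * np.pi * np.arange(weeks) / 52) # annual cycle
sales = 800 + 1.5 * media + season + rng.normal(0, 40, weeks)

X = np.column_stack([np.ones(weeks), media, season])

def r_squared(y, y_hat):
    """Share of variation in y explained by y_hat."""
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Calibrate on the first 130 weeks; hold out the last 26
beta, *_ = np.linalg.lstsq(X[:130], sales[:130], rcond=None)
print(f"Calibration R^2:   {r_squared(sales[:130], X[:130] @ beta):.1%}")
print(f"Out-of-period R^2: {r_squared(sales[130:], X[130:] @ beta):.1%}")
```

A model that holds its accuracy out of period has earned some trust; one that collapses there was fitting noise, however good the in-period fit looked.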
The higher the accuracy, the lower the risk of mis-attribution and of making bad forward-planning decisions. Cause and effect are properly quantified, and effects are disentangled. Marketing and advertising decisions can be made with greater confidence, and good outcomes follow.
Actionability: how easily and effectively can the model’s prescriptive analytics be implemented?
Consider how easy, or how difficult, it is to make changes to your marcom decisions using the model. The more directly the model reflects how media is actually planned and bought, the easier it will be to implement. When a model is overly theoretical, buying to its recommendations becomes impractical and opportunity is missed.
Implementing a model that is inaccurate, built on incomplete or unreliable data, and poorly aligned with actual buying practice can leave you worse off than having no model at all (future blog post!).
Carefully test your model on all these questions, and you will find you are more likely to reach your objectives, instead of finding yourself teetering on the proverbial cliff.