Google Analytics is not going to tell you if your marketing is working. And neither will Meta, Amazon, LinkedIn, or anything else you try. At least not on its own.
Which is why people are suddenly getting into the old-school idea of “marketing lift.”
Put plainly, conversion attribution in 2025 is an absolute mess. But even without privacy laws and the death of third-party cookies, digital marketing would still be a pain to track.
People will start searching on their laptop and make a purchase on their phone. They’ll see a digital billboard and then remember your brand’s name in the shower four days later and Google it, and then it’ll show up as organic traffic. Or they’ll ask some AI to make recommendations, click the links, and then it’ll show in GA4 as “direct/none.”
None of that helps when your CFO wants to know if marketing is driving revenue.
That’s where marketing lift comes in. This is how you prove the incremental impact of specific campaigns. And it’s the best way I know to prove ROI, earn trust, and make good budget recommendations.
But let’s talk about the basics first.

Do you think Big Milk would have spent so much on ads if it didn’t have a way to measure it, even way back then?
What does marketing lift mean?
Marketing lift is the extra. It’s all the sales, signups, or revenue that wouldn’t have happened unless your specific marketing campaign existed.
If you turned off all your campaigns tomorrow, what do you lose? The answer is your lift.
This is where the related concept of incrementality comes in. The extra conversions or sales are your lift, but incrementality is how you measure the actual causal relationship. And a lot of the time, you’ll do this by comparing the results from a test group (which is exposed to a marketing campaign) against a control group (which is not).
So lift is the difference in outcome, and incrementality shows the difference in outcome that is specifically, provably due to certain marketing activities.
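With made-up numbers, the arithmetic behind lift looks something like this (a minimal sketch; the group sizes and conversion counts are all hypothetical):

```python
# Minimal lift calculation with hypothetical numbers.
# Test group saw the campaign; the control (holdout) group did not.
test_users, test_conversions = 10_000, 450
control_users, control_conversions = 10_000, 300

test_rate = test_conversions / test_users           # 4.5%
control_rate = control_conversions / control_users  # 3.0%

# Absolute lift: extra conversion rate attributable to the campaign.
absolute_lift = test_rate - control_rate
# Relative lift: how much the campaign improved on the baseline.
relative_lift = (test_rate - control_rate) / control_rate
# Incremental conversions: what the campaign actually caused,
# scaling the control group to the test group's size.
incremental = test_conversions - control_conversions * (test_users / control_users)

print(f"Absolute lift: {absolute_lift:.1%}, relative lift: {relative_lift:.0%}, "
      f"incremental conversions: {incremental:.0f}")
```

In this toy example, the campaign drove 150 conversions that wouldn’t have happened otherwise. That 150 is the lift; the controlled comparison that produced it is the incrementality test.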
But you don’t have to totally “get” the distinction I’m making here to understand the larger point. The basic idea is that you want to know what will happen if you put another dollar into marketing. And that’s what lift and incrementality help you understand.
Why is marketing lift so important in 2025?
Marketers are absolutely drowning in numbers, and a lot of them do not have meaning.
Case in point, if someone clicked a retargeting ad 30 seconds before buying, the ad system you’re using will give the ad all the credit. Which is nuts, because it’s a retargeting ad. Meaning something else already primed them to click.
At the same time, platforms like Meta, Google, and Amazon have tightened their grip on user data. And that makes it well nigh impossible to track the full customer journey outside a handful of walled gardens.
There is a lot of information out there in the digital marketing world that was written in the mid-to-late 2010s based on a certain set of assumptions. Assumptions that data is easy to come by.
Those assumptions aren’t true anymore. But a lot of the advice still gets repeated.
So the keenest marketers I know are evolving. They’re running experiments. They’re dusting off their old marketing textbooks and focusing on Marketing Mix Modeling (MMM). And they’re running different kinds of incrementality tests, even if they’re just simple A/B tests.
These concepts are way older than you’d think. They went out of vogue for a while because new tech made them look irrelevant. But make no mistake, the tech giants are using these concepts to their advantage right now. For example, Adobe rebuilt their measurement approach around incrementality and saw a 75% lift in their contribution to subscription growth.
So it’s worth knowing how to measure lift the old-fashioned way.
How do you measure marketing lift?
I can think of four major ways that marketers can get better at tracking lift, so I’ll break down each of them below.
1. Marketing Mix Modeling (MMM)
Marketing Mix Modeling is having a moment in the sun right now, and it’s well-deserved. Dating back to the mid-20th century, MMM uses historical, aggregate-level data to measure how much each channel contributes to sales, leads, or other outcomes over time.
Or to put it more straightforwardly: instead of tracking individual consumers, it figures out what they’re doing with statistics. That makes it especially resilient in a world where privacy is taken much more seriously.
Another beautiful thing about MMM is that you can track TV, social media, paid search, and even external factors like seasonality and weather. A lot of these models will use weekly or monthly data, but AI-driven tools like Adobe Mix Modeler might speed up the pace a bit.

There are a lot of questions you can ask that MMM is better equipped to answer than just about any other method:
Did that TikTok campaign really boost search traffic?
What happens if we double our ad spend on Hulu?
What mix of marketing channels gives us the best ROI?
The big drawback is that MMM is very long-term in nature, and it’s not going to help you make daily decisions. This is the kind of modeling that helps on year-long time frames, and you’ll need a good understanding of statistics, as well as some solid software, to do it well.
2. Incrementality testing (experiments)
If you don’t have a big enough marketing team to go headlong into MMM, you can still run incrementality tests. And really, these are the gold standard.
Here’s the idea: you run controlled tests. One group sees a campaign and the other doesn’t. And you measure the difference in outcomes between the two.
There are a bunch of ways you can go about this:
Geo lift tests: Run ads in one city, hold out another
Time-series on/off testing: Rotate campaigns weekly
Holdout audiences: Exclude a random 10% of your email list
The incrementality testing platform Lifesight uses the following example to illustrate how this could work. In their scenario, “a direct-to-consumer (DTC) beauty brand pauses paid social ads in 20% of its U.S. DMAs (Designated Market Areas) for four weeks, while maintaining normal ad spend in the remaining regions.”
In this scenario, the brand sees a 12% drop in sales in the holdout regions. That right there is proof positive the ad campaigns were having a real impact, and it cuts through any inflated ROAS figures their primary tools might be showing them.
But don’t go thinking you have to have a huge team to do something like this. You could just as easily send to only 90% of your email list with a bit of segmentation magic. Or you could turn off retargeting in a few zip codes and see if it makes a difference.
The point is that you want to have some sort of system in place to see what the true difference is when you take one action over another.
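A geo holdout like Lifesight’s scenario boils down to a before/after comparison between the paused regions and the active ones. Here’s a minimal sketch; all the weekly sales figures are invented for illustration:

```python
# Hypothetical weekly sales for a geo holdout test.
# "Holdout" DMAs had paid social paused; "active" DMAs kept normal spend.
holdout_before = [100_000, 102_000, 98_000, 101_000]  # weekly sales pre-pause
holdout_during = [88_000, 90_000, 86_000, 89_000]     # weekly sales during pause
active_before = [400_000, 405_000, 398_000, 402_000]
active_during = [401_000, 399_000, 403_000, 400_000]

def mean(xs):
    return sum(xs) / len(xs)

# Percent change in each group over the test window.
holdout_change = mean(holdout_during) / mean(holdout_before) - 1
active_change = mean(active_during) / mean(active_before) - 1

# Difference-in-differences: the drop in the holdout regions
# beyond any market-wide trend visible in the active regions.
lift_estimate = active_change - holdout_change

print(f"Holdout change: {holdout_change:+.1%}, active change: {active_change:+.1%}, "
      f"estimated lift from ads: {lift_estimate:.1%}")
```

With these made-up numbers, the holdout regions drop about 12% while the active regions stay flat, so roughly 12% of sales were incremental to the ads. Comparing against the active regions, rather than just before/after, controls for seasonality hitting the whole market at once.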
3. Multi-touch attribution (MTA)
I’ve spilled a lot of ink talking about the shortcomings of conversion attribution, but make no mistake: attribution still has a role. It’s helpful for spotting patterns, if not proving impact.
You might notice that users who watch YouTube then click a branded search ad tend to convert. That’s useful information and you can act on that.
But you need to pair it with an experiment to confirm that YouTube is actually driving that behavior. And to that end, MTA can be helpful in places like B2B, where there are fewer touchpoints and you’re more likely to be using a CRM.
The main value here is that you can use MTA to help you generate hypotheses worth testing!
4. Unified measurement
This is a fancy term for using multiple data sources and measurement methods to figure out what’s really going on. And it’s a smart thing to do.
MMM is really good for long-term decision making on the quarterly and yearly scale. Experiments give you a sense of causality on a weekly and monthly scale. And attribution tells you what’s happening in days, hours, and even minutes.
None of these models is “truth” per se, but then again, an ultrawide lens is not more “truthful” than a telephoto lens on a camera. Point is, you need different models to see different things.

“All models are wrong, but some are useful” is such an old saying in the data analysis world that you can get it on a throw pillow for 20 bucks.
How do you use marketing lift in practice in a small or mid-size team?
Up to this point, I’ve made a pretty good case for why marketing lift matters. But let’s be honest—small and mid-size businesses are not playing with huge budgets and time frames much of the time. Sometimes, you have to focus on triage over optimization. And even when you are optimizing, you have to get results fast enough to make smart calls when you need them.
So here are six tips you can use to measure marketing lift even if resources are limited.
1. Start simple.
You don’t need an enterprise-grade data science team to measure lift. Start with what you control and what you understand.
One of the easiest places to begin is email. Randomly hold back 5–10% of your list from a promotional send.
Then watch and learn: did the people who received the email convert more than the holdout group? If the answer’s yes, then you just tracked lift!
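If you want that holdback to be repeatable, a hash-based split keeps each subscriber in the same group across sends. A sketch with made-up addresses (the function name and 10% threshold are just illustrative choices):

```python
import hashlib

def in_holdout(email: str, holdout_pct: float = 0.10) -> bool:
    """Return True if this address should be held out of the send.

    Hash-based assignment is deterministic, so the same subscriber
    always lands in the same group every time you send.
    """
    digest = hashlib.sha256(email.lower().encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < holdout_pct

# Made-up subscriber list for illustration.
subscribers = [f"user{i}@example.com" for i in range(10_000)]
held_out = [s for s in subscribers if in_holdout(s)]
print(f"Held out {len(held_out)} of {len(subscribers)} subscribers")
```

The held-out group gets no email; after the campaign, you compare its conversion rate against everyone else’s. Because the split is hash-based rather than truly random, expect the holdout to be approximately, not exactly, 10%.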
Same goes for paid channels. Pause a small geographic area or zip code for a few weeks. Compare performance against the rest of your market. Even a modest difference in conversion or revenue gives you a baseline to work from.
The key is to hold something out and compare. Imperfect, small-scale experiments repeated consistently are far more valuable than one beautiful model to rule them all.
2. Use the platforms you have.
If you’re running ads on Meta, Google, or LinkedIn, you already have access to built-in lift testing tools. Meta’s Conversion Lift, Google’s Geo Experiments, and LinkedIn’s brand lift studies all let you run experiments directly inside the platform.
But don’t take their results at face value. These tools are grading their own homework. Use them to run tests, then sanity-check the results. If Meta tells you your ROAS is 5x but your revenue didn’t budge, then you should likely disregard the result.
3. Track first-party data.
Even with cookies going out of style, you can still keep your own data clean. Keep up a CRM, or even a simple spreadsheet to track sales and leads. Keep your financial data clean and pay attention to your web analytics too. It’s all good raw information.
If you’re not already segmenting traffic by source and tagging campaigns with UTM codes, start now. Yes, sometimes parameters get dropped, but it’s better than nothing!
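Consistent tagging is easier if you generate the URLs programmatically instead of typing parameters by hand. A minimal sketch using Python’s standard library (the landing page and campaign names are hypothetical):

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Append the three core UTM parameters to a landing-page URL."""
    parts = urlparse(url)
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    # Preserve any query string the URL already has.
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunparse(parts._replace(query=query))

print(add_utm("https://example.com/spring-sale", "newsletter", "email", "spring_2025"))
# → https://example.com/spring-sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_2025
```

A helper like this also nudges you toward a consistent naming scheme, which matters more than the tooling: "email" and "Email" show up as two different channels in your analytics.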
The more visibility you have into where customers are coming from and what they’re doing, the better decisions you can make.
4. Run regular tests.
Measuring lift is more about building good habits than running reports.
Try a new marketing experiment every couple of weeks. Always write down what you expect to happen and what you’re trying. Then write down what actually happened and why you think it went that way.
You want to create a feedback loop.
5. Use MMM lite.
If you don’t have the budget for a full-blown Marketing Mix Model, make a miniature one with spreadsheets.
Map out your weekly spend per channel and your weekly sales. Look for correlations. Did sales rise when spend went up? Did they lag after certain campaigns?
This is, of course, not a perfect method. But perfect isn’t necessary. Even simplified models can help you spot patterns and run smarter tests. If you want to kick it up a notch, you can always try tools like Rockerbox and Measured.
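The "look for correlations" step doesn’t need special tooling. Here’s a sketch with made-up weekly figures, using a plain Pearson correlation:

```python
# "MMM lite": check how weekly channel spend tracks weekly sales.
# All figures below are invented for illustration.
weekly_spend = [1200, 1500, 900, 2000, 1800, 1100, 1700, 1400]     # paid social ($)
weekly_sales = [8300, 9100, 7200, 11000, 10400, 7900, 9800, 8800]  # revenue ($)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand (no libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(weekly_spend, weekly_sales)
print(f"Spend/sales correlation: r = {r:.2f}")
# Correlation is not causation: treat a high r as a hypothesis
# worth testing with a holdout, not proof that spend drove sales.
```

A real MMM controls for seasonality, other channels, and lag effects, which this obviously doesn’t. But even this crude check can tell you which channel is worth running a proper holdout test on first.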

6. Prioritize your biggest spenders.
When in doubt, start measuring lift for your most expensive or time-consuming channels. If social media takes up all your time, test that first. If you’re running huge ad budgets, test those first.
Run enough tests, and one of two things will happen: you’ll find out what you’re doing works, or you’ll find out it doesn’t. If it works, you can scale it up. If it doesn’t, you can cut budget and try something else. You win either way!
What if you can’t tell why a marketing campaign is doing well?
Lift tells you what happened. But it won’t always tell you why.
As much as you need quantitative data, you can’t forget about the qualitative. Asking for feedback and sending out surveys can help you understand the story behind the numbers. And this can help you understand if your story really connected with people or if your discount got price-sensitive buyers off the fence.
Understanding people’s motivations for buying from you—or not—can help you better calibrate your next batch of experiments. Numbers alone just can’t do that for you.
And if you have absolutely no idea where to start? Check out your online reviews and just start reading them. The formal name for this is sentiment analysis and, if you haven’t done it in a while, it’s probably going to be one of the best things you do all day.
Final Thoughts
You can’t track your customers’ every move. It seemed like you could a few years ago, but that world was an illusion built on weak privacy laws and false assumptions.
You never really know what flips the “buy this” switch. A TV spot might lodge in someone’s brain and nudge them to Google you days later. Or maybe they saw a blog post, remembered your business card, and typed in your number directly.
No attribution model can follow that scent.
But marketing lift can. It helps you measure the seemingly unmeasurable.
Need help marketing your business?
Or just need someone to bounce ideas off of?
Book 30 minutes with me and we can chat!
(Yes, it’s free.)