Once upon a time, marketing attribution felt easy. You’d fire up Google Analytics, glance at the conversion report, and see that your latest ad campaign drove 437 sales.

You didn’t overthink it. The numbers made sense.

That world is gone.

Today, attribution is murky by default.

Third-party cookies are on their way out. iOS devices are blindfolding your tracking pixels. Facebook’s numbers may disagree with your CRM, which in turn disagrees with your internal database.

The result of all these changes isn’t just messy data—it’s structurally unreliable data. The sort you can’t use for decision-making without a lot of “um, actually” caveats.

In 2025, marketers are having to deal with the difficult reality that no single system tells the whole truth. Attribution windows just don’t line up. Data pipelines that have worked for years are broken. Platforms guess when they can’t track, and they’re not even very transparent about when they’re guessing. Even the fixes come with trade-offs.

But you can’t just throw your hands up and resort to guessing. Like it or not, you need metrics to make choices about budget and scope.

This is why the best marketers aren’t chasing perfection. They’re building redundant systems, triangulating what’s working, and accepting uncertainty as part of the game.

In this post, I’m going to unpack what broke attribution, why the fixes fall short, and how to build a more resilient, honest approach to measuring what works.

Why is marketing attribution so bad in 2025?

I’m not going to sugarcoat it: the ridiculously good attribution tracking we had in the 2010s was built on unsustainable privacy practices.

Privacy policies have changed dramatically—and as consumers, we should celebrate. Less of our personal data is being sold.

But marketers like me, who came of age in the 2010s, are having to relearn how to market the way people did in the…before times.

Suffice it to say that between 2018 and 2025, marketing attribution has taken beatings from every direction. First there were GDPR and CCPA, both of which put tight limits on data collection. Then Apple’s App Tracking Transparency (enforced starting with iOS 14.5) kneecapped in-app tracking.

Browsers like Safari and Firefox blocked third-party cookies years ago, and Chrome keeps promising to join them in phasing them out entirely…whenever Google gets around to it.

At the same time, the platforms marketers depend on for data—Meta, Google, Amazon, TikTok—have walled off their ecosystems in some way or another. So as you can imagine, each one measures success differently, uses its own attribution windows, and keeps its data on lockdown. One sale can get credited to three different channels, and none of them are lying. They’re just counting differently.

Add to that the rise of ad blockers, short cookie lifespans, and fragmented user journeys across devices. For example: if someone sees your ad on mobile, clicks an email on their tablet, and buys on desktop, you may never connect the dots.

In short, no single tool sees the full picture anymore. At this point, none of them can.

That means even the best data is partial. And as you can imagine, the more complex the buyer journey, the more holes show up in your reports.

And that’s before anything breaks.

Why is broken attribution a bad thing in marketing?

If you don’t know where your leads are coming from, you can’t strategize.

For that matter, you can’t even trust your metrics. In some companies, marketers are tempted to start optimizing for whatever is easy to measure, which may or may not be close to revenue.

That might mean overspending on branded search (which often gets credit for sales that were going to happen anyway) while underspending on awareness campaigns that planted the seed in the first place. Or you could end up cutting budget from Meta because reported ROAS looks weak—even if those ads are priming people to buy through some other channel.

It doesn’t take too much of this for budgets to get totally out of line with what works.

It also makes proving ROI harder. Stakeholders don’t want engagement—they want clear paths to revenue. But when Facebook’s numbers don’t match your Google Analytics, and your CRM tells a third story, your marketing starts to feel like Rashomon. So it’s difficult to defend your budget or make confident bets.

Subtitled screenshot from Rashomon (1950). Watch it if you haven’t!

Worse still, these gaps erode trust. When your reports are riddled with caveats, people stop listening. A weak attribution model can cause a boardroom to dismiss months of good marketing work.

In the absence of reliable attribution, teams often default to what feels right—or chase last-touch metrics that ignore most of the customer journey.

That’s not just inaccurate. It’s dangerous.

Why the fixes for broken attribution don’t get the job done

There’s no shortage of attribution “fixes” in 2025. But every one of them comes with trade-offs.

Take server-side tracking. Moving tracking logic to the backend (via Facebook’s Conversions API or Google’s server GTM) helps bypass ad blockers and regain lost data. It can recover 30–40% more conversions than client-side tools capture on their own. But it still requires user identifiers like email, phone number, login—and those aren’t always available. Server-side tracking is also technical and time-consuming to implement.
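
To make that concrete, here’s a rough sketch of what a server-side purchase event sent to Meta’s Conversions API can look like. The pixel ID, access token, email, and order details below are all placeholders, and you should follow Meta’s current documentation rather than copy this verbatim.

```python
# A minimal sketch of a server-side purchase event sent to Meta's Conversions API.
# PIXEL_ID and ACCESS_TOKEN are placeholders; treat this as illustrative, not production-ready.
import hashlib
import time

import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def sha256_normalized(value: str) -> str:
    """Meta expects identifiers hashed with SHA-256 after trimming and lowercasing."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

payload = {
    "data": [
        {
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "event_id": "order-10293",  # lets Meta deduplicate against the browser pixel
            "user_data": {"em": [sha256_normalized("customer@example.com")]},
            "custom_data": {"currency": "USD", "value": 49.99},
        }
    ]
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
    timeout=10,
)
print(resp.status_code, resp.json())
```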

Google’s Enhanced Conversions help too, allowing marketers to send hashed first-party data to improve match rates across devices. But again: no login or identifier, no match.
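
The hashing step itself is trivial; the hard part is having an identifier to hash. Here’s a small illustration of the normalize-then-hash pattern (trim and lowercase an email, keep phones in E.164 form, then SHA-256), which follows Google’s general guidance. Confirm the exact rules in the current docs before relying on them.

```python
# A small illustration of the hashing behind Enhanced Conversions: normalize the
# identifier first, then SHA-256 it. The normalization shown here (trim + lowercase
# email, phone already in E.164 form) follows Google's published guidance, but
# check the current documentation for the exact rules.
import hashlib

def hash_email(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def hash_phone(phone_e164: str) -> str:
    # expects E.164 format, e.g. "+14155550123"
    return hashlib.sha256(phone_e164.strip().encode("utf-8")).hexdigest()

print(hash_email("  Jane.Doe@Example.com "))
print(hash_phone("+14155550123"))
```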

Then there are modeled conversions. When tracking fails, platforms estimate what “probably” happened using machine learning. These show up in tools like Google Analytics 4 and Meta Ads. And while they patch gaps, they also make attribution murkier. They’re based on probabilities, not actual user journeys. I understand why platforms do this, but it feels off when clients are trusting us with real budgets.

Marketing mix modeling (MMM) is back in vogue for this reason. It uses historical data to estimate how much each channel contributed to results without tracking individual users. Now this is great for big-picture strategy, but it’s slow, expensive, and doesn’t help you optimize this week’s campaign. As a slow-burn, evidence-driven marketer, I like MMM—though it does feel distinctly ‘un-digital’ because of how long it takes to generate insights.
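
If you’ve never seen MMM up close, here’s a deliberately tiny, toy version of the core idea: regress weekly revenue on weekly spend per channel and read the coefficients as rough contribution estimates. The numbers are invented, and a real MMM adds adstock, saturation curves, seasonality, and more, so treat this as intuition rather than a model you’d ship.

```python
# A toy illustration of the idea behind marketing mix modeling (MMM): regress
# weekly revenue on weekly spend per channel. All numbers are made up.
import numpy as np

# Weekly spend by channel (columns: search, social, email) over 8 weeks
spend = np.array([
    [5000, 3000, 500],
    [5200, 2800, 550],
    [4800, 3500, 400],
    [6000, 2000, 600],
    [5500, 2500, 450],
    [4500, 4000, 500],
    [5100, 3200, 520],
    [5800, 2600, 480],
])
revenue = np.array([21000, 20500, 20800, 21500, 21200, 20900, 21100, 21600])

# Ordinary least squares with an intercept for "baseline" (non-marketing) revenue
X = np.column_stack([np.ones(len(spend)), spend])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
baseline, per_channel = coefs[0], coefs[1:]

for name, coef in zip(["search", "social", "email"], per_channel):
    print(f"{name}: ~${coef:.2f} of revenue per $1 of spend (rough estimate)")
print(f"baseline weekly revenue: ~${baseline:.0f}")
```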

Multi-touch attribution (MTA) aims to split credit across touchpoints. But that assumes you have all the touchpoints. And because of a lot of the structural factors I rattled off above, you often don’t. For B2B or offline-to-online paths, MTA misses half the journey anyway.
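
For what it’s worth, the mechanics of splitting credit are the easy part. Here’s a minimal sketch of a position-based (40/20/40) split over a hypothetical journey; the channel names and weights are illustrative, and the output is only ever as good as the touchpoints you actually captured.

```python
# A minimal sketch of multi-touch attribution (MTA) using a position-based rule:
# 40% of credit to the first touch, 40% to the last, 20% split across the middle.
from collections import defaultdict

def position_based_credit(touchpoints, value=1.0):
    credit = defaultdict(float)
    n = len(touchpoints)
    if n == 0:
        return credit
    if n == 1:
        credit[touchpoints[0]] += value
        return credit
    if n == 2:
        credit[touchpoints[0]] += 0.5 * value
        credit[touchpoints[1]] += 0.5 * value
        return credit
    credit[touchpoints[0]] += 0.4 * value
    credit[touchpoints[-1]] += 0.4 * value
    for channel in touchpoints[1:-1]:
        credit[channel] += 0.2 * value / (n - 2)
    return credit

# Hypothetical journey for a $120 sale
journey = ["paid_social", "email", "organic_search", "branded_search"]
print(dict(position_based_credit(journey, value=120.0)))
```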

Even lift tests, arguably the gold standard, are limited. They’re accurate but expensive to run, and impractical to scale.
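
The math behind a lift test, at least, is refreshingly simple: compare conversion rates between an exposed group and a holdout. Here’s a back-of-the-envelope version with made-up numbers; a real study also needs proper randomization and a significance test.

```python
# A back-of-the-envelope incrementality (lift) calculation with invented numbers.
exposed_users, exposed_conversions = 50_000, 1_150
holdout_users, holdout_conversions = 50_000, 1_000

exposed_rate = exposed_conversions / exposed_users
holdout_rate = holdout_conversions / holdout_users

incremental = (exposed_rate - holdout_rate) * exposed_users
lift = (exposed_rate - holdout_rate) / holdout_rate

print(f"exposed rate:  {exposed_rate:.2%}")
print(f"holdout rate:  {holdout_rate:.2%}")
print(f"incremental conversions attributable to the ads: ~{incremental:.0f}")
print(f"relative lift: {lift:.1%}")
```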

All these tools help—but none give you the full picture.

And that’s the uncomfortable truth: you’re never going to get perfect clarity. That was a short-lived illusion of the past decade.

What we are going to see instead are partial, probabilistic insights. And the best you can do is layer methods, stress test assumptions, and make peace with the uncertainty.

In the tech world, “Redundant Array of Independent Disks” (RAID) is a common way to protect data from a single hard drive failing. Marketers need to take a hint from the IT crowd!

Keep calm and learn to love redundant tracking

When Meta’s attribution pipeline glitched in late 2024, panic set in. Reported conversions vanished. CPAs spiked and bid strategies tanked. For brands that depended entirely on Meta’s native tracking, it was chaos.

But some marketers kept their cool because their systems had a little redundancy built in.

They were tracking the same conversions with Meta’s pixel, Google Analytics 4, backend purchase logs, and server-side events. They didn’t rely on one report and instead trusted their process of triangulation to get what they needed.

Redundant tracking doesn’t eliminate problems, to be sure. But it gives you some much-needed context when something breaks.

If you have UTMs, they can help you track channel source even if platform-side attribution is off. Server-side calls cover conversions blocked by client-side ad blockers. And internal sales logs help you validate real outcomes when external tools get noisy.
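
If you want to make this cross-checking a habit, even something as crude as the sketch below helps: pull daily conversion counts from each source, treat backend orders as the closest thing to ground truth, and flag any source that drifts too far from it. The source names, counts, and the 15% threshold here are all placeholders.

```python
# A sketch of a daily cross-check across redundant tracking sources.
# All numbers and source names are placeholders.
DAILY_CONVERSIONS = {
    "meta_pixel": 118,
    "ga4": 127,
    "server_side": 131,
    "backend_orders": 134,  # internal order logs: closest thing to ground truth
}

REFERENCE = "backend_orders"
ALERT_THRESHOLD = 0.15  # flag anything more than 15% off the reference

reference_count = DAILY_CONVERSIONS[REFERENCE]
for source, count in DAILY_CONVERSIONS.items():
    if source == REFERENCE:
        continue
    deviation = abs(count - reference_count) / reference_count
    status = "INVESTIGATE" if deviation > ALERT_THRESHOLD else "ok"
    print(f"{source:>14}: {count:>4} ({deviation:.0%} off {REFERENCE}) -> {status}")
```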

The brands that rode out the Meta outage had systems in place to double-check the story. And when conversions looked low according to Meta, they kept their cool because they were able to cross-check with other sources, fix their messaging, and move forward with data they trusted.

But hey, it's 2025—this is just how you have to approach attribution now.

The smartest marketers aren’t looking for a single source of truth

Instead, they’re combining multiple attribution methods: GA4’s data-driven model for day-to-day decisions, MMM for high-level budget planning, and lift studies for validating big bets.

You don’t expect the models to agree. To be perfectly honest with you, I consider a bit of disagreement to be a signal. When GA4 shows a spike but MMM doesn’t, it’s time to dig into the finer-grained data. When a lift study confirms what your MTA suggests, you can feel pretty confident about your recent marketing choices.

This is triangulation—finding confidence not from any one system, but from where multiple imperfect systems overlap.

This also requires a bit of a mindset shift, and not an easy one. You need epistemic humility.

That is, you need to know that all attribution is a model. And as the old saying goes, “all models are wrong, but some are useful.”

The goal is not perfect precision. It’s approximate accuracy. You don’t need to be able to say “this ad drove 53 conversions”; you need to be able to say “this campaign likely generated good revenue relative to cost.”

The brands that thrive in 2025 are the ones that can say, with confidence: we don’t know everything—but we know enough to act.

What can you do about broken marketing attribution?

The gap between what we can track and what actually happens is wider than ever. Smart marketers are filling that gap with better systems, better thinking, and better questions.

They’re uploading offline conversions—tying in-store purchases and sales rep follow-ups back to ad clicks, closing the loop where pixels can’t.

They’re segmenting by cohorts, not just campaigns. Instead of obsessing over last-click ROI, they’re asking: what’s the long-term value of customers we acquired through this channel? Who sticks around? Who refers others?

They’re running lift tests regularly. They’re building first-party infrastructure like clean CRMs.

And they’re asking better questions. Not “which ad caused this sale?” but “is this campaign making people more likely to buy over time?” Not “how do we prove every conversion?” but “how do we improve the total number of conversions we see?”
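
On the cohort point in particular, the analysis doesn’t have to be fancy. Here’s a rough sketch with invented data: group customers by acquisition channel and compare 90-day revenue and retention instead of last-click ROI.

```python
# A small sketch of cohort-style analysis by acquisition channel. Data is invented.
import pandas as pd

customers = pd.DataFrame({
    "acquisition_channel": ["paid_social", "paid_social", "organic", "email",
                            "paid_social", "organic", "email", "organic"],
    "revenue_90d": [40, 220, 180, 95, 60, 310, 120, 150],
    "still_active_90d": [False, True, True, True, False, True, True, False],
})

cohorts = (
    customers
    .groupby("acquisition_channel")
    .agg(customers=("revenue_90d", "size"),
         avg_revenue_90d=("revenue_90d", "mean"),
         retention_90d=("still_active_90d", "mean"))
    .sort_values("avg_revenue_90d", ascending=False)
)
print(cohorts)
```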

This sounds technical—and it is. But much more important is that these changes reflect a mature understanding of what attribution can and can’t do these days.

Final thoughts

We used to have a sense of certainty when it came to marketing attribution. But we lived in a rare, weird time that has come and gone.

Tracking is patchy, models are probabilistic, and truth is harder to pin down.

But here’s the good news: you don’t need perfect data to make good decisions. You need good enough data, read with context, cross-checked for sanity, and interpreted by marketers who know how to think.

Redundancy isn’t wasted effort, and humility isn’t weakness. Together, they’re the foundation of reliable marketing strategy.

You don’t have to know everything. You just have to know enough to act with confidence.

Need help marketing your business?

Or just need someone to bounce ideas off of?

Book 30 minutes with me and we can chat!

(Yes, it’s free.)

Interested in learning more about this topic?

I wrote an op-ed on Ecommerce Fastlane that goes into more details about how you can set up your conversion attribution stack. It’s called How To Track Ecommerce Conversions In 2025.
