Most digital marketers shoot from the hip. Launch a campaign, tweak what feels off, and hope it works.

And don’t get me wrong—this can work. If you’re observant and you know how to use marketing tools, you can often make a fair sum of money for your employer or your clients doing this.

But in doing so, you have no clear hypothesis, no control group in case things go sideways…and no repeatable system if they go right.

The scientific method—yes, the same one you learned in middle school—can help you bring structure and clarity to this chaos.

Science isn’t just about lab coats and beakers. Strip away everything you’ve learned about Galileo and Darwin and Hawking, and what you’re left with is a way to ask sharp questions, test your assumptions (hypotheses), and analyze the results in a repeatable way.

And this right here is how you find out what really works.

This is not a new idea. Marketers were writing about scientific advertising as far back as 1923!

In this post, I’m going to show you how this process plays out in the field, including how it helped me untangle a super messy Google Ads account and figure out whether Beehiiv Boosts were worth the money.

But first, let me play schoolteacher for a moment…

What is the scientific method?

The scientific method is a looping process that you can use to take your curiosity and your hunches and refine them into proven knowledge backed by facts. I think of it as five steps.

  1. Ask a question.

  2. Gather information through observation.

  3. Come up with a hypothesis to explain what you see.

  4. Test the hypothesis with an experiment.

  5. Analyze and interpret the data.

When you’re done, you use step 5 to find new questions for step 1.

This might be the best feedback loop humanity’s ever invented for making ourselves smarter. To apply it to ads, you can:

  1. Ask: What do we want to improve? (Cost per conversion? Other metrics?)

  2. Observe: What was past ad performance like?

  3. Hypothesize: Come up with a falsifiable statement worth testing, such as “If I alter the landing page, the conversion rate will rise by 20% and the cost per conversion will drop.”

  4. Test: Create an A/B test, sending half of the traffic to version A of the landing page and half to version B (see the sketch after this list).

  5. Analyze and interpret: Write up a report, compare results against the baseline, share it with the team, and decide what to do next.
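Most ad platforms will handle the 50/50 split for you, but if you ever need to do it yourself (say, on your own landing pages), here’s a minimal sketch of a deterministic split in Python. The `assign_variant` helper and the visitor-id scheme are my own illustration, not any particular platform’s API:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Stable 50/50 split: the same visitor always sees the same page."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-1234"))  # same id, same variant, every time
```

Hashing the visitor id (rather than flipping a coin on every visit) keeps the experience consistent for each person, which keeps the test clean.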

Some may balk at the additional work that goes into documentation. But to me, this is the single easiest way to get clarity and structure.

Like what you’re reading so far? Subscribe for more!

Why use the scientific method in advertising?

Marketing feels like art, and in many ways, it is. But it behaves like science. And I’ve found this to be particularly true in numbers-heavy marketing work such as advertising and cold email.

Live ad environments are packed with variables: copy, creative, audience, budget, timing, and placement. Each is simple enough to understand on its own, but together they can interact in weird, unpredictable ways.

So why rely on your gut alone?

Using the scientific method can help you cut the bias out of your decision-making and remove the temptation to make emotional decisions based on short-term performance. You can test your ideas deliberately and systematically, and understand what’s working and what’s just noise.

Also consider: the extra dollop of rigor can help you find technical problems that might otherwise go unseen, like broken conversion attribution.

It’ll help you with office politics, too. You can say to your boss or your client, “This headline consistently outperformed all others across three audiences,” instead of just “This ad did well.”

And if you’re running a team of marketers, you should know that following the scientific method is a great way to build accountability. Instead of relying on end results (or, alternatively, vibes) to judge whose work is solid, you can see their methods and their attention to detail.

Even if your budget is tight or time is of the essence, this is still the way to go. You can break uncertainty into a series of small, manageable decisions. And if you do it consistently, you build up a body of evidence that improves every campaign you run going forward.

How do you apply the scientific method to advertising?

I want to tell you exactly what I mean when I say “apply the scientific method to advertising.”

To do this, open up a Google or Word doc and start taking notes as you work on your ad campaigns. Below, I’ll give you a list of steps to follow as you write your document.

1. Define the goal.

Start with a single, clear question: What are you trying to improve?

Maybe you want more conversions. Maybe you’re chasing a lower cost per acquisition. Or maybe your CTR is flatlining and you’re hoping to revive it.

The exact question doesn’t need to be big. It just needs to be specific enough to measure.

Keep it simple and focus on just one metric you can influence within the bounds of a single experiment. That focus will guide everything else.

2. If not starting from scratch: record current campaign status.

If you’ve run ads before, don’t waste that data. Start by documenting where things stand:

  • Total spend

  • Daily budget

  • Campaign goal (e.g. leads, purchases, awareness)

  • CPC, CTR, CPA, ROAS

  • Impressions and conversions

Drop all this into a table within your document. Use rows for campaigns or ad groups, columns for each metric.

Don’t feel the need to make it fancy. You just need a snapshot of what’s working and what isn’t.

I mean it when I say it doesn’t have to be fancy. In my own tracker, the only formatting is two different shades of blue for the data!
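If you’d rather build that snapshot in code than in a spreadsheet, here’s a minimal sketch using pandas. Every campaign name and number below is made up purely for illustration:

```python
import pandas as pd

# Hypothetical snapshot: one row per campaign, one column per metric
snapshot = pd.DataFrame([
    {"campaign": "Brand Search", "spend": 1200.00, "daily_budget": 40.00,
     "cpc": 1.10, "ctr": 0.052, "cpa": 85.00, "conversions": 14},
    {"campaign": "Competitor KWs", "spend": 950.00, "daily_budget": 30.00,
     "cpc": 2.40, "ctr": 0.031, "cpa": 190.00, "conversions": 5},
])

print(snapshot.to_string(index=False))
```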

3. Make observations.

Now look at your data with fresh eyes. What’s obvious?

  • One audience might convert well, but cost too much.

  • Your highest-CTR campaign might not bring in leads.

  • Some keywords might eat budget but never convert.

You don’t need to do deep analysis here. Even a quick scan can tell you a lot.

Jot down anything that seems off, surprising, or promising. You’re setting the stage for a test.
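If your campaign data is already in a spreadsheet export, a quick programmatic scan can surface the same red flags. A minimal sketch, again with made-up numbers:

```python
import pandas as pd

# Hypothetical export: spend, CTR, and conversions per campaign
data = pd.DataFrame([
    {"campaign": "Brand Search", "spend": 1200.00, "ctr": 0.052, "conversions": 14},
    {"campaign": "Competitor KWs", "spend": 950.00, "ctr": 0.031, "conversions": 0},
    {"campaign": "Display Retargeting", "spend": 400.00, "ctr": 0.061, "conversions": 1},
])

# Campaigns that eat budget but never convert
print(data[(data["spend"] > 100) & (data["conversions"] == 0)])

# High-CTR campaigns that still don't bring in leads
print(data[(data["ctr"] > 0.04) & (data["conversions"] < 5)])
```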

4. Form a hypothesis.

Use your observations to write a single, testable prediction. Like:

  • “If I change the ad headline to include a price, CTR will increase.”

  • “If I use phrase match instead of broad match, CPA will go down.”

Keep it narrow. Don’t test six variables at once. Think like a scientist: isolate one change, and predict the outcome.

5. Define the experiment.

With your hypothesis in hand, sketch the boundaries:

  • What exactly are you changing?

  • What stays the same?

  • How long will you run the test?

  • What does success look like?

Example: You might run two identical ads with different headlines, spend $100 on each, and compare the CTR after 10 days. That’s your experiment.
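One way to make those boundaries concrete is to write the experiment down as structured data before you launch. A minimal sketch; every field name and value here is hypothetical:

```python
# A simple pre-launch experiment record for your notes doc
experiment = {
    "hypothesis": "Adding the price to the headline will increase CTR",
    "change": "headline B includes the price; headline A does not",
    "held_constant": ["audience", "budget", "placement", "landing page"],
    "budget_per_variant_usd": 100,
    "duration_days": 10,
    "primary_metric": "ctr",
    "success_criterion": "variant B's CTR beats variant A's at the end of the window",
}

print(experiment["success_criterion"])
```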

6. Run the experiment.

Launch and walk away.

Don’t check it every day, and don’t fiddle with it unless something goes horribly wrong and it’s burning money.

You want clean data, not the kind that gets muddied by endless mid-test tweaks.

7. Record your observations.

Check in while the test runs, but not obsessively. Are you seeing results early? Are impressions slowing down? Is one variant clearly leading halfway through?

Even quick notes—like “CTR jumped after 3 days” or “impressions dropped over the weekend”—will help later.

8. Analyze results.

Once the test ends, pull your numbers and compare them to your baseline. Was your hypothesis right? Did the change actually move the needle?

Don’t just look at top-line performance. Look at the context. Maybe CTR rose but CPA got worse. Or maybe the new ad worked great for one audience but not another.
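One thing worth checking before you declare a winner: could the difference just be noise? Here’s a minimal sketch of a two-proportion z-test on CTR. The click and impression counts are hypothetical, and a proper analysis might use a dedicated stats library instead:

```python
import math

def two_proportion_ztest(clicks_a, imps_a, clicks_b, imps_b):
    """Test whether two CTRs differ by more than chance would explain."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

p_a, p_b, p = two_proportion_ztest(clicks_a=180, imps_a=5000,
                                   clicks_b=240, imps_b=5100)
print(f"CTR A: {p_a:.2%}, CTR B: {p_b:.2%}, p-value: {p:.4f}")
```

A small p-value (conventionally under 0.05) suggests the lift is probably real; a large one means you likely need more data before drawing conclusions.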

Then you repeat steps 3 through 8.

The loop is simple, and that’s exactly why it works.

But this is a bit abstract, so let me give you an anonymized example from the field.

Here’s a real example of how I used the scientific method to improve cost per qualified lead on Google Ads.

One client I worked with had underperforming Google Ads, and conversion tracking had been broken for months. It was so bad that we couldn’t even manually tally form submissions with UTMs. We effectively had no baseline for ad performance with respect to qualified leads.

So after fixing conversion tracking, we started rigorously testing with the basics. We documented each campaign’s status—CTR, CPC, conversion rate (raw leads), spend—and wrote specific hypotheses. For example: “If we switch to lower-competition keywords in electronics, CPC will drop and the number of leads will rise.”

From there, we ran controlled tests. We paused unproductive campaigns, rewrote ad copy, and changed keyword match types. We experimented with negative keywords, and we reallocated budget toward what showed promise.

We applied changes step by step so we could see how the ads would respond.

Initially, the cost per qualified lead was $850, which was way too high. So we set a target of $250 or less.

Within six weeks of rigorous testing, CTR rose from 3.6% to 5.6%, suggesting that the ads were lining up better with search intent. Overall cost per qualified lead dropped below $300 by the end of the initial six-week testing period. Toward the end, three campaigns were outperforming qualified-lead benchmarks (at $93.60, $115.09, and $168.00 respectively), and another was high but not astronomically so ($369.16).

We’re still iterating, but the trend is clear: we’re closing in on the $250 target.
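For what it’s worth, the metric itself is simple arithmetic: spend divided by qualified leads over the same window. A quick sketch with made-up numbers (not the client’s actual data):

```python
def cost_per_qualified_lead(spend: float, qualified_leads: int) -> float:
    """Spend divided by qualified leads over the same reporting window."""
    return spend / qualified_leads

# Hypothetical campaign: $1,680 spent, 18 qualified leads
print(f"${cost_per_qualified_lead(1680.00, 18):.2f}")  # $93.33
```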

Here’s another example of using the scientific method to vet Beehiiv Boosts.

I’m currently experimenting with Beehiiv Boosts to grow this very blog and newsletter that you’re reading right now. The way the system works is that you pay Beehiiv for every verified subscriber you earn from other, similar newsletters.

My hypothesis is that these subscribers will be reasonably engaged, with an open rate of at least 40%. And I’m testing this with a small, deliberately capped budget of $25 per week.

In my initial experiment, still ongoing, I set my target cost per verified subscriber at $2.50. Beehiiv’s system lets other newsletters apply to promote you, but you have full approval rights. That’s a huge plus for brand protection. I’ve already declined quite a few crypto and AI-heavy publications.
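At those numbers, the weekly cap bounds the experiment neatly. A quick sketch of the arithmetic, with hypothetical open data for the 40% hypothesis check:

```python
weekly_budget = 25.00
target_cost_per_subscriber = 2.50

# At most 10 new subscribers a week if every one comes in at target cost
print(weekly_budget / target_cost_per_subscriber)  # 10.0

# Hypothetical engagement numbers for checking the open-rate hypothesis
opens, delivered = 9, 20
print(f"{opens / delivered:.0%} open rate vs. 40% target")  # 45% open rate vs. 40% target
```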

That said, there are caveats. You need to preload funds into your Beehiiv wallet, and those deposits aren’t refundable. Leads marked as “pending” tie up funds for 10–17 days, and while most emails look real, I don’t yet know how they'll perform. I also noticed a few questionable subscribers slipping through, so vetting who promotes you really matters.

I’ve also learned that restricting to U.S.-based emails dramatically improved quality.

So far, it feels promising, especially compared to Facebook or LinkedIn acquisition costs. But I’m holding off on any recommendations until I’ve seen how these leads behave long term. More to come as the experiment continues.

Even small bets like this can teach you something, but only if you track the right signals and give them time to reveal themselves.

Final Thoughts

Perfection is impossible. But consistent, structured learning is absolutely within your grasp.

With every single marketing test you run, your instincts will get a little keener. You’ll collect a whole lot of data, and you’ll get a clear picture of what really works. And over time, the steady improvement you see will compound.

The process might feel “extra” at first. Documentation feels like a chore—but it’s one of the highest-leverage habits you can build.

For small teams and solo operators, the benefits of treating marketing as science are even greater. You don’t have the luxury of wasted spend or murky results.

Clarity is leverage. And the age-old scientific method is an excellent way to get clarity on demand.

Need help marketing your business?

Or just need someone to bounce ideas off of?

Book 30 minutes with me and we can chat!

(Yes, it’s free.)
