ROAS Explained: How We Got a Campaign to a 17x Return
Seventeen.
Not percent. Seventeen dollars back for every dollar spent.
I’m going to be slightly annoying and tease a caveat: a ROAS number is only as real as the measurement system underneath it. I’ll explain why a little further on, but keep in mind that everything I’m about to tell you depends entirely on accurate ad attribution.
Now that my conscience is clear, I present the flashy details.
We recently ran a campaign that moved from a pre-optimization ROAS of 8 (already pretty good) to a post-optimization ROAS of 17.
I promise my point here is only minimally to brag. What I want to do with this article is teach you, in broad strokes, how to copy my results: I'll explain what ROAS is, what it is not, and why most ROAS-related advice you see online false-starts in both approach and conception.
What ROAS is, and why it only works when your tracking is honest
ROAS stands for “Return on Ad Spend.”
It’s the revenue your ads generated divided by the amount you spent running them. If you spend $1,000 and you can reliably attribute $2,000 in revenue to that spend, you have a 2x ROAS. Importantly, ROAS is ad revenue, not business profit. If you want real profit ROI, you still technically have to subtract your cost of goods and other operational costs, but for educational purposes here, I’m abstracting those factors away.
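For the spreadsheet-averse, the arithmetic is one line of Python (the function name is mine, not any platform's API):

```python
def roas(attributed_revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend: attributed revenue per dollar spent on ads."""
    return attributed_revenue / ad_spend

# $1,000 spent, $2,000 reliably attributed -> 2x ROAS
print(roas(2000, 1000))  # 2.0
```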
While ROAS is easily grasped conceptually, its implementation is often less so.
Insightful ROAS metrics depend on three things being true at the same time:
You can track conversions correctly.
You can assign a value to those conversions.
The platform is receiving that value in a way that matches what happened in the real world.
If any of those are compromised, ROAS becomes an impressive-looking number that tells you nothing of use.
Why I’m So Annoying About Ad Attribution
A lot of teams treat purchase value like one metric among many. It is not a metric; it is the metric. It can be a pain to set up if you don't have technical web experience, but without it, you are effectively flying blind.
Bad data is just as bad as no data. If a platform is receiving distorted purchase values (because of broken event wiring, bad or incomplete API packet fields, missing taxes or shipping, duplicate events, partial data, offline conversions not being uploaded, etc.), you can end up optimizing on a mirage. You will “scale” what only looks efficient because the measurement system is undercounting cost or overcounting revenue.
For us at CaJu, the tracking layer is part of the creative process because it determines whether your gut feelings or hunches about art direction have any basis in reality. Justifying your work becomes much easier if you can show someone, to the dollar, how much money you’ve made for them.
I want to get ahead of an objection here. There are industries, like the one we work in, that sell a service or product with relatively few repeat customers or intrinsically high churn, where you cannot logically attribute a single dollar amount to an online action.
The trick is to rework your lead generation pipeline into a value model.
If you are in real estate, digital marketing, any other high-ticket service, B2B, or anything else where your conversions are leads, you can estimate attribution value using expected value:
Lead Value = (Lead-to-close rate) × (Average gross profit per closed deal)
A loose example: if roughly 3% of your leads close, and the average gross profit per deal is $10,000, then an average lead is “worth” $300 in expected gross profit. That is the number you can feed back into your reporting and optimization, realistically by importing offline conversion outcomes, so the platform learns which leads turned into real revenue.
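That expected-value calculation, using the numbers from the example above (names are illustrative, not any CRM's schema):

```python
def lead_value(lead_to_close_rate: float, avg_gross_profit: float) -> float:
    """Expected gross profit of a single lead."""
    return lead_to_close_rate * avg_gross_profit

# ~3% of leads close; average gross profit per closed deal is $10,000
print(lead_value(0.03, 10_000))  # 300.0 -> each lead is "worth" $300
```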
If your business is, say, a bar, you may need to get creative about how you’re identifying which patrons are there because of one of your ads or forms. My advice would be to return to basics: “mention this ad” or “scan this code for a coupon” in return for a small discount. Use different codes and coupons for different ads so you know which brings in which patrons. Then just make sure to capture the total value of those patrons’ purchases and upload them later as offline attributions to the appropriate ads.
Two small, final, and mighty notes on attribution.
First, you have to be conservative. Inflate your attribution value, and you will convince yourself you have a money printer when you may have a money furnace.
Second, the goal is not to be perfectly correct on day one. The goal is to have a model that becomes more accurate as you collect outcomes.
What is a “good ROAS”?
The standard answer is: it depends on your margins, your goals, and your industry.
The practical answer is: for most businesses, anything above 2x ROAS, so long as it remains steady, can be scaled to the point where it becomes “good.”
It does depend, though. A 2x ROAS can be excellent for a business with strong margins and high lifetime value. It can be unprofitable for a business with thin margins, high fulfillment costs, or heavy overhead. This is where ROAS can get sloppy because many people confuse “revenue back” with “profit earned.”
A useful way to ground this is break-even ROAS. “Break-even ROAS” means the minimum ROAS you need so your ads do not lose money. Your “margin” is the part of each sale you keep after product costs and other costs that scale with sales. If you keep 40% of each sale (40 cents per $1), then you need $2.50 in revenue for every $1 in ad spend to break even (2.5 ROAS). If you keep 25%, you need 4.0 ROAS. If you keep 70%, you need about 1.43 ROAS.
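The break-even numbers above all come from one formula, break-even ROAS = 1 / margin; a quick sanity check in Python:

```python
def break_even_roas(margin: float) -> float:
    """Minimum ROAS at which ads stop losing money: 1 / contribution margin."""
    return 1 / margin

for margin in (0.25, 0.40, 0.70):
    print(f"{margin:.0%} margin -> {break_even_roas(margin):.2f}x break-even ROAS")
```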
Now that said, if I were a cautious educator, I would say that the ROAS that works for you takes a lot of time and study to determine, and that'd be true. However, I'm only cautious when it's advantageous to be so, and for ads, 90% of the time, if your ad is doubling the money you're putting into it month after month, it's scalable. So caution be damned, 2x ROAS is good.
More on that scaling method… right now!
The method, sans the magic
I’m going to try to describe the way I run ad optimization at CaJu as best I can while still retaining our trade secrets.
In a word, our method is a loop.
Research, content, test, optimize, scale, analyze, iterate.
The loop prevents treating personas, SEO keywords, ads, social media content, and performance metrics as isolated artifacts. They are better understood as each being a kind of measurement instrument pointed at a market.
Step 1: Persona research, then a geography hypothesis
We start with persona research. If a business has customer data, we use it. Purchases, booking logs, social media demographics, email engagement, CRM notes, anything. Our goal is to base customer types on real data points as much as possible.
Rather than testing persona assumptions, we turn persona data into a physical hypothesis. Where are these people, literally? What neighborhoods, what cities, what travel patterns, what daily constraints, what devices, what time windows?
In the past, I’ve also looked up publicly available census data on locations where customers cluster, to get a better idea of the average person in that area. For you data heads out there, I did that using the tidycensus package in R. If your customer or existing audience data contains a geolocation feature, merging the two data sets using that feature as a primary key can be highly revealing.
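The same kind of join is easy in any language; here is a minimal Python sketch where every field and value (ZIP codes, LTVs, census figures) is made up for illustration:

```python
# Hypothetical customer records, keyed by ZIP code.
customers = [
    {"customer_id": 1, "zip": "30301", "ltv": 420.0},
    {"customer_id": 2, "zip": "30301", "ltv": 180.0},
    {"customer_id": 3, "zip": "30309", "ltv": 95.0},
]

# Hypothetical census profile per ZIP (e.g. pulled via tidycensus and exported).
census = {
    "30301": {"median_income": 68_000, "median_age": 34},
    "30309": {"median_income": 91_000, "median_age": 31},
}

# Join on the geolocation key: enrich each customer with their area's profile.
enriched = [{**c, **census.get(c["zip"], {})} for c in customers]
print(enriched[0]["median_income"])  # 68000
```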
Maybe I’ll write more about that process in a future article.
Step 2: Content marketing as a free testing lab
Before we get paid media to do heavy lifting, we use organic media as the sandbox.
You should think of organic posts, partially, as cheap experiments. They let you test:
Which visuals resonate.
Which frictions are top of mind.
Which phrasing earns trust.
This not only gives you some early indications into what gets people to click and engage, but also teaches you how your audience interprets the brand as a sign system; what “world” or community they imagine they are entering when they buy.
And because we track content performance using the scientific content framework I outlined in a previous article, we can derive quantitative signals from qualitative data, which we use to inform our paid strategy rather than keeping them siloed in the social media department.
I make a habit of telling people, usually unsolicited, that this is based on a statistical axiom that is important to be aware of. It’s an assumption I cannot prove with data, but it’s a leap of faith that enables the previous method to work: If your social media posts are authentically representative of your brand aesthetics, values, and offer, then your followership on that social media can be sampled as if it were equivalent to your customer population (the audience you’re trying to target with your ad campaigns).
This is an easily overlooked and technical statistical point, but it’s an important one to grasp consciously. If something resonates with your followers and your followers follow you because they like your posts, and your posts are an accurate representation of your business, then your followers are likely also your customers, and ads that look like your social media posts will resonate with that demographic.
Step 3: Paid testing designed to produce clean comparisons
Once we have a creative direction, a handful of plausible ad angles, and a clear audience hypothesis, we build a testing set.
The key is that the set is structured to spend a minimum, evenly distributed amount across ads so we can compare them under similar conditions. We start with a small testing budget, deliberately.
This takes more discipline than you might imagine. The key is to go slow, to experiment in an environment where the stakes are low, and to watch for microscopic success signals in the small amounts of data.
Step 4: Reading the early signals without lying to ourselves
Small budgets often produce sparse conversions, but plenty of other metrics.
We run performance through a “near-conversion” metric evaluation algorithm I developed in order to look for early signals that correlate with eventual winners: click-through behavior, landing page engagement quality, frequency issues, and other “is this ad doing something?” indicators.
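My actual algorithm stays in-house, but the general idea can be sketched as a weighted score over pre-conversion signals. Every metric name and weight below is illustrative, not the real thing:

```python
# Toy "near-conversion" score: higher is a stronger early signal.
# Weights and metric names are made up for illustration.
WEIGHTS = {"ctr": 0.4, "lp_engagement": 0.4, "frequency_penalty": -0.2}

def near_conversion_score(ad: dict) -> float:
    """Weighted sum of normalized pre-conversion signals for one ad."""
    return sum(WEIGHTS[metric] * ad[metric] for metric in WEIGHTS)

ads = [
    {"name": "A", "ctr": 0.8, "lp_engagement": 0.7, "frequency_penalty": 0.1},
    {"name": "B", "ctr": 0.3, "lp_engagement": 0.2, "frequency_penalty": 0.6},
]
ranked = sorted(ads, key=near_conversion_score, reverse=True)
print(ranked[0]["name"])  # -> A is the potential winner
```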
Even so, it’s important to stay skeptical. This helps me identify “potential-winners,” not the horses I want to bet the farm on.
Step 5: “Trimming the fat” a few times
So I gather the info from the research, run the ad sets, and look for micro-signals of potential winners. Once I’ve identified those potential winners, it’s time to cut the others loose.
I create a series of ads targeting different pain points in different ways, all informed by the target audience and their aesthetic preferences. For each ad, I test a couple of design variations along with the maximum allowable headline, primary text, and description options. For each pain-point “angle,” I group ads together into sets. If I’m still willing to experiment with audience targeting, I aim the sets at different audiences. This is also the level where minimum and maximum spending need to be set equally across sets, which makes performance easier and more accurate to interpret. I collect all the ad sets under a single campaign that receives a lifetime budget, divided evenly among the sets within it.
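The even split is the easy part; what matters is that no set can outspend another during the test. A sketch, with set names invented for the example:

```python
# Sketch: one campaign's lifetime budget divided evenly across angle-based sets.
def split_budget(lifetime_budget: float, ad_sets: list) -> dict:
    """Give every ad set an identical share so performance is comparable."""
    per_set = lifetime_budget / len(ad_sets)
    return {name: round(per_set, 2) for name in ad_sets}

sets = ["price_angle", "convenience_angle", "status_angle"]
print(split_budget(300.0, sets))  # each set gets 100.0
```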
I run the campaign for a week or so and look over the results.
Ad sets with high reach but little movement on any other metric can be cut immediately without further consideration. Duplicate the campaign sans the underperforming sets.
That’s your first fat trimming. You should now have fewer ads in your set. Run it again over the same period of time.
Now, all of your ads should have conversions on them. This is where ROAS really starts to matter. Cut all ads within your set with no conversions; furthermore, cut all but your top 1 or 2 performing ad sets.
That’s your second fat trimming. You should start to notice your ROAS, click-through rates, and CPC rise even if your reach declines. This is good. Remember, you pay for all the eyeballs that see your ads, even the ones who don’t click. It’s okay if fewer people overall see your ads, so long as the ones who do convert at a high rate.
This is also a good time to reassess your audience delivery metrics. If you notice one of your sets is being delivered equally to men and women, but women convert far more often, you might want to adjust the target audience to only target women.
The age distribution of conversions should look roughly bell-shaped, which is to say most of your conversions will hover around an average value somewhere in the middle. Statistically speaking, here you want to “trim the tails.” Reduce delivery to age ranges that receive impressions but rarely convert. That likely means raising the minimum age of your target audience and lowering the maximum.
Check the ad delivery location info if available on your platform. The same logic applies. If there are areas where delivery rates are high but conversion rates are low, you may want to exclude those areas.
The trick here, again, is to “trim the tails.” Only exclude the extremes where reach is relatively high and conversions are nonexistent. You want to cut the extremes without cutting too much into your total conversions. If your conversions run 1:3 men to women, keep delivering to both. But if 1 out of 1,000 conversions comes from the 18-24 age range, definitely exclude those youngsters.
Repeat this logic across location, device, platform, placement, age brackets, and gender identity. The point is not to stereotype, but simply to stop funding low-conversion segments based exclusively on the data in front of you.
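The tail-trimming rule across all those segment types can be sketched as one filter: exclude only segments with meaningful reach and near-zero conversion rates. The thresholds below are illustrative, not prescriptive:

```python
# "Trim the tails": flag segments with real reach but near-zero conversions.
# min_impressions and max_cvr are illustrative thresholds, not recommendations.
def segments_to_exclude(segments, min_impressions=1000, max_cvr=0.001):
    excluded = []
    for seg in segments:
        cvr = seg["conversions"] / seg["impressions"]
        if seg["impressions"] >= min_impressions and cvr <= max_cvr:
            excluded.append(seg["name"])
    return excluded

age_brackets = [
    {"name": "18-24", "impressions": 5000, "conversions": 1},
    {"name": "25-34", "impressions": 8000, "conversions": 60},
    {"name": "35-44", "impressions": 7000, "conversions": 55},
    {"name": "55+",   "impressions": 400,  "conversions": 0},  # too little reach to judge
]
print(segments_to_exclude(age_brackets))  # ['18-24']
```

Note that the low-reach "55+" bracket survives: with only 400 impressions, there isn't enough data to condemn it yet.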
Make the adjustments and cuts. Duplicate the campaign and run it yet again.
This is the part that makes all the difference. The goal now is to keep the existing ad sets within the campaign, but to make the sets better by trimming the fat internally.
This is the third and final fat trimming. Open up the sets and look at the ad elements themselves. If you were A/B testing creative design, check if one worked significantly better than the other. Check if there are headlines or primary texts that are top conversion performers. Check if there are aspect ratios that the platform simply isn’t delivering. Cut all those.
Step 6: Scale, baby, scale
At this point, you should have an optimized campaign, without a speck of fat on it, that you can run on the same exact budget as the first round of tests, but that will perform far better. Since you’ve gotten rid of all the sets and ad elements that don’t convert, you won’t be paying for the platform to deliver them. That lets the platform use the same amount of resources much more efficiently.
That is how you take a campaign from a ROAS of 2x or 3x to 15x or 20x.
When I get to this point, I rerun the new optimized version of the campaign one last time with the same budget as all the other tests to confirm an improved ROAS rate. If ROAS doesn’t improve, I simply go back one round and reassess.
If ROAS does indeed improve, then you’re off to the races. Over the course of the testing, due in no small part to working within a low-spend environment, you should have already surpassed a ROAS of 2x. By the time you’re done optimizing the campaign, you should far exceed that.
This means it will make several times more than it costs to run. That means it will not only pay to run again in the following cycle, but it will bring in enough returns to increase the budget size.
Step 7: Recognizing Too Much of a Good Thing
No good thing lasts forever.
ROAS almost always declines as you scale because audiences saturate, frequency rises, and the offer stops feeling new. New purchases eventually start to run up against your churn rates and cooldown periods.
If you monitor ROAS closely, you can anticipate its decline rate and slow the rate of budget growth to match until you settle into an equilibrium or decide to end the campaign’s run. The nice thing about this method is that by this point, you’ve already made more than enough to fund a new round of testing.
This is why conversion value accuracy matters so much. Compounding only compounds if the measurement is conservative and credible. If you are generous with attribution, you run the risk of scaling a fantasy.
If you estimate conversion value conservatively (where you don’t have precise data) and you still clear your break-even ROAS with margin, you have room to scale responsibly.
Last word, same as the first word
The job of optimization is to concentrate spending on the subset of the market for whom your story resonates.
Do not neglect the research and content phases. The better ROAS you start with, the better ROAS you end with. If you only optimize based on metrics the ad platform provides, you are trapped in what the platform can measure.
Platforms can tell you which creative got a higher CTR. They cannot tell you why an aesthetic feels trustworthy in your category, or what cultural cues your customers find revolting.
Social content, done honestly, gives you that layer. It can be free focus testing for your paid media if done skillfully.
If you want some personalized advice on your ad campaigns or conversion setup, reply, comment, or DM with your industry and the platform (Meta, Google, or both), plus your rough margin and your conversion type (purchase vs lead). I’ll tell you what a “good ROAS” means for your business model, and how I think you should do it.
And with God as my witness, I will do it for free.
Until next time, stay fresh.
- Casey