
Manual CPC vs Target ROAS Experiment

We Ran Two Bid Strategies Back-to-Back on The Client’s Shopping Campaign

Here’s What Happened… 

Every PPC manager eventually faces the same crossroads: stick with the control you know, or hand the wheel to Google’s algorithm and trust it to find conversions more efficiently than you can manually. For the moving and relocation brand The Client, that question wasn’t hypothetical — we built an experiment to answer it with real spend and real data.

The setup was deliberately clean: 18 days, two phases, one shopping campaign. No changes to the product feed, targeting, or budget between phases. The only variable was the bid strategy. 

Why This Experiment Needed to Happen

The conventional wisdom in PPC circles is well-established: Manual CPC gives you control when you’re flying blind on data; Target ROAS hands the optimisation to Google’s machine learning once you’ve accumulated enough signal. But conventional wisdom and campaign reality don’t always align neatly.

The challenge is the transition itself. Switch too early, and Target ROAS has nothing to learn from. Switch too late, and you’ve burned budget on inefficient manual bids longer than necessary. The question for The Client wasn’t just which strategy is better — it was what happens when you sequence them deliberately.

Research from the PPC community supports a phased approach. BrightBid’s analysis of Shopping campaigns suggests Target ROAS performs best once a campaign has accumulated 30+ conversions per month; below that threshold, manual bidding typically outperforms it. Reddit’s PPC community echoes this: manual bidding for new accounts to gather conversion data, then automated strategies for profitability once the data foundation is solid.
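That rule of thumb reduces to a simple gate. A minimal sketch (the 30-conversion threshold is BrightBid’s figure; the function name and cutoff logic are ours, not anything Google Ads exposes):

```python
MIN_MONTHLY_CONVERSIONS = 30  # BrightBid's suggested threshold for Shopping campaigns

def ready_for_target_roas(conversions_last_30_days: float) -> bool:
    """Heuristic gate: stay on Manual CPC until the campaign has
    accumulated enough conversion signal for the algorithm to learn from."""
    return conversions_last_30_days >= MIN_MONTHLY_CONVERSIONS

print(ready_for_target_roas(12))  # False: keep gathering data on Manual CPC
print(ready_for_target_roas(34))  # True: enough signal to hand bidding over
```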

The Experiment Structure

Phase 1 — Manual CPC (Aug 3–11)

The first nine days ran on Manual CPC. The goal wasn’t to maximize performance — it was to build conversion data. We monitored CPC, conversion rate, total conversions, and conversion value while keeping every other variable locked. This phase was the foundation.

Phase 2 — Target ROAS (Aug 12–20)

On day ten, we switched to Target ROAS, setting the target based on the baseline ROAS established in Phase 1. Product feed, targeting, and budget remained unchanged. From here, we let Google’s algorithm do its work — bidding higher for queries it predicted would convert at a higher value, and pulling back on those it didn’t.
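One way to derive that starting target from the Phase 1 baseline (the write-up doesn’t state the exact target used, so the figures below are illustrative, pulled from the results table):

```python
# Phase 1 totals, as reported in the results table
phase1_conversion_value = 10_751.26  # ₹
phase1_cost = 1_584.57               # ₹

# Baseline ROAS: revenue per rupee of ad spend
baseline_roas = phase1_conversion_value / phase1_cost

# Google Ads expresses Target ROAS as a percentage of spend,
# so a ~6.78x baseline becomes a ~678% target
target_roas_pct = round(baseline_roas * 100)
print(target_roas_pct)  # 678
```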

| Metric | Phase 1: Manual CPC | Phase 2: Target ROAS | Change |
|---|---|---|---|
| Clicks | 2,483 | 645 | −74% |
| Impressions | 106,079 | 25,780 | −75.7% |
| CTR | 2.34% | 2.50% | +6.8% ✅ |
| Avg. CPC | ₹0.64 | ₹1.11 | +73.4% |
| Total Cost | ₹1,584.57 | ₹714.56 | −54.9% ✅ |
| Conversions | 6.84 | 6.93 | +1.3% ✅ |
| Conversion Rate | 0.28% | 1.07% | +282% ✅ |
| Conversion Value | ₹10,751.26 | ₹15,994.36 | +48.8% ✅ |
| ROAS | 6.78× | 22.38× | +230% ✅ |
| Cost per Conversion | ₹231.60 | ₹103.14 | −55.5% ✅ |
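The derived metrics above follow directly from the raw figures, so they can be sanity-checked. A quick verification in Python (values copied from the table):

```python
def roas(conversion_value: float, cost: float) -> float:
    """Return on ad spend: revenue generated per unit of spend."""
    return conversion_value / cost

def pct_change(before: float, after: float) -> float:
    """Relative change between the two phases, in percent."""
    return (after - before) / before * 100

# Raw figures from the experiment
phase1_roas = roas(10_751.26, 1_584.57)
phase2_roas = roas(15_994.36, 714.56)

print(round(phase1_roas, 2))                        # 6.78
print(round(phase2_roas, 2))                        # 22.38
print(round(pct_change(phase1_roas, phase2_roas)))  # 230
```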

What the Data Is Actually Telling Us

The headline numbers are striking, but the story underneath them is more interesting than a simple “automated beats manual” conclusion.

Traffic volume dropped dramatically — 74% fewer clicks, 75.7% fewer impressions. At face value, that looks alarming. But look at what happened to everything else. With 75% of the traffic gone, conversion value increased by nearly 49%. That’s not an efficiency improvement — that’s a fundamentally different quality of traffic.

Target ROAS didn’t just bid smarter. It actively filtered. Google’s algorithm, armed with nine days of conversion data, identified the search queries and auction moments most likely to produce high-value conversions — and stopped wasting budget everywhere else. The result was a campaign running at a fraction of the traffic volume but generating substantially more revenue.

The CPC increase (+73.4%) deserves context too. Manual CPC was keeping per-click costs low, but low CPC is only valuable if those clicks convert. When you’re paying ₹0.64 for clicks that convert at 0.28%, you’re not saving money — you’re just failing cheaply. At ₹1.11 per click with a 1.07% conversion rate, every rupee is doing considerably more work.
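The trade-off described above is just CPC divided by conversion rate, i.e. the effective cost of buying one conversion. A minimal sketch using the rounded figures from the table:

```python
def effective_cost_per_conversion(cpc: float, conversion_rate: float) -> float:
    """What one conversion actually costs: price per click
    divided by the fraction of clicks that convert."""
    return cpc / conversion_rate

# Phase 1: cheap clicks that rarely convert
manual = effective_cost_per_conversion(0.64, 0.0028)   # ≈ ₹228.57
# Phase 2: pricier clicks that convert nearly 4x as often
target = effective_cost_per_conversion(1.11, 0.0107)   # ≈ ₹103.74

print(round(manual, 2), round(target, 2))
```

The small gap versus the table’s ₹231.60 and ₹103.14 comes from feeding in the rounded CPC and conversion-rate figures; the direction and magnitude of the difference are the same.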

Perhaps the most telling number: conversions held almost exactly flat (6.84 vs. 6.93) while cost dropped by more than half. The campaign maintained its conversion volume while spending 55% less. That’s the compound effect of better targeting and better bids working together.

The Verdict

The hypothesis was confirmed. Transitioning from Manual CPC to Target ROAS after establishing a baseline data period improved every meaningful performance metric — ROAS up 230%, cost per conversion down 55.5%, conversion value up 48.8% — all while maintaining the same conversion volume on less than half the original spend.

It’s worth being explicit about what made this work: the sequencing. Jumping straight to Target ROAS on day one, with no conversion history, would likely have produced very different results. Phase 1 wasn’t just data collection; it was investment in the algorithm’s ability to optimize in Phase 2. The two phases worked together, not in isolation.

What Comes Next for The Client

With ROAS at 22.38×, there’s significant room to scale budget while remaining highly profitable; the efficiency unlocked in Phase 2 creates headroom that didn’t exist before. The next steps are to test different ROAS targets to find the optimal balance between volume and efficiency, refine product-level segmentation to double down on the highest-value SKUs, and maintain strong conversion data hygiene as spend scales.

The broader takeaway for any brand running Google Shopping: bid strategy is not a set-and-forget decision, nor is it a binary choice between control and automation. The most effective approach treats them as phases in a deliberate sequence — use manual control to build the signal, then let automation use it.

For The Client, that sequence proved worth running.