
Negative Keyword Shopping Campaign Experiment 

We Added Negative Keywords to the Client’s Shopping Campaign for 10 Days. Conversions More Than Doubled.

Most Google Shopping campaigns don’t have a bidding problem. They have a targeting problem. They’re showing up for the wrong searches, paying for clicks that were never going to convert, and wondering why the ROAS isn’t moving.

That was the hypothesis we walked into with the Client’s Shopping campaign. And a straightforward negative keyword experiment, run over 20 days across two clean phases, gave us a clear answer.

The Problem With “More Traffic”

When a Shopping campaign underperforms, the instinct is often to increase bids or broaden targeting to capture more volume. But more traffic from the wrong queries isn’t an opportunity, it’s a drain. Every irrelevant click costs money, crowds the data, and inflates CPA without adding a single conversion.

The smarter fix is subtraction. Identify what’s eating your budget without contributing to revenue, cut it, and watch the remaining traffic perform better. Negative keywords are one of the oldest tools in paid search, but they’re consistently underused in Shopping campaigns where search term visibility is lower, and query matching is less predictable.

Research backs this up: BrightBid’s 2025 analysis found that adding negatives after a search term audit improved conversion rates by around 20%. DataFeedWatch reported an 18% ROAS lift from regular negative keyword updates. Admetrics.io’s live campaign audits showed CPA dropping roughly 15% after negative keyword implementation. The pattern is consistent enough that the question isn’t whether negatives help, it’s how much and how quickly.

For the client, we decided to find out precisely.

How the Experiment Was Structured

The experiment ran in two equal phases, ten days each, with the same daily budget throughout.

Phase 1 — Baseline (July 20–29): The campaign ran without interference. No optimisations, no adjustments. The sole purpose was to collect clean search term data and establish a performance baseline to measure against.

Phase 2 — Test (July 30–August 8): We pulled the search term report from Phase 1, identified irrelevant and non-converting queries, and added them as negative keywords. Everything else stayed the same: same budget, same products, same targeting. Only the negative keyword list changed.

The isolation was intentional. If results improved, we needed to be confident it was the negatives driving it, not a budget change or a new product feed.
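The Phase 1 → Phase 2 step above is, at its core, a filter over the search term report: flag terms that spent money without ever converting. A minimal sketch of that audit step, assuming a CSV export with hypothetical column names ("search_term", "clicks", "cost", "conversions") rather than the client's actual report format:

```python
# Hypothetical sketch of the Phase 1 audit: flag negative-keyword
# candidates from a search term report export. Column names are
# illustrative assumptions, not the actual Google Ads export schema.
import csv

def negative_candidates(path, min_clicks=5):
    """Return (term, cost) pairs that spent money without converting."""
    candidates = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            clicks = int(row["clicks"])
            conversions = float(row["conversions"])
            # Enough clicks to judge, zero conversions -> candidate
            if clicks >= min_clicks and conversions == 0:
                candidates.append((row["search_term"], float(row["cost"])))
    # Prioritise by wasted spend, highest first
    return sorted(candidates, key=lambda t: t[1], reverse=True)
```

The `min_clicks` threshold guards against excluding a term on too little data; a term with two clicks and no sale hasn't proven anything yet.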

The Results

| Metric | Baseline (Jul 20–29) | Test (Jul 30–Aug 8) | Change |
| --- | --- | --- | --- |
| Clicks | 3,616 | 2,357 | −34.8% |
| Impressions | 1,31,957 | 1,33,715 | +1.3% |
| CTR | 2.74% | 1.76% | −35.8% |
| Avg. CPC | ₹1.55 | ₹3.00 | +93.5% |
| Total Cost | ₹5,614.38 | ₹7,067.02 | +25.8% |
| Conversion Rate | 0.19% | 0.72% | +278.9% ✅ |
| Conversions | 7 | 17 | +142.9% ✅ |
| Conversion Value | ₹15,755 | ₹21,919.89 | +39.1% ✅ |
| ROAS | 2.81× | 3.10× | +10.3% ✅ |
| Cost per Conversion | ₹802.05 | ₹415.05 | −48.2% ✅ |
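The derived rows in the table follow directly from the raw figures. A quick check of the arithmetic, with the clicks, cost, conversions, and conversion value copied straight from the table above:

```python
# Recomputing the derived metrics (conversion rate, ROAS, CPA) from the
# raw figures in the results table. All inputs are taken from the table.
def pct_change(before, after):
    return (after - before) / before * 100

baseline = {"clicks": 3616, "cost": 5614.38, "conversions": 7, "value": 15755.00}
test = {"clicks": 2357, "cost": 7067.02, "conversions": 17, "value": 21919.89}

for name, p in [("Baseline", baseline), ("Test", test)]:
    cvr = p["conversions"] / p["clicks"] * 100   # conversion rate, %
    roas = p["value"] / p["cost"]                # revenue per rupee spent
    cpa = p["cost"] / p["conversions"]           # cost per conversion
    print(f"{name}: CVR {cvr:.2f}%  ROAS {roas:.2f}x  CPA ₹{cpa:.2f}")

print(f"Conversions change: {pct_change(7, 17):+.1f}%")
```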

Reading the Numbers Honestly

The traffic drop will be the first thing a skeptic points to: 34.8% fewer clicks, and CTR down by more than a third. On the surface, that sounds like the campaign lost ground. It didn’t.

What actually happened is that the campaign stopped paying for clicks it had no business paying for. Impressions held almost flat (+1.3%), which tells you the campaign’s reach wasn’t meaningfully reduced; it was still entering auctions at roughly the same frequency. The difference is in which auctions. By excluding irrelevant search terms, the ads were now showing against queries with actual purchase intent, rather than a broad mix of relevant and irrelevant traffic.

The conversion rate jump tells the rest of the story. Going from 0.19% to 0.72%, a 279% increase, on less traffic is not a coincidence. That’s the direct signal of better-qualified clicks. The people reaching the product page in Phase 2 were more likely to buy because they arrived via more relevant searches.

Conversions more than doubled (7 to 17) on fewer clicks. And the cost to acquire each conversion dropped by nearly half, from ₹802 to ₹415. The campaign spent more in absolute terms (+25.8%), but it generated significantly more conversion value (+39.1%) in return. That’s a trade worth making.

The CPC increase (+93.5%) is the one number that warrants watching. Higher average CPC suggests the campaign is now competing more aggressively in auctions where purchase intent is higher, which is expected and desirable, but it also means cost management will be important as the budget scales. Efficiency improved dramatically; the goal now is to make sure that efficiency holds as volume grows.

Why This Works

The mechanism behind the results is straightforward, even if the impact is larger than many expect.

Shopping campaigns pull from broad search term matching. Google matches product listings to queries based on product feed data, and the match isn’t always precise. A campaign selling a specific product category will inevitably appear for adjacent, tangential, or entirely unrelated searches, especially early in a campaign’s life before the algorithm has enough conversion signal to self-optimise.

In Phase 1, the budget was distributed across all of those queries indiscriminately. Converting and non-converting traffic received equal spend. By Phase 2, the budget was concentrated on queries that had already demonstrated relevance. The algorithm had the same spend to work with, but was now operating in a cleaner signal environment. The result was better conversion productivity per rupee spent.

It’s also worth noting what didn’t change: the product feed, the bids, the targeting, the budget. 

The efficiency gains came entirely from removing bad traffic, not from adding anything new.

The Verdict

The hypothesis held. Excluding irrelevant search terms after a 10-day baseline period improved conversion rate by 279%, doubled conversion volume, lifted conversion value by 39%, and cut cost per conversion nearly in half, all within a 10-day test window.

For a campaign that was converting at under 0.2% with a CPA of ₹802, those aren’t incremental gains. They represent a meaningfully different campaign performance profile — one built on qualified traffic rather than raw volume.

What Comes Next for Client

The experiment confirmed the approach works. The next phase of work is about building on it without losing what was gained.

The click volume drop needs investigation — understanding whether the lost traffic was purely irrelevant, or whether some relevant queries were caught in the negative net, will inform how aggressively to expand the list going forward. Bidding strategy should also be reviewed in light of the CPC increase; the efficiency is strong enough to justify higher CPCs for now, but that relationship needs active monitoring as spend scales.

Long-term, the negative keyword list should be treated as a living document. Search term patterns shift, seasonal queries change, and new irrelevant terms will surface over time. Regular search term audits — ideally every two to four weeks — will keep the targeting clean and prevent the inefficiency creep that makes this kind of experiment necessary in the first place.
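Treating the list as a living document mostly means merging each audit's new candidates into the existing list without creating duplicates. A small illustrative sketch of that maintenance step (the function name and inputs are hypothetical, not from the client's actual tooling):

```python
# Hypothetical sketch of the recurring audit step: fold newly flagged
# terms into the existing negative keyword list, deduplicated
# case-insensitively, preserving the original list's order.
def merge_negatives(existing, new_candidates):
    """Return existing list plus any genuinely new terms, in order."""
    seen = {term.lower() for term in existing}
    merged = list(existing)
    for term in new_candidates:
        if term.lower() not in seen:
            merged.append(term)
            seen.add(term.lower())
    return merged
```

Run against each fresh audit's output every two to four weeks, this keeps the list growing monotonically instead of being rebuilt from scratch, so hard-won exclusions are never accidentally dropped.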

The broader lesson applies beyond Client: in Shopping campaigns, what you exclude often matters as much as what you target. Tighter traffic, when the traffic is right, consistently outperforms more traffic that isn’t.