A/B testing (split testing) in Google Ads is the practice of running two versions of an element — an ad, a landing page, a bid strategy — simultaneously to measure which performs better. It is the systematic alternative to making changes based on intuition and hoping for improvement. For accounting firms investing regularly in Google Ads, testing produces compounding performance gains over time that cannot be achieved through set-and-forget management.
What to test and in what order
Not everything is worth testing, and not everything can be tested simultaneously. Testing priority should follow the order of impact:
- Landing page (highest impact). Your landing page conversion rate determines how many of your clicks become enquiries. A landing page improvement from three percent to six percent doubles your enquiries from the same budget. Test headline, CTA button text, form length, and page layout.
- Ad copy (high impact). Your ad copy determines CTR and Quality Score. Test headline variations — keyword-led vs benefit-led, specific claims vs general statements, different CTAs.
- Bidding strategy (medium impact). Once your campaign has sufficient conversion history (30+ per month), test Manual CPC against Target CPA to see which produces better cost per conversion.
- Audience targeting (medium impact, advanced). Test performance across different audience overlays or geographic sub-segments.
Google Ads' built-in testing tools
Responsive Search Ads (RSA) asset testing: when you write an RSA with multiple headline and description variants, Google automatically tests combinations. The asset report shows performance per headline (Best, Good, Low, Learning). This is passive testing — you provide variants and Google rotates them.
Experiments (A/B tests): Google Ads has a formal Experiments feature (under Campaigns > Experiments) for structured A/B tests. You can test:
- One campaign against a modified version (Campaign Experiment).
- Bidding strategy changes (Bid Strategy Experiment).
- Ad copy changes at the ad group level.
Campaign Experiments split traffic between the original campaign and the test variant. Both run simultaneously, sharing the budget according to the split you set. After a defined period, you compare conversion rates, CPAs, and CTRs. The winner is kept; the loser is paused.
Running an ad copy test
- In your ad group, create a second Responsive Search Ad with different headlines from your control ad. Make the variation deliberately different — if your control ad has a benefit-led headline ("Fixed Monthly Fees — No Surprises"), test a question headline ("Looking for a New Accountant in [City]?").
- Set the ad rotation to "Rotate indefinitely" (under campaign settings) so both ads get roughly equal traffic rather than Google preferring the expected winner before you have data.
- Run both ads for four to six weeks. You need at least 200 to 300 clicks per ad, and ideally 20+ conversions across both, before drawing conclusions.
- Compare CTR and conversion rate between the two ads (a worked comparison follows these steps). If one is materially better on both metrics, pause the underperformer. If one has a higher CTR but lower conversion rate, consider whether CTR or conversion matters more for your current goal.
- Create a new challenger ad against the current winner. Continuous testing means steady incremental improvement.
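If it helps to see that comparison concretely, here is a minimal sketch in Python. The impression, click, and conversion figures are illustrative placeholders, not benchmarks for what a real ad group should produce.

```python
# Illustrative figures only - substitute your own ad group's numbers.
ads = {
    "control":    {"impressions": 4200, "clicks": 180, "conversions": 9},
    "challenger": {"impressions": 4050, "clicks": 214, "conversions": 14},
}

for name, a in ads.items():
    ctr = a["clicks"] / a["impressions"]   # click-through rate
    cvr = a["conversions"] / a["clicks"]   # conversion rate
    print(f"{name}: CTR {ctr:.2%}, conversion rate {cvr:.2%}")

# Apply the data thresholds from the steps above before acting on the numbers.
total_conversions = sum(a["conversions"] for a in ads.values())
if total_conversions < 20:
    print("Under 20 conversions across both ads - keep the test running.")
else:
    print("Enough data to compare - check significance before pausing an ad.")
```

The significance check referred to in the final line is covered in the FAQ at the end of this guide.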
Running a landing page test
Landing page testing is more complex and requires either a dedicated testing tool or the URL-based method described below.
The simpler method for most accounting firms is to split traffic at the ad group level between two versions of a landing page:
- Create two versions of the landing page at different URLs (e.g. /accountant-manchester/ and /accountant-manchester-v2/).
- In Google Ads, create two ad groups targeting the same keywords, each linking to a different landing page.
- Split your budget equally between the two ad groups.
- After four to eight weeks, compare conversion rates between the two landing pages.
This is not a perfect controlled test (ad group-level differences can introduce variables), but it gives directional evidence about which landing page converts better without sophisticated testing software.
What sample sizes you need
Statistical significance requires enough data to be confident the result is not random. As a practical guide for most accounting firm accounts:
- Ad copy testing: minimum 200 clicks per variant, ideally 500+, before drawing conclusions.
- Landing page testing: minimum 100 conversions across both variants for reliable results.
With low-volume campaigns (under 200 clicks per month), meaningful A/B tests take longer to run. For very small accounts, focus on qualitative improvements (user feedback, reviewing session recordings) rather than statistical testing.
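For readers who want to sanity-check these thresholds against their own conversion rates, the sketch below applies the standard sample-size formula for comparing two proportions. The baseline and target rates are assumptions chosen for illustration; substitute your own.

```python
# Rough sample-size estimate for an A/B test on conversion rate, using the
# standard two-proportion formula at 95% confidence and 80% power.
from math import sqrt
from statistics import NormalDist

def clicks_per_variant(base_rate, target_rate, confidence=0.95, power=0.80):
    """Approximate clicks needed in EACH variant to detect the given lift."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (base_rate + target_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(base_rate * (1 - base_rate)
                                 + target_rate * (1 - target_rate))) ** 2
    return numerator / (base_rate - target_rate) ** 2

# Detecting a lift from a 3% to a 6% conversion rate: roughly 750 clicks per variant.
print(round(clicks_per_variant(0.03, 0.06)))
# Detecting a smaller lift (3% to 4%) needs several thousand clicks per variant.
print(round(clicks_per_variant(0.03, 0.04)))
```

The larger the lift you are trying to detect, the less traffic you need, which is why the guidance above treats 200 clicks per variant as a floor rather than a target.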
Testing cadence
- Review RSA asset performance monthly.
- Run formal ad copy A/B tests quarterly (one test per ad group at a time).
- Landing page tests can run longer — six to twelve weeks if traffic is moderate.
Do not change campaigns mid-test. Once a test is running, leave both versions unchanged until you have sufficient data. Making edits during a test invalidates the comparison.
Key takeaways
- Test landing pages first (highest impact), then ad copy, then bidding strategies.
- Use RSA asset reports for passive, ongoing headline and description testing within an ad.
- Use Google Ads Experiments for formal A/B tests between campaign variants or bidding strategies.
- Test one variable at a time with sufficient sample size before drawing conclusions (minimum 200 clicks per variant for ad copy).
- Continuous testing produces compounding improvements that set-and-forget management cannot achieve.
Frequently asked questions
How do we know if a test result is statistically significant?
Use a simple significance calculator (several are available free online — search "A/B test significance calculator"). Input your impressions, clicks, and conversions for each variant. A result is typically considered significant at 95% confidence. Below this threshold, the difference may be random rather than real.
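Those calculators typically run a two-proportion z-test. If you would rather do the arithmetic yourself, the sketch below shows the same calculation in Python; the conversion and click figures are illustrative only.

```python
# Two-proportion z-test on conversion rate - the calculation behind most
# free online A/B significance calculators. Figures below are illustrative.
from math import sqrt
from statistics import NormalDist

def ab_p_value(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided p-value for the difference in conversion rate between variants."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)   # rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    return p_a, p_b, 2 * (1 - NormalDist().cdf(abs(z)))

# Control: 12 conversions from 400 clicks. Challenger: 24 from 410 clicks.
rate_a, rate_b, p = ab_p_value(12, 400, 24, 410)
print(f"Control {rate_a:.1%}, challenger {rate_b:.1%}, p-value {p:.3f}")
# A p-value below 0.05 corresponds to the 95% confidence threshold above.
```

The same test can be applied to CTR by substituting impressions and clicks for clicks and conversions.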
Can we test more than one thing at a time?
You can run multiple tests simultaneously if they are in separate ad groups or campaigns. Testing two things within the same ad group simultaneously (e.g. a new headline and a new landing page) makes it impossible to determine which change caused the performance difference. Test one variable per test.
What is an ad rotation setting and why does it matter for testing?
Ad rotation determines how Google distributes impressions between multiple ads in an ad group. The default "Optimise" setting shows the predicted best-performing ad more often, which can starve a challenger ad of impressions before it has been properly tested. For testing, set rotation to "Rotate indefinitely" to give both variants roughly equal traffic.
Should we test ad copy on small ad groups with limited traffic?
Ad groups with fewer than 50 clicks per week will take months to accumulate enough data for reliable conclusions. For very small ad groups, focus on qualitative best practice (keyword match in headline, specific claim vs generic) rather than formal statistical testing. Apply learnings from higher-traffic ad groups to smaller ones.
How do we implement Google Ads Experiments?
In Google Ads, go to Campaigns, then Experiments in the left-hand menu. Click the + button to create a new experiment. Select the campaign you want to test, create a modified draft version, set the traffic split (typically 50/50), set the test duration, and launch. Results are visible in the Experiments section once the test has run for a sufficient period.