A/B testing (marketing)
ay-bee TEST-ing
Comparing two versions of a marketing asset to see which performs better. The scientific method applied to marketing decisions.
A/B testing is running two versions of something (an email subject line, a landing page headline, an ad creative) and measuring which one performs better. Version A goes to half the audience. Version B goes to the other half. The version with better results wins.
A/B testing removes opinion from marketing decisions. Instead of arguing about whether headline A or headline B is better, you test both and let the data decide. The most impactful things to test: email subject lines, landing page headlines, CTA copy, ad creative, and pricing page layout.
The key requirement is statistical significance. If only 100 people see version A and 102 see version B, any difference in results is noise, not signal. You need enough volume to draw reliable conclusions. For most B2B tests, that means hundreds to thousands of exposures per variant.
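The noise-versus-signal question can be checked with a standard two-proportion z-test. Here is a minimal, stdlib-only Python sketch; the function name is illustrative, and the raw conversion counts are an assumption derived from the conversion rates quoted in the examples below:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates.

    conv_* = number of conversions, n_* = number of exposures.
    Returns (z, p_value); p_value < 0.05 suggests a real difference.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 3.2% vs 4.7% at 5,000 visitors per variant: clearly significant.
z_big, p_big = two_proportion_z_test(160, 5000, 235, 5000)    # p well below 0.05

# 3 vs 5 conversions on 50 clicks each: noise, not signal.
z_small, p_small = two_proportion_z_test(3, 50, 5, 50)        # p far above 0.05
```

The same percentage gap that is decisive at 5,000 visitors per variant is indistinguishable from chance at 50, which is why volume, not the size of the observed difference, decides when a test is done.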
Examples
A/B testing a landing page headline.
Version A: 'Deploy Faster with Zero Downtime.' Version B: 'Your Deploys Are Too Slow. Fix Them.' 5,000 visitors per variant. Version A converts at 3.2%; Version B converts at 4.7%. Version B, the provocative headline, wins with a 47% relative lift.
A/B testing email subject lines.
Version A: 'New feature: AI-powered code review.' Version B: 'Your pull requests just got 50% faster.' Version B open rate: 38% vs Version A: 24%. The benefit-focused subject line outperforms the feature-focused one.
A/B testing without enough volume.
The team A/B tests two ad versions with 50 clicks each. Version A: 3 conversions. Version B: 5 conversions. They declare Version B the winner. But with such small sample sizes, the difference is random noise. They need at least 500 clicks per version for a reliable result.
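Working backward from the effect you want to detect, a standard two-proportion sample-size approximation shows why 50 clicks per variant cannot settle a 6% vs 10% conversion-rate difference. A sketch, assuming the usual defaults of 95% confidence and 80% power:

```python
from math import ceil

def required_sample_size(p_base, p_target, z_alpha=1.96, z_power=0.84):
    """Per-variant sample size to detect a move from p_base to p_target.

    Standard two-proportion approximation; defaults correspond to
    95% confidence (z = 1.96) and 80% power (z = 0.84).
    """
    delta = abs(p_target - p_base)
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_power) ** 2 * variance / delta ** 2)

# Detecting 6% vs 10% (the 3/50 vs 5/50 scenario above):
n = required_sample_size(0.06, 0.10)  # 718 clicks per variant
```

That lands in the high hundreds of clicks per variant, consistent with the "at least 500" rule of thumb above.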
Frequently asked questions
What should you A/B test first?
Start with the highest-impact, highest-traffic assets: homepage headline, demo request landing page, email subject lines, and primary ad creative. These have enough volume to reach statistical significance quickly and the highest potential to improve pipeline.
How long should an A/B test run?
Until you reach statistical significance (typically 95% confidence). For high-traffic pages, that might be a few days. For lower-traffic pages, it might be weeks. Do not stop a test early because one version 'looks' better. Run it until the math confirms the winner.
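The duration question above reduces to arithmetic: required exposures divided by daily traffic. A hypothetical sketch, with visitor numbers invented for illustration:

```python
from math import ceil

def estimated_test_days(needed_per_variant, daily_visitors, variants=2):
    """Rough test duration: total required exposures over daily traffic.

    Assumes traffic is split evenly across variants.
    """
    return ceil(needed_per_variant * variants / daily_visitors)

# e.g. 5,000 exposures per variant on a page with 2,000 visitors/day:
days = estimated_test_days(5000, 2000)  # 5 days
```

A high-traffic page clears its required sample in days; a low-traffic page with the same required sample can take weeks, which is why patience matters more on smaller properties.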
Related terms
Conversion rate: The percentage of people who take a desired action. Visitors who sign up. Leads who become customers. The measure of how well each stage of the funnel works.
Click-through rate (CTR): The percentage of people who click your ad or link after seeing it. Clicks divided by impressions. Measures how compelling your message is.
Landing page: A standalone web page designed for a single conversion goal. No navigation distractions. One page, one purpose, one CTA.
Demand generation: The marketing function that creates awareness and interest in your product. Fills the top and middle of the funnel with qualified prospects.

Want the complete playbook?
Picks and Shovels is the definitive guide to developer marketing. Amazon #1 bestseller with practical strategies from 30 years of marketing to developers.