Chapter 2.1: What exactly is A/B testing, anyway?
Before we jump headlong into the wild, wonderful world of A/B testing, let’s go over some of the basics. (Even if you’re already a savvy A/B testing veteran, a little review never hurts.)
Wikipedia defines it like this: “A/B testing [is] a term commonly used in web development, online marketing, and other forms of advertising … two versions (A and B) are compared, which are identical except for one variation that might impact a user’s behavior.”
Adobe, the maker of one of the most expensive pieces of A/B testing software, says: “[A/B testing is] a numbers game. The more ideas you test, the more precisely you know what works on your site and what doesn’t.”
At Tangible UX, we think of A/B testing as much more than just a tactic or tool. Rather, using A/B testing means nothing less than embracing a philosophy and culture that celebrates the wins and learns from the losses.
Start by taking control
We’ll begin with your current application or website and refer to it as the control version. It gets the name from the “control group” in scientific experimentation. You’ll also hear it referred to as the baseline. In any case, it’s the version we test every other version against to see if we can do better.
Then, turn your office into a “test kitchen”
In the pre-Web days of direct marketing, these additional test versions were called “cells.” These days we refer to them as “recipes” (a lot less clinical and a lot more appetizing). You can have as many test recipes as you like, but you’ll need to consider how much web traffic your site receives. More about that in a bit.
Let’s say you’re going to test two recipes against your control. You’d refer to them as:
- Recipe A: Your control, your baseline, what you have now
- Recipe B: A version that you want to test against recipe A
- Recipe C: Another version, different from recipe B, that you want to test against recipe A, and so on.
Shown here is a concept for a three-recipe test. Which fuzzy friend will drive the most conversion? (Note that we’re only changing one variable in each recipe.)
Play “traffic cop”
Once you’ve decided which recipes to test, you start running web traffic through them. Visitors to your site see recipe A, recipe B, or recipe C. The one that converts the most visitors into buyers is your winner.
Think of your traffic (your website visitors) as moving through a big pipe (a mixed metaphor, yes, but bear with me). You take a small percentage of your traffic and run it through each of your test recipes. It’s like funneling your traffic through smaller pipes and then watching what happens.
Of all the traffic that flows through your site, you can divert a small amount of it into your test recipe(s).
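The "pipes" idea can be sketched in a few lines of code. Here's a minimal Python illustration of weighted random assignment; the recipe names and split percentages are hypothetical, chosen only to show how a small slice of traffic gets diverted to each test recipe:

```python
import random

# Hypothetical split: 90% of visitors see the control, and 5% each
# are diverted into test recipes B and C. (Illustrative numbers only.)
SPLIT = {"A (control)": 0.90, "B": 0.05, "C": 0.05}

def assign_recipe(split=SPLIT):
    """Randomly assign one visitor to a recipe, weighted by the split."""
    recipes = list(split.keys())
    weights = list(split.values())
    return random.choices(recipes, weights=weights, k=1)[0]

# Simulate 10,000 visitors flowing through the big pipe.
counts = {name: 0 for name in SPLIT}
for _ in range(10_000):
    counts[assign_recipe()] += 1

print(counts)
```

Real A/B testing tools handle this assignment for you (and typically pin each visitor to one recipe across visits), but the principle is the same: most traffic flows on as usual, and a small, random share sees each test recipe.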
Declare a winner
Recipe A (the control) is always indexed at 100, or “100% conversion.” So, let’s say recipe B came in at 85% conversion, or an 85 index: that’s a 15% drop in conversion, and it doesn’t beat your control. Now, if recipe C comes in at a 117 index, that’s 117% conversion, an increase of 17% over your control … which makes it your winner.
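The index math above is just each recipe's conversion rate expressed as a percentage of the control's rate. A quick sketch, using hypothetical conversion rates (the 4% control rate is an assumption for illustration, not from the text):

```python
def conversion_index(recipe_rate, control_rate):
    """Express a recipe's conversion rate as an index against the
    control, where the control is always 100."""
    return round(recipe_rate / control_rate * 100)

# Hypothetical rates: the control converts 4% of visitors.
control = 0.04
recipe_b = 0.034   # underperforms the control
recipe_c = 0.0468  # outperforms the control

print(conversion_index(recipe_b, control))  # 85 -> 15% below control
print(conversion_index(recipe_c, control))  # 117 -> 17% above control
```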
Live web traffic gives you a definitive answer: the small but mighty hamster is the clear winner and will gain you more conversion. The proof is in the small but mighty choices (and clicks) your actual customers made.
More to come
This is A/B testing at its most basic level, and it’s a great place to start. Once you’ve gotten your feet wet, you’ll likely want to try more complex testing methods, like multivariate and “lean” testing. More about those methods in future installments!
Ideas by James Young. Structure by Chuck Vadun.