Creating a clear, agreed-upon hypothesis is the most basic — and most important — part of any A/B test. So why is this crucial step so often skipped, or ignored altogether?
Prove it all night
It’s really this simple: if you don’t have a real hypothesis, you don’t have a real test. That’s why you need to state exactly what you’re trying to prove with the outcome of your test.
Without a hypothesis, you leave yourself open to doing more work AND to getting fewer results you can actually learn from. That’s because if your test isn’t focused on one thing, it can be spun out of control to encompass everything your stakeholders can think of. That means more test recipes, more late nights, and more headaches once you and your stakeholders realize you haven’t reached any conclusions you can actually use.
A strong hypothesis is good to find
We’ve come up with a “template” for creating solid hypotheses:
By ________ we can achieve ________.
A useful hypothesis includes the action you want to take (the first blank) followed by what the result will be (the second blank). Some examples, good and not-so-good:
A good one: By providing a contextually relevant post-sign-in experience for customers, we can increase both average sales price and conversion.
Another good one: By bringing the calls to action and the security badges higher on the page, we can boost user conversion.
A bad (and ugly) one: By making the design better, we can increase conversion by 27%.
What makes the bad one so ugly? The first part is too subjective — ask ten people what “better design” is and you’ll get at least ten answers. The second part includes a goal that’s actually too specific. If you raise conversion by only 26%, you should be popping Champagne corks, not lamenting a “fail.”
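One way to keep a hypothesis honest is to treat the template as a tiny data structure: an action, an expected effect, and the metrics that will decide the result. This is just an illustrative sketch — the `Hypothesis` class and its field names are hypothetical, not part of any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    action: str           # the change you will make (the first blank)
    expected_effect: str  # the measurable result (the second blank)
    metrics: list = field(default_factory=list)  # metrics that decide win/lose

    def statement(self) -> str:
        # Render the "By ___ we can achieve ___" template as a sentence.
        return f"By {self.action}, we can {self.expected_effect}."

h = Hypothesis(
    action="bringing the calls to action and security badges higher on the page",
    expected_effect="boost user conversion",
    metrics=["conversion_rate"],
)
print(h.statement())
```

If you can’t fill in both blanks with something concrete and measurable, you don’t have a testable hypothesis yet.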
Stick to the path
Think of your hypothesis as a navigation tool, like a compass keeping you on a straight road.
Well-intentioned people will try to lure you off the road by saying things like, “Can we add _______ to the test?” When that happens, ask yourself and your stakeholders, “Does that element you want to add to the page fit with the hypothesis?”
If the answer is no, be strong, and reply politely: “That doesn’t fit with our hypothesis. But that’s a great idea for another test.”
You may also hear, “We only have so much traffic; can’t we test a bunch of hypotheses all at once?” Keep in mind that putting everything into a test is a great way to learn nothing from it.
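The traffic math backs this up: your visitors get divided among every recipe, so each extra variant stretches the time needed to reach a significant result. Here’s a back-of-the-envelope sketch using the standard two-proportion sample-size formula (z-values hardcoded for 95% confidence and 80% power; all traffic and conversion numbers are made up for illustration):

```python
import math

def sample_size_per_variant(p_base, p_target):
    """Approximate visitors needed per variant for a two-proportion z-test.

    z-values assume alpha = 0.05 (two-sided) and 80% power; this is a
    rough planning estimate, not a full power analysis.
    """
    z_alpha, z_beta = 1.96, 0.8416
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_target - p_base) ** 2)

def days_to_run(n_variants, daily_traffic, p_base, p_target):
    # Fixed daily traffic is split across all variants, so every extra
    # recipe multiplies how long the test must run.
    n = sample_size_per_variant(p_base, p_target)
    return n_variants * n / daily_traffic

# Detecting a lift from 5% to 6% conversion on 2,000 visitors/day:
print(days_to_run(2, 2000, 0.05, 0.06))  # control + 1 recipe
print(days_to_run(5, 2000, 0.05, 0.06))  # control + 4 recipes: 2.5x longer
```

Piling hypotheses into one test doesn’t just muddy the learnings — it can push the runtime past the point where anyone is still waiting for the answer.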
Even worse, your team may try to test one hypothesis per recipe in a single A/B test. I won’t go into why this is SO wrong, but I will tell you: you’ll waste a ton of effort and end up with nothing of value. (Yes, I have mental scars from situations like this.)
The moral of the story
First, create a strong, testable hypothesis — preferably in conjunction with your stakeholders. It’s always better to get their input up front — that way they’re just as “invested” in the hypothesis as you are.
Second, get agreement. Make sure everyone’s on board with the hypothesis you’re testing before you do anything else. When you’re all looking at the same “compass,” you’re much more likely to get to your destination: test results that, win or lose, you can learn from — and take action on — to build your business.
Ideas by James Young. Structure by Chuck Vadun.