4 Ways A/B Testing Might be Lying to You
Optimizing user experience and increasing conversions are two of the main drivers behind the changes made to every website. If you run a website, you want to make sure that everything is just right, so that visitors will reward you with their time and patronage.
This sounds like a tall task, but it’s well within your grasp thanks to A/B testing, also known as split testing. Split testing takes the guesswork out of the equation by using real-life data to pinpoint the changes that will have the most powerful impact on website performance.
Previously, we’ve covered some of the best tools to implement A/B tests for WordPress websites, but we didn’t get into what makes for a good A/B test. We’re going to fix that now by introducing you to the fundamental mistakes you need to avoid, and how to do things the right way.
Lie #1: Size Doesn’t Matter
You may have heard this one before, albeit in a different context, but let’s set the record straight: sample size does matter. By sample size, we’re referring to the number of observations of the primary metric you’re using to measure your A/B tests, whether that’s pageviews, user sessions, clicks, or something more complex.
To put it simply, results cannot be trusted unless your sample size is large enough. Imagine that you quizzed three people about the placement of a Call to Action (CTA) and two of them shared a preference for one location. Obviously, taking these results as a representation of a wider trend would be ludicrous. However, if you aren’t careful, it’s very easy to do essentially the same thing and fall into the trap of ignoring statistical significance.
To determine whether a result is statistically significant, you need to consider three factors: overall sample size, total number of conversions, and the conversion rate. It also goes without saying that impressions should be evenly distributed between your A and B variants.
Once you have the raw numbers, you can perform some calculations to determine whether the results are statistically significant. This might sound hard, but an online tool such as Get Data Driven’s A/B Significance Test will crunch the numbers for you. Just plug them in and read your results. The higher the level of certainty, the more confidence you can place in your results.
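If you’re curious what a calculator like that does under the hood, here’s a minimal sketch in Python of a two-proportion z-test, a standard way to compare two conversion rates. We can’t confirm this is the exact method Get Data Driven uses, and every number below is invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

def ab_certainty(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: how certain can we be that A and B truly differ?"""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variants perform identically
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_error
    # Convert to a two-tailed certainty level (1 minus the p-value)
    return 2 * NormalDist().cdf(abs(z)) - 1

# Hypothetical test: 5,000 impressions per variant, evenly distributed
print(f"Certainty: {ab_certainty(5000, 150, 5000, 180):.1%}")  # roughly 91%
```

Notice that 91% still falls short of the 95% threshold most statisticians use, which is exactly why eyeballing raw conversion counts isn’t good enough.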
Lie #2: You Don’t Need a Hypothesis
A lot of people love to tout the benefits of split testing, but few will tell you that those tests are worthless unless you sit down and formulate a hypothesis first. The foundation of any good A/B test is whether it proves or disproves a statement – it’s the scientific method in action.
Let’s use an example to illustrate the process. Imagine you design a landing page that isn’t converting as well as you expected, so you decide to experiment with different CTA placements. You hypothesize that placing your main CTA after a particularly compelling section will boost your conversions, and you decide to test it. Version A would be your page as it was, with the CTA in its original position. Version B would employ the new placement. You let the test run for a while, accumulate a sizable (and statistically significant) sample, and then call it a day. The results show that your hypothesis was wrong: your conversion rate decreased with the new placement.
Would you consider this test a failure? From a monetary perspective, yes. But you did learn something new, and now you’re ready to test different placements. If those don’t work, you can begin to broaden your hypotheses – perhaps your low conversions aren’t due to poor placement, but to bad copy. So you formulate a new statement and put it to the test.
Ideally, a single split test would tell us everything we need to know, but that’s not the way things work. A good hypothesis will get you closer to the truth one step at a time, by enabling you to figure out what works and what doesn’t.
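As an aside, if you were wiring up the example above by hand rather than through a plugin, the traffic-splitting step might look something like the sketch below. Hashing a visitor ID (a hypothetical identifier here) is one common way to bucket visitors evenly while showing each person the same version on every visit:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Split visitors roughly 50/50, deterministically."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket across sessions
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```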
Lie #3: Those Results Weren’t Good Enough
This fallacy builds upon some of the ideas we discussed in the last section. Unreasonable expectations will sometimes lead you to discard useful data that fell short of your goals.
Consider the example we just used, but imagine that version B increased your conversions by a small margin – let’s say 2%. That may sound like a low figure, but websites with lots of traffic would kill for that kind of improvement.
Disregarding those results because they don’t meet some arbitrary goal could be a costly mistake. Plenty of people make it when they test haphazardly, but we know better. As long as your sample size was large enough and your hypothesis sound, you can take those results to the bank.
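To put a number on it, here’s some quick back-of-the-envelope math. The traffic and conversion figures are invented, and we’re reading that 2% as a relative lift:

```python
monthly_visitors = 100_000            # hypothetical traffic
baseline_rate = 0.03                  # version A converts at 3%
lifted_rate = baseline_rate * 1.02    # version B: a 2% relative improvement

extra = monthly_visitors * (lifted_rate - baseline_rate)
print(f"Extra conversions per month: {extra:.0f}")  # 60 more sales or signups
```

Sixty extra conversions a month for moving a button around is nothing to sneeze at.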
Before you can run, you need to learn how to walk, so don’t be discouraged!
Lie #4: You’re Ready to Start A/B Testing Today
If you’re anything like us, then you’re already pumped at the prospect of putting what you’ve learned so far into action. But before you run off to craft a hypothesis, there is one last factor you need to consider: is your site ready for A/B testing?
That depends on whether you have enough traffic and conversions to obtain reliable results. We’ve already discussed sample sizes ad nauseam, so you understand that a successful test requires enough visitors. Setting an arbitrary number here would do more harm than good, since no two sites are equal. As a rule of thumb, if it would take you more than a month to gather a statistically significant sample, then you probably aren’t ready to start testing.
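If you want to gauge this for your own site, the standard sample-size formula for comparing two proportions gives a rough answer. This sketch assumes a 3% baseline conversion rate and a hunt for a 10% relative lift; swap in your own figures:

```python
from statistics import NormalDist

def visitors_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a given relative lift."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

n = visitors_per_variant(0.03, 0.10)
print(f"Visitors needed per variant: {n:,}")  # tens of thousands
```

If your site only sees a few hundred visitors a day, a test that size would drag on for months – exactly what the one-month rule of thumb warns against.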
Right now you may be asking why. After all, numbers are numbers, right? The problem is that the longer a test runs, the greater the odds of outside factors influencing it. Plus, without enough visitors it’s simply not feasible to refine your hypotheses over time, unless you plan to invest decades in A/B testing your sites.
If you find yourself in this scenario, then perhaps your time would be better invested in enhancing your SEO, decreasing your bounce rate, and creating new content. These are things that should increase your traffic over time and enable you to execute successful A/B tests in the future.
Conclusion
A successful A/B test will provide you with the information you need, as long as you ask the right questions and interpret the answers correctly. The best way to make sure that your A/B tests are pointing you in the right direction is to avoid the following mistakes:
- Not taking sample size into consideration when evaluating test results.
- Running tests randomly instead of formulating hypotheses.
- Having unreasonable expectations about your results.
- Testing before you have enough traffic and conversions to back your efforts.
Do you have any split testing success stories that you can share with us? Tell us about them in the comments section below!
Image credit: StockSnap.io