The Essential Best Practice Guide for A/B Testing Tool Success


[Screenshot: VWO A/B testing tool]

Newly updated in June 2016. Using an A/B testing tool to improve your website conversion rates and sales? If so, that’s a great start… but you may be gaining lower conversion lifts than you could be, without realizing it. Why?

There is a lack of A/B testing tool best practices for you to learn from. This is because A/B testing tool companies focus their user guides mainly on tool setup, not on best practices for getting results. As a result, many online marketers gain poor conversion rate lifts from their tests without realizing they could do much better.

Therefore, to help correct this lack of best practices and improve the results you gain from your A/B testing tool, I have created a high impact guide (it’s long!) that outlines the key steps you need to focus on in your testing tool. It’s perfect for beginners, but just as useful for people who have been testing for a while.

Pretesting: Planning your A/B tests

Did you know that most of what determines your chances of great A/B testing results actually happens BEFORE you even log in to your A/B testing tool? Yes, that’s right. I would go as far as to say that over 75% of testing success comes from these 3 factors:

1: Create high impact test ideas. Don’t just pick random things to A/B test, or only test what you or your boss thinks is best. There are 6 main sources of higher impact A/B test ideas that you should make use of:

  • Web analytics tool insights to find highest potential pages to A/B test. For a great place to start, check out these best Google Analytics reports for gaining insights.
  • Website usability tool feedback using tools like Usertesting.com. This is essential for gaining visitor feedback and finding the most problematic pages on your website.
  • Visitor survey feedback using tools like Hotjar.com (page-specific single-question surveys) or SurveyMonkey.com (more general but longer, in-depth surveys).
  • Expert A/B test ideas. To get the highest quality A/B test ideas you should also get recommendations and ideas from CRO experts like myself and others.
  • Competitor website reviews. Regularly check what your competitor websites are improving or launching. But don’t just copy them – look to improve what they are doing.
  • Previous A/B test results. Don’t just throw away your old tests – you should learn from them to help shape what to test next, particularly your failed tests.

You also need to create a test hypothesis for each of your test ideas – this states why you want to test that particular element or page, and why you think the test will have an impact.

2: Check you have enough traffic to the page you plan to test. If you don’t have adequate traffic, you risk wasting time testing – you won’t even get a significant result, let alone a good one! To help you understand whether you have enough traffic, you can use calculators like this to see how many days are needed to run a test (a rough sketch of the math behind them follows below) – and needing any more than 30 days to get a result is too long. And if you don’t have enough traffic, check out these great alternative ways to test on our website.

[Screenshot: A/B test duration calculator]
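If you want to sanity-check a duration calculator’s output, here is a minimal Python sketch of the standard two-proportion sample size formula that most of these calculators are based on. The baseline rate, expected lift, and daily traffic are made-up inputs for illustration.

```python
import math
from scipy.stats import norm

def days_to_run(baseline_rate, expected_lift, daily_visitors,
                alpha=0.05, power=0.8):
    """Estimate A/B test duration using the standard two-proportion
    sample size approximation."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)  # rate if the variation wins
    z_alpha = norm.ppf(1 - alpha / 2)         # two-sided significance
    z_beta = norm.ppf(power)                  # statistical power
    # Visitors needed PER variation to reliably detect the lift
    n = ((z_alpha + z_beta) ** 2
         * (p1 * (1 - p1) + p2 * (1 - p2))
         / (p2 - p1) ** 2)
    # Two variations split the page's daily traffic between them
    return math.ceil(2 * n / daily_visitors)

# Example: 3% baseline conversion rate, hoping for a 20% relative lift,
# with 1,000 visitors per day reaching the test page
print(days_to_run(0.03, 0.20, 1000))  # 28 days - just under the 30-day limit
```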

3: Prioritize your tests and try easier, high impact tests. It’s essential you prioritize your A/B test ideas to ensure you get the best results. This should be based on how easy they are to implement on your website (in terms of development, design etc), the level of traffic available, and the likely impact on conversion rates. Don’t just pick the tests with the highest potential impact, because they may be very hard to test (like your checkout pages) – also pick ideas that are easy and quick to implement yet still likely to have a high impact, like test ideas for your headlines or images (known as low hanging fruit).

Doing this test prioritization will help you get good test results quickly – vital for gaining further buy-in for running tests in the future (and if you pick something hard or risky to implement, it may not give great results and risk derailing your future test efforts).
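As a rough illustration of this kind of prioritization, here is a minimal sketch that scores ideas on the three factors above. The example ideas, 1–10 scales, and equal weighting are my own assumptions, not a standard formula – adjust them to suit your own situation.

```python
# Score each idea on ease of implementation, traffic level, and
# likely impact (all on illustrative 1-10 scales)
test_ideas = [
    # (idea, ease, traffic, impact)
    ("Checkout page redesign", 2, 9, 9),
    ("Homepage headline copy", 9, 8, 6),
    ("Product image size", 8, 7, 5),
]

for idea, ease, traffic, impact in sorted(
        test_ideas, key=lambda t: -(t[1] + t[2] + t[3])):
    print(f"{idea}: {(ease + traffic + impact) / 3:.1f}")

# The headline test scores highest: classic low hanging fruit -
# not the biggest possible win, but easy, well-trafficked, and
# still likely to have a high impact.
```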

Creating A/B tests in testing tools

Now you know how to get better results before even using your tool, let’s discuss the most important things you need to remember while creating your tests.

Please note this presumes you already have an A/B testing tool, and I recommend using either Visual Website Optimizer or Optimizely. To help you understand the differences of A/B testing tools like these, check out this A/B testing tool comparison guide I created.

4: Choose your type of website test. When you are creating a test, one of the first things you will be asked is what type of test you want to run. Here are the three main types:

  • A/B tests are the most common and usually the best option, and are great for testing variations of a single element like a button. However, if you test many things at once on a page, you will find it hard to understand which elements are contributing most to conversion rate success.
  • Multivariate tests (MVT) allow you to test many elements on a page at once, but require considerably more traffic than an A/B test to run (the tool has to test many more combinations of variations – see the sketch after this list). It’s quite rare to have enough traffic to use this type of test, and they are really better suited to advanced users.
  • Split page tests are simple page redirect tests and are very similar to A/B tests. They are only really needed when the only way you can test a page is by creating and showing a different page (which may occur if your platform makes changes particularly hard to implement).

[Image: types of website tests]
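To see why an MVT needs so much more traffic, here is a quick illustration of how combinations multiply. The elements and variation counts are made up for the example.

```python
from itertools import product

# Hypothetical elements tested together on one page
headlines = ["Headline 1", "Headline 2", "Headline 3"]
images = ["Image 1", "Image 2"]
buttons = ["Button 1", "Button 2"]

combos = list(product(headlines, images, buttons))
print(len(combos))  # 3 x 2 x 2 = 12 combinations

# An A/B test of the headline alone splits traffic 3 ways; this MVT
# must reach significance on all 12 combinations, so each combination
# only receives about 8% of visitors - hence the traffic requirement.
```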

5: Understand targeting in A/B tests to really improve conversion rates. You will also see an option to create a personalization test or add test targeting in many A/B testing tools. This isn’t just about showing different content to visitors in different countries or cities – it’s all about using the tool to show more relevant content to different groups of your visitors, like new visitors or repeat visitors (e.g. new user guides, repeat visitor discounts and affinity content). It’s also one of the best ways to push your conversion rates and sales much higher using your testing tool.

You can set up simple visitor groups to target in Visual Website Optimizer and Optimizely fairly easily, using visitor attributes like being a new visitor, or whether they have seen a particular page or not. Go ahead and think of some high impact visitor groups on your website and try targeting specific test content for them (the sketch below shows the underlying idea).

[Screenshot: targeting tests in VWO]
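Conceptually, targeting boils down to routing visitors into segments and serving each segment more relevant content. Here is a hypothetical sketch of that logic – the attribute names, segments, and content are my own illustrations, not any specific tool’s API.

```python
# Hypothetical targeting logic - attributes and segment names are
# illustrative, not a real tool's API
def pick_segment(visitor):
    if visitor.get("visits", 0) <= 1:
        return "new_visitor"           # e.g. show a new user guide
    if visitor.get("seen_pricing_page"):
        return "considering_purchase"  # e.g. show a discount offer
    return "repeat_visitor"            # e.g. show affinity content

content_for_segment = {
    "new_visitor": "getting-started-guide",
    "considering_purchase": "repeat-visitor-discount",
    "repeat_visitor": "recommended-for-you",
}

visitor = {"visits": 3, "seen_pricing_page": True}
print(content_for_segment[pick_segment(visitor)])  # repeat-visitor-discount
```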

6: Create great variations for the page or elements you want to test. If you are using Visual Website Optimizer or Optimizely, you can quite easily create your test variations with their visual editor to change, add or move elements (or with the code editor for more complex changes). If you have resources available to you, always seek help from web designers and the marketing team to create better, more engaging visual designs for your test variations.

[Screenshot: VWO visual editor]

Some best practices here: make sure the differences in your test variations are easily noticeable, or visitors won’t notice them and won’t convert any better than on your current version. Also, don’t create too many variations (no more than 4), because the more you create, the longer it takes the tool to gain test results (particularly if you are running an MVT).

7: Understand and set up appropriate goals to measure for each test. Next, it’s key that you set up relevant goals to measure success for each test. This depends on what you are testing, and will often involve more than one goal. If you are testing your checkout pages, the biggest goal will be order completion. If you are testing your homepage, you will want to set up goals like click-through rate, and depending on what your main site goal is, you will want to add that too (for example, generating sign-ups).

The other most important thing here is to always try to set up revenue as a goal for each test. This will help you prove test impact on online revenue, not just on conversion rates (your boss and senior executives will care more about revenue!), and it is also essential for showing ROI of the tool – vital if you are to gain more budget for future testing. This can be set up fairly easily as a goal in Visual Website Optimizer and Optimizely (by adding revenue tracking code), and even in Google Content Experiments (one of its redeeming qualities). Are you doing this for every test?
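To show why a revenue goal matters so much, here is a minimal sketch comparing revenue per visitor across variations, with made-up numbers. A variation can win on conversion rate yet lose on revenue, for example by attracting smaller orders.

```python
# Made-up results illustrating why revenue per visitor matters,
# not just conversion rate
results = {
    #            (visitors, orders, revenue)
    "control":   (10_000, 300, 24_000),
    "variation": (10_000, 330, 23_100),
}

for name, (visitors, orders, revenue) in results.items():
    print(f"{name}: conversion {orders / visitors:.1%}, "
          f"revenue per visitor ${revenue / visitors:.2f}")

# control:   conversion 3.0%, revenue per visitor $2.40
# variation: conversion 3.3%, revenue per visitor $2.31
# The variation lifts conversion by 10% but LOWERS revenue - without
# a revenue goal you might have launched a losing change.
```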

8: Add notes in the tool to explain your A/B hypothesis and insight. Many people overlook this part of creating tests – using the ‘notes’ feature of your tool for each test helps you document and understand your reasons for each test idea (the test hypothesis and the insights that helped form it), and helps when analyzing test results in the future. In this notes section you should also add observations about the test result, and possible ideas for future tests based on the result (as we will discuss in step #14, the final step).

9: Preview the A/B test you want to launch and perform QA. Next you need to preview the test on your website to make sure everything looks fine for each of the test variations, and nothing is broken. This is known as quality assurance (QA). You should ideally perform this QA on a development version of your website, not your live site – you don’t want your site potentially breaking while people are using it. Don’t forget to check your test in multiple browsers too, as this can affect how your variations look – most testing tools have built-in features to help you do this.

And if this is your first test on your website, the tool will remind you at this point to add the A/B testing code snippets on your site and will check if they are present.

10: Launch the A/B test. Fairly self-explanatory – the last step of creating a test in an A/B testing tool is hitting the ‘launch test’ button. As soon as you have done this, I suggest you double check that the test variations are working as expected, and that results are beginning to show up in the tool.

Analyzing A/B test results in testing tools

Now that your test is up and running, it’s important to know how to analyze the results. If you don’t do this correctly, you risk launching a ‘winning’ version that actually isn’t the best-performing one.

11: Understand what confidence means, respect it – but don’t let it cause paralysis. Testing tools constantly analyze incoming results and declare winners (and losers) using statistical significance models. This significance result (called ‘chance to beat’ in VWO) is shown next to each result in your reports, usually as a percentage. In layman’s terms, the higher the confidence percentage, the more likely it is that if you ran the test again, the same result would occur.

I won’t bore you with how it’s calculated, but it uses models you may have learned in a statistics class (a rough sketch of the idea follows below). Ideally you need at least 85% confidence to be sure the tool has found a statistically significant winning version, but don’t let that paralyze you from choosing a winner if you see a variation winning with an amazing lift but only 81% confidence.

[Screenshot: VWO reports]
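If you want some intuition for what a ‘chance to beat’ figure means, here is a minimal Bayesian sketch that estimates the probability that one variation beats another by sampling from Beta distributions. This illustrates the general idea only – it is not the exact model any particular tool uses – and the counts are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up results: (conversions, visitors) for each variation
a_conv, a_n = 300, 10_000   # control
b_conv, b_n = 345, 10_000   # variation

# Model each conversion rate as a Beta posterior (uniform prior),
# then estimate P(variation beats control) by Monte Carlo sampling
samples = 100_000
a = rng.beta(a_conv + 1, a_n - a_conv + 1, samples)
b = rng.beta(b_conv + 1, b_n - b_conv + 1, samples)

print(f"Chance to beat control: {(b > a).mean():.1%}")  # roughly 96%
```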

12: Gain at least a week’s worth of results before declaring a winner. This is a very common mistake among testing tool users, with negative consequences. Even if the tool shows high confidence in a winner after a few days, you need at least 7 days’ worth of results to account for day-of-week traffic differences and to let fluctuations in the winning variation level out in the results graph (often one variation starts off winning, then eventually dips while another version takes the lead, as you can see in the graph below).

Never declare a winner sooner, or you will risk launching a version that isn’t actually the best improvement on conversion rates – or may even launch one that negatively impacts conversion rates in the following weeks and lowers sales or revenue.

13: Know how to interpret conversion rate results. This is another key thing to understand when analyzing test results. You may be expecting 50% or 100% increases in conversion rates, but this is not likely. However, even much lower increases can be considered successful, particularly when you translate that lift into additional revenue. Even a seemingly small 2% uplift can have a big impact on revenue! Ideally you should always track revenue uplift too, as mentioned earlier.

The average conversion rate increase for an A/B test is about 8%, with more experienced A/B testers and CRO experts often able to push this much higher for some results. Anything over a 2% increase is considered reasonable, above 5% is good, and over 20% is very good. Getting over 50% is excellent, but it’s quite hard to get increases like that.

And remember, it’s not only about the percentage conversion rate lift – you need to check the actual numeric increase in conversion rate. For example, a 50% relative increase actually means little if it’s the difference between a 0.2% and a 0.3% conversion rate.
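Here is a quick worked example of relative vs. absolute lift, with made-up traffic and order-value numbers:

```python
# Relative vs absolute lift - made-up numbers for illustration
def lift_summary(old_rate, new_rate, monthly_visitors, avg_order=50):
    relative = (new_rate - old_rate) / old_rate
    extra_orders = (new_rate - old_rate) * monthly_visitors
    print(f"{old_rate:.1%} -> {new_rate:.1%}: {relative:.0%} relative lift, "
          f"{extra_orders:.0f} extra orders/month "
          f"(~${extra_orders * avg_order:,.0f})")

lift_summary(0.002, 0.003, 100_000)  # 50% relative lift, 100 extra orders
lift_summary(0.030, 0.033, 100_000)  # only 10% lift, but 300 extra orders
```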

After the A/B test has ended

And lastly, don’t forget this final step after your test has ended, as it feeds back into step 1.

14: Learn from your test results to create better follow-up test ideas. Don’t just test something and then move on to the next test idea – to gain higher conversion rates you need to run follow-up tests. Quite often tests won’t generate the results you expect, or won’t produce a winning result at all (a study from VWO even revealed that only 1 out of 7 tests produce a winner).

Therefore you should find possible reasons why the test didn’t work as planned, and then create a follow-up test using different test elements or variations. This is known as test iteration, and it’s the secret sauce of good testing agencies and companies with effective testing teams. For example, don’t just test the number of fields in your email opt-in box – also test its location, its header text, and its button. I created a detailed article about how to learn more from A/B tests that don’t win, so read that too.

Wrapping up

So there we have it. The essential guide for A/B testing tool success. If you have found this useful, please share it with others, particularly anyone who helps run tests in your organization (web analysts, designers, developers, project managers, online marketers etc) so you can all get on the same wavelength and start generating better results in your testing tool!

Now over to you – which of these testing tool best practices have you got greatest results from? Or maybe you have a few of your own you want to share… please comment below.
