Have you conducted any A/B tests for your website? If so, what were the results? Were you able to improve conversions after implementing the changes? Conversion rate optimization (CRO) lets you achieve something quite wonderful: it helps you get more out of your existing traffic. A/B testing is the secret sauce that makes that happen.
1. Begin A/B testing without any assumptions at all
Don’t assume anything about your audience when you conduct an A/B test. Let’s say you’re testing a CTA (call to action) button. You may expect conversions to improve if the buttons are bigger. Or you may believe yellow converts best. But what if the results show otherwise? Start your tests with a hypothesis, but don’t presume to know what the results will be. More often than not, they aren’t what you expect.
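If you’re curious what the mechanics of a test like this look like in practice, here’s a minimal Python sketch of deterministic bucketing, so each visitor always sees the same variant. The function name and variant labels are placeholders, not from any particular testing tool:

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Hash the visitor ID so the same visitor always gets the same variant."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The assignment is stable across visits, which keeps the test data clean.
print(assign_variant("visitor-42"))
```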
2. Do a qualitative analysis to understand what your audience needs
Qualitative analysis is when you ask your audience direct questions about your website/business, and it’s a fantastic way to get detailed information for your testing hypotheses. Let’s say you are selling a dog training course/eBook. You have a great sales page, plenty of testimonials, excellent design, and a bonus eBook to go along with the course. However, the conversions are dismal; on some days, you don’t get any conversions at all. What can you do?
You send an online survey and ask your visitors for feedback. Create a spot in the sidebar or on a slide-in that asks people, “What problems are you facing with this site?” The answers you get may actually be the solution to your conversion problem. For instance, visitors may tell you that your site loads too slowly. Armed with that knowledge, you can fix your site speed and see your conversions double.

But you can use this tactic to fix your landing pages too. For that, you might ask, “What’s the number 1 problem you face with your dog?” You’ll likely receive many answers, but suppose the most common problem your visitors report is that their dog doesn’t listen to them.
You can now frame your sales page in a manner that resonates with this demand. You can also set up an opt-in pop-up that pre-sells the same idea with a free eBook. Best of all, you now know exactly what to test: one version of the sales page without the opt-in pop-up and one with it. With a list of targeted email subscribers, your conversions can soar. The secret is to let users give you direct feedback, then apply that information to smarter A/B tests that truly move your conversion rate.
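Finding the most common answer in a pile of free-text responses is simple to automate. Below is a rough Python sketch; the responses and keyword themes are invented for illustration:

```python
from collections import Counter

# Hypothetical answers to "What's the number 1 problem you face with your dog?"
responses = [
    "He never listens to me",
    "She won't listen when off the leash",
    "Pulls on the leash constantly",
    "Doesn't listen to commands",
]

themes = Counter()
for text in responses:
    lowered = text.lower()
    if "listen" in lowered:
        themes["doesn't listen"] += 1
    if "leash" in lowered:
        themes["leash pulling"] += 1

# The top theme becomes the angle for your sales page and opt-in pop-up.
print(themes.most_common())  # [("doesn't listen", 3), ("leash pulling", 2)]
```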
3. Make sure you reach statistical significance
Statistical significance refers to “the low probability of obtaining results at least as extreme, given that the null hypothesis is true.” In plain language, it means you can be reasonably confident the results of your test weren’t a fluke. Statistical confidence is the likelihood that the same results would be repeated. We talk about statistical significance in A/B tests because of chance. There are other factors to consider as well. If the sample size is too small, you can’t be confident the results are reproducible; a sample of 10 to 100 people is generally considered low. In the example below, two versions of a landing page were tested.

Version A: upload button bold; convert button bold; convert button has a right arrow.
Version B: all buttons regular weight; no right arrow on the convert button.

At first glance, version A, with its bold CTAs, seems to have made the page more usable. But the sample sizes were too small: only 128 users saw version A and 108 saw version B, so you can’t trust that conclusion.
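To put numbers on this, here’s a quick Python sketch using a two-proportion z-test from statsmodels. The visitor counts come from the example above; the conversion counts are assumed, since the example doesn’t report them:

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [16, 9]    # hypothetical conversions for versions A and B
visitors = [128, 108]    # sample sizes from the example above

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# With samples this small, p lands far above the usual 0.05 threshold,
# so the difference could easily be chance -- no winner can be declared.
```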
4. Understand the role of chance
It’s always possible that conversions improved not because of the changes you made, but because of visitors’ moods, the time of year, or something else entirely. Also remember that statistical significance doesn’t mean practical significance: just because a test is statistically significant doesn’t mean it’s practically meaningful. As you increase the sample size, you may detect small differences in conversions, on the order of 1 to 2%. For most websites, though, these small lifts mean little, and the cost of obtaining them may not be worth the limited improvement.
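You can see this effect with the same z-test. In this sketch (all numbers hypothetical), an identical half-point lift is statistically invisible at a small sample size and highly significant at a huge one, yet its practical value is unchanged:

```python
from statsmodels.stats.proportion import proportions_ztest

# A 4.0% vs 4.5% conversion rate, tested at two very different sample sizes.
for n in (2_000, 200_000):
    conversions = [int(n * 0.040), int(n * 0.045)]
    _, p_value = proportions_ztest(conversions, [n, n])
    print(f"n = {n:>7} per variant -> p = {p_value:.4f}")
# Only the larger test is "significant," but the lift is the same half
# point either way -- significance alone doesn't make it worth shipping.
```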
5. Do not stop early
You should not stop a test early, even if it appears that one version of the test is winning. Until you reach the predetermined sample size you set for the test, chance may still be at play.

Say you are planning to run an A/B test for one full week to meet a sample size of 10,000. What if after two or three days you see a conversion rate of 4% on one version and 5% on the other? Keep going until the sample size is met (and possibly even beyond).
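That predetermined sample size shouldn’t be a guess; you can compute it before the test starts. Here’s a Python sketch using statsmodels’ power calculator, plugging in the hypothetical 4% and 5% rates from above:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# How many visitors per variant to reliably detect a lift from 4% to 5%?
effect = proportion_effectsize(0.05, 0.04)  # Cohen's h for the two rates
n_required = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_required:,.0f} visitors needed per variant")
# Several thousand visitors per variant -- peeking on day two, long before
# that target is reached, risks crowning a winner that is really just noise.
```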
6. Test multiple variables
Although A/B testing traditionally varies just one element at a time, there’s a lot more you can do with multivariate testing. Once you’ve tested headlines and CTA buttons in isolation, test combinations of those variables.
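Before committing traffic, it’s worth counting how many combinations you’re signing up for. This sketch, with made-up headlines and button styles, shows how quickly the variant count multiplies:

```python
from itertools import product

# Hypothetical elements to combine once single-variable tests are done.
headlines = ["Train your dog in 7 days", "Stop the barking today"]
cta_labels = ["Get the course", "Start now"]
cta_colors = ["yellow", "green"]

variants = list(product(headlines, cta_labels, cta_colors))
print(f"{len(variants)} combinations to test")  # 2 x 2 x 2 = 8
# Each added variable multiplies the variant count, and the sample size
# you need grows with it -- plan your traffic budget accordingly.
```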