When in doubt, turn to data. A/B testing is the core of internet marketing, and the data it collects doesn't lie. The rule of thumb for A/B testing is, “keep it simple.” I’ve listened to speakers describe their A/B testing method as: throw everything in and, boom, you’ve got data. Although that would seem like a great idea, you might find yourself losing cash fast.
A/B testing can be used in PPC, SEO, and social media platforms. Regardless of the channel you are using, the ultimate goal is the same: improve conversions and increase your ROI. To achieve that, you need to do some basic A/B testing.
Why You Should Keep It Simple
When you’ve got too many things going on at once, it’s easy to lose track of them. Testing one variable at a time, while holding everything else constant, makes results easy to compare. If all other factors are held constant, you know for sure that any difference in results comes from the one variable you changed. With too many variables, you won’t be able to attribute the success to any one of them.
What Should You Be Measuring?
What you measure depends on the variable and the goal of the campaign. If you’re testing new ad copy (the variable), then you should be measuring the click-through rate (CTR). If you’re testing the color of the submit button on the landing page, then you should be measuring conversions.
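To make the distinction concrete, here is a minimal sketch of the two metrics, using hypothetical numbers (the clicks, impressions, and conversions below are made up for illustration):

```python
def ctr(clicks, impressions):
    """Click-through rate: fraction of impressions that produce a click."""
    return clicks / impressions

def conversion_rate(conversions, clicks):
    """Conversion rate: fraction of clicks that convert."""
    return conversions / clicks

# Ad-copy test (hypothetical numbers): compare CTR between versions.
ctr_a = ctr(clicks=120, impressions=4000)
ctr_b = ctr(clicks=150, impressions=4000)

# Button-color test (hypothetical numbers): compare conversion rate instead.
cr_a = conversion_rate(conversions=18, clicks=600)
cr_b = conversion_rate(conversions=27, clicks=600)
```

The point is simply to match the metric to the variable: a copy change happens before the click, so CTR captures it; a landing-page change happens after the click, so conversion rate does.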
How to Tell Which Test Is the Winner
This isn’t a race. It’s a competition. An A/B test should be treated like a match between two players: you set the rules and, depending on the goals of the match, you decide the winner. Some things that need to be considered when comparing metrics include:
This can be tricky. How much data do you need to be able to determine a winner? A small sample means you don’t have enough to compare; however, there are exceptions. Some campaigns don’t have a lot of search volume, which means there is inherently little data to compare. In an ideal world, we want to aim for 100 conversions before deciding a winner. Campaigns with little data can be run for a longer period of time to achieve greater statistical significance. Even so, some campaigns could take up to 6 months to reach 100 conversions. What do you do then? Keep your focus on one variable, make a radical change, and run the test.
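If you want a rougher answer than a flat 100-conversion target, a standard two-proportion power calculation estimates how many visitors each variant needs before a given lift is detectable. This is a minimal sketch using only the Python standard library; the 2% and 3% rates below are hypothetical:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, p_variant, alpha=0.05, power=0.8):
    """Rough visitors needed per variant to detect the lift (two-sided test,
    standard two-proportion power formula, equal-sized variants)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_base)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical: detecting a lift from a 2% to a 3% conversion rate.
n = sample_size_per_variant(0.02, 0.03)
```

Note how quickly the required sample grows as the expected lift shrinks; that is why low-volume campaigns call for a radical change rather than a subtle one.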
How long should you run a test for? Ideally, you should allow your testing campaigns to run for two to four weeks to collect data. Obviously, the more time you have to run your tests, the more data you’ll have to compare. There is no such thing as too much data (again, as long as you know how to read it correctly). When it becomes clear that Test A is performing much better than Test B, you have a winner, right? Well, not exactly. A 75% confidence level still means there is a one-in-four chance the difference is just noise. A good confidence level stands at 95%.
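One common way to put a confidence number on “A beats B” is a two-proportion z-test. This is a minimal sketch, not a full testing framework, and the conversion counts below are hypothetical:

```python
import math
from statistics import NormalDist

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: confidence that B's rate exceeds A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return NormalDist().cdf(z)  # e.g. 0.95 means 95% confidence

# Hypothetical: 100 conversions from 2,000 visitors vs. 130 from 2,000.
confidence = confidence_b_beats_a(100, 2000, 130, 2000)
declare_winner = confidence >= 0.95
```

Only declare a winner when the confidence clears your threshold; below 95%, keep the test running and collecting data.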
Before launching a test, make sure you do your homework. There are numerous factors to consider that may affect the results of your tests and could lead to invalid data. Common mistakes can be costly ones, and tests are supposed to be a means of improving performance. Here are a few of the most common mistakes, and how to avoid them.
Lack of Research.
The number one most common mistake is lack of research. The more research you do before testing, the higher the quality of your results will be. If you don’t spend enough time up front, you’ll probably end up wasting precious resources and brainpower figuring out the unknowns later.
For example, running a test during the holiday season could affect your data. It’s the time of year when there are measurably more online purchases and conversions. If you don’t account for factors such as seasonality, they can alter the results of your tests and leave you with questionable data.
Too Many Variables.
When it comes to variables, less is more. Say you’ve decided to run a one-variable test on one campaign, but you’re also going to run a different one-variable test on another campaign, and yet another test on the landing page. Like external factors, this can cause discrepancies in your data. Although it may seem efficient, these overlapping one-variable tests effectively become one multi-variable test, which is much harder to analyze conclusively.
A/B testing can be extremely valuable if you’re careful. There are endless possibilities of tests you can run, but the tools won’t do the thinking for you. Keeping your tests simple will keep you efficient, and keep the data valid.