Ask any business at the top of its sector how it got there, and the answer will invariably involve a lot of hard work, and a lot of trial and error.
Trial and error is an invaluable tool: yes, nobody wants to make mistakes. But if you don’t make mistakes, you can’t learn or improve. It’s inevitable that, within any business, some strategies will work better than others.
And that’s exactly what A/B testing is for. A/B testing, or split testing, gives marketers the chance to experiment with different messages to see which works best.
It sounds like a very simple premise: in fact, it’s something which should form the basis of most marketing strategies.
Despite this, 60% of marketers do not use A/B testing*. That’s more than half the industry. Marketers who don’t use A/B testing can’t do the best job for their business or clients because they only know half the story.
For online marketing, split testing can increase traffic by as much as 40%.
Yet it still seems to fall by the wayside. With only 40% of marketers trying different strategies to find the best approach, one does wonder how the other 60% justify their paychecks.
If it’s a question of lacking knowledge, then that is easily fixed. If time constraints are a problem, then perhaps better time management, or prioritisation, is needed.
The bottom line, though, is that if you haven’t used A/B testing on any of your campaigns, you’re selling yourself short. And no marketer wants to do that.
The basic premise is simple and obvious. A/B testing is carried out to discover which of two variations performs better. A very simple example would be to create two landing pages, with the same URL and call to action, but with something different about them.
This could be the colour scheme, font, layout, different fields on a form or even just a button placed differently. Click-through traffic is divided equally between the two variations and the one with the most conversions is considered the better option of the two.
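For readers who like to see the mechanics spelled out, here is a minimal sketch in Python of how such a split might be tallied. The variant names, counters and function names are illustrative assumptions, not any particular testing tool’s API:

```python
import random

# Hypothetical tallies for the two versions of the landing page described above.
results = {
    "A": {"visitors": 0, "conversions": 0},  # the original page (the control)
    "B": {"visitors": 0, "conversions": 0},  # the page with one element changed
}

def assign_variant():
    """Divide click-through traffic equally between the two variations."""
    return random.choice(["A", "B"])

def record_visit(variant, converted):
    """Count the visit, plus a conversion if the call to action was followed."""
    results[variant]["visitors"] += 1
    if converted:
        results[variant]["conversions"] += 1

def conversion_rate(variant):
    stats = results[variant]
    return stats["conversions"] / stats["visitors"] if stats["visitors"] else 0.0
```

Once enough visits have been recorded, comparing conversion_rate("A") with conversion_rate("B") shows which version converted more of its share of the traffic.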
The more variables you test, the better. The name suggests a dual-faceted approach, but really, the more information you can gather about a page’s performance, the more you can change to maximise its potential.
Of course, to keep results accurate and, most importantly, measurable, testing should only be carried out on one specific variable at a time. Once you have established the difference one variation makes, move on to the next.
Not all results are going to be decisive or have immense importance.
A change of colour scheme probably won’t have the same effect as an improved call to action.
But deciding whether or not to make changes as a result of a test phase becomes easier the more tests and analyses you complete.
Eventually, it is possible to test more than two variations simultaneously (which, handily, is known as A/B/C testing), or to make the two test variations much more complex.
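As a rough, hypothetical sketch of how the same idea stretches beyond two versions, the snippet below splits traffic evenly across any number of variations and reports the one with the most conversions (the names and structure are assumptions for illustration only):

```python
import random

def run_split_test(variant_names):
    """Split traffic equally across any number of variations (A/B/C testing)."""
    tallies = {name: {"visitors": 0, "conversions": 0} for name in variant_names}

    def assign():
        # Each visitor is sent to one of the variations at random.
        return random.choice(variant_names)

    def record(variant, converted):
        tallies[variant]["visitors"] += 1
        if converted:
            tallies[variant]["conversions"] += 1

    def best_variant():
        # The variation with the most conversions is considered the better option.
        return max(tallies, key=lambda v: tallies[v]["conversions"])

    return assign, record, best_variant

# Hypothetical usage: three versions of the same landing page.
assign, record, best_variant = run_split_test(["A", "B", "C"])
```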
All Internet marketing channels should be the subject of A/B testing, including blog pages, e-mail campaigns, social media profiles and paid search ads. However, only one variable of each should be tested at any one time, to avoid confusion between results.
Keeping records of the steps that have been followed in the testing process is all-important.
At each stage, a copy of the original A (the control) should always be kept, as should each treatment of element B, so that results can be analysed correctly.
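As one possible way to keep such records, the hypothetical sketch below stores what was changed and how the control and treatment performed at each stage; the field names and figures are purely illustrative:

```python
from datetime import date

# A hypothetical test log: keep the control (A) and every treatment of element B,
# along with what was changed and how each version performed.
test_log = []

def record_test(element_changed, control_rate, treatment_rate):
    """Keep a record of each stage so results can be analysed correctly later."""
    test_log.append({
        "date": date.today().isoformat(),
        "element_changed": element_changed,  # e.g. "call-to-action button colour"
        "control_conversion_rate": control_rate,
        "treatment_conversion_rate": treatment_rate,
    })

# Hypothetical entry for one completed test (the figures are illustrative only).
record_test("call-to-action button colour", control_rate=0.031, treatment_rate=0.038)
```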
A/B testing is an essential part of any marketing strategy. Doing one-off tests may yield short-term results, but to really succeed, testing has to be a continuous process.
For some companies, finding the time and resources internally to carry out thorough, meaningful A/B testing may be unfeasible. But that’s where Internet marketing experts like ClickThrough Marketing come in.
* According to Marketing Sherpa's 2011 'Landing Page Benchmark Survey'.