Irritating, isn’t it?

You hear everyone and their mother praising A/B testing. You read about how it helped them reduce bounce rate, increase conversions and conversion value, and bring in more sales.

And yet…

Every split test you launch falls flat on its face.

Not a single extra visitor notices the new Buy Now button. Your signup form still generates a near-zero conversion rate. And users bounce from the product page as if nothing’s changed on it.

Hell, even if the conversion rate did go up, you know it’s because of the time of the year, not because of what you tested!

However, in spite of what you might be thinking, A/B testing works, and there’s plenty of evidence to prove it.

The problem with it, however, is that it’s just as easy to sabotage split tests as it is to set them up.

And in this post, I’m going to show you four mistakes you might be unknowingly making that kill off any A/B tests you run.

Intrigued? Then let’s get started.

#1. Running “Quick” Tests that Distract You from the Real Problem

You know, I’m sure most conversion rate specialists cringe when hearing the word “quick”.

But I’d also imagine it’s one of the most common ones uttered by their clients.


Here’s the thing, though:

A/B testing is not a quick fix.

If you want to find out which page elements positively affect conversions, you first need to research and understand your audience and how they respond to your business model, identify the potential problems causing low conversions, and finally develop hypotheses about how you’re going to fix them.

And then, launch split tests to validate them.

That’s the only way to uncover the real solutions that can increase your bottom line.

“Quick” split tests, on the other hand, focus on the solution, not the problem. Testing different button colors isn’t a meaningful test. However, analyzing the performance of an “Add to Cart” button and developing tests based on the data is.
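To make that distinction concrete, here’s a minimal sketch in plain Python of the kind of analysis that should come first. The event log, event names, and visitor IDs below are all invented for illustration; in practice, you’d pull this from your analytics tool:

```python
# Hypothetical analytics event log: (visitor_id, event_name) pairs.
events = [
    ("v1", "view_product"), ("v1", "add_to_cart"),
    ("v2", "view_product"),
    ("v3", "view_product"), ("v3", "add_to_cart"), ("v3", "checkout"),
    ("v4", "view_product"),
]

# Collect the unique visitors who reached each funnel step.
step_visitors = {}
for visitor, event in events:
    step_visitors.setdefault(event, set()).add(visitor)

viewers = len(step_visitors["view_product"])
adders = len(step_visitors["add_to_cart"])
print(f"Add-to-cart rate: {adders / viewers:.0%}")  # 2 of 4 viewers -> 50%
```

If your real add-to-cart rate looked low against a sensible benchmark, that’s a data-backed problem worth building a hypothesis around. A button-color whim isn’t.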

#2. Believing that We Humans Make Rational Decisions Limits Your Testing Options

Fact:

As marketers, we often make assumptions about customers: their preferences, beliefs, and the drivers of their behavior.

Based on data, we try to guess what would convince a person to make a buying decision, sign up for a mailing list, or convert in any other way.

However, the problem is that we humans rarely operate on a rational basis.

For example, we don’t base our decisions on logic but on emotions.

Antonio Damasio, a professor of neuroscience at the University of Southern California, conducted an experiment with people who had damage to the part of the brain where emotions are generated and discovered that they not only couldn’t feel any emotions but…

…they also couldn’t make decisions.

His subjects could explain in logical terms what they should be doing. But making the decision to do it often proved too difficult for them.

We often avoid decision-making

For example, we often select the default option if it means not having to make a decision at all.

Dan Ariely explained this behavior in his fantastic TED Talk.

We’re also highly suggestible

Walter Dill Scott, one of the first psychologists to research our responses to advertising, wrote:

“Man has been called the reasoning animal, but he could with greater truthfulness be called the creature of suggestion. He is reasonable, but he is to a greater extent suggestible.” (source)

And think about it:

How often are you swayed into making a decision by a suggestion from a friend or a person you perceive as an authority?

How often do you buy a product because someone you hold in high regard has bought it too?

Or visit a particular holiday destination just because your friends have gone there already?

Just take a look at some subliminal messages Hillary Clinton has been testing for her campaign:

#1. Note the subliminal “Thank you” message on this page (in the background, upper left corner, behind the popup), displayed before the person makes a donation and nudging them to make it.

#2. The “She’s with…” message (same spot in the background, behind the upper left corner of the popup).

And the more information we have, the poorer decisions we make.

A study from the University of Texas at Austin found that the more information we have about available options, the poorer choices we make, particularly about long-term outcomes.

Ok, but what’s that got to do with A/B testing?

A lot.

You see:

Assuming that the decisions your customers make are based on logic and reason reduces the number of potential hypotheses you could test.

Always focusing on the logical reasoning behind a decision means that you rarely test “unusual” scenarios. These, however, might lead to much better results than running yet another test on the location of the Sign Up button.

#3. Ignoring the Scientific Process Leads to Poor Test Quality

Look:

If test after test fails to deliver any meaningful results whatsoever… it’s most likely because you didn’t design them properly.

Perhaps you simply replicated other people’s tests or followed best practices when you should have been designing your tests from scratch with the Scientific Method.

The Scientific Method is a set of procedures that helps you formulate, test, and modify a viable hypothesis.


The process begins with an observation stage that leads to identifying a problem and formulating testable hypotheses.

It then concludes with analyzing the research findings and drawing conclusions that, in turn, become the foundation for new research and hypotheses.

I explained how to apply the Scientific Method to designing an A/B test in detail in this post.
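And to illustrate the “test” step itself, here’s a minimal sketch of one common way to validate a result: a two-proportion z-test on conversion counts. The numbers are invented, and this is just one validation approach among several, not the only correct one:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates; returns (z, p)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Invented numbers: 480/12,000 conversions on the control (A)
# vs. 560/12,000 on the variant (B).
z, p = two_proportion_ztest(480, 12_000, 560, 12_000)
print(f"z = {z:.2f}, p = {p:.4f}")
# A small p (here ~0.011, below the usual 0.05 cutoff) suggests the lift
# probably isn't random noise, so the hypothesis survives this round.
```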

But let me make something clear:

Using the Scientific Method doesn’t mean that you have to come up with every potential solution to your hypotheses from scratch.

Apps like Competeshark allow you to monitor tests your competitors are running and identify solutions you probably wouldn’t think of.

However, the Scientific Method should always form the foundation for every test you run. Always.

#4. A/B Testing at the Wrong Time of the Year Obscures the Results

You know:

Your buyers’ behavior changes depending on the time of the year.

For example, seasonality affects every variable in buyer behavior. Just think of how shoppers behave during the Black Friday craze.


Your company might also operate differently at certain times of the year.

You might be stocking seasonal products for a major event in your area or offering more promotions than usual. You might be sending more emails, publishing more content, etc.

And needless to say, all this behavior will be reflected in your A/B testing results.

As a result, you might never be certain whether your hypothesis worked or whether shoppers’ seasonal behavior caused the test to deliver the improvement you were hoping for.

Therefore, as a rule, run important tests during natural business-cycle periods to get the most objective data, and avoid testing during peak season, for instance.
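As a rough sanity check before you schedule an important test, you can look at how much your conversion rate already swings from week to week. A minimal sketch, with made-up weekly rates and an arbitrary 15% threshold:

```python
import statistics

# Invented weekly conversion rates (conversions / visitors) for the
# last eight weeks; in practice, pull these from your analytics tool.
weekly_rates = [0.041, 0.043, 0.040, 0.042, 0.071, 0.089, 0.044, 0.042]

mean = statistics.mean(weekly_rates)
swing = statistics.stdev(weekly_rates) / mean  # coefficient of variation

print(f"mean rate: {mean:.1%}, week-to-week swing: {swing:.0%}")
if swing > 0.15:  # arbitrary threshold, purely for illustration
    print("Large swings detected - a seasonal spike could distort a test.")
```

Big swings don’t forbid testing, but they do mean a “winning” variant might just be riding a seasonal wave.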

Conclusion

You know:

A/B testing is a great way to increase conversions, reduce bounce rate, and drive more sales.

But…

It is as easy to sabotage your test as it is to set it up.

To avoid that, you should, at the very least, design your tests around the Scientific Method, target your audience’s emotional mind, and avoid testing at the wrong time of the year.

 

Want to legally spy on your competitors’ A/B tests?

Competeshark empowers you to keep tabs on your competitors’ websites and discover the exact tests they run, so you can make smarter business decisions and save time and money.
Get started for FREE now.


Pawel Grabowski

I'm the founder of Usermagnet - a content marketing agency for B2B SaaS companies. We create and promote content for you so you can focus on growing your SaaS.