Top 10 Essential A/B Testing Practices for Publishers

A/B testing is one of the most effective ways to increase conversion rates.

The problem is that most publishers get the process wrong.

A poorly implemented A/B test can cause a lot of problems for your website. Implementing changes based on a flawed test will hurt your conversion rates, and therefore your earnings.

On the other hand, done properly, A/B testing can help you double or even triple your conversion rates without having to drive more traffic or resort to arbitrage.

In this article, we will look at the top 10 essential A/B testing practices to follow in 2022.

Let's get started.




1. Segment your audience


A/B testing gives you insight into what your users do, but it doesn't tell you why they take a particular action. To run a successful test, you need to analyze the results in segments, not just as a whole.

Most publishers run A/B tests with only two variations. If variation A wins, they assume users prefer it.

This strategy can be misleading.

Even if variation A wins overall, variation B may still win in certain segments. Several different factors affect the conversion rate of your ads, and you have to take all of them into consideration.

For example, if you segment your A/B tests by device, you may notice that ads on desktop convert better than those on mobile. This could be because your ad format is not optimized for mobile: mobile users get a poor experience, so the conversion rate is low.

Even demographics can affect your test results.

Let's say you are running an ad for baby products. Who would be your ideal audience? Expectant mothers and mothers of newborns. So if you are A/B testing this ad and you set your test parameters to segment women by age alone, you will get a skewed result. If your ad is shown to all women, it will convert poorly.

You might assume the ad copy or call to action is the problem and ask advertisers to edit it. In reality, the problem is that you are not targeting the right audience: expectant mothers and mothers of newborns.

Knowing small details like this can help you set up your ads better.


Note: Be careful when segmenting. You don't want to be testing 10-20 things at once; you'll just end up with random, meaningless data.


To get reliable results, divide your test into no more than four segments at a time.

Here are common dimensions you can segment your ads by:

Ad targeting

  1. Gender
  2. Country
  3. Age
  4. Interests
  5. Buying behaviours
  6. Custom audiences (as in the example above, testing based not only on age but also on whether a woman is pregnant or has a newborn)
  7. Marital status
  8. Education level

Ad placement

  1. Format
  2. Location
  3. Size
  4. Page load speed

You can also segment by ad type, bids, and what you're optimizing your ads for: more engagement, conversions, or clicks?
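To make this concrete, here is a minimal sketch of slicing one test's results by device instead of reading only the overall winner. The file and column names (ab_test_results.csv, variant, device, converted) are hypothetical:

```python
# A minimal segmentation sketch: one row per visitor, where
# "converted" is 0 or 1. All names here are hypothetical.
import pandas as pd

df = pd.read_csv("ab_test_results.csv")

# The overall winner can hide segment-level losers...
print(df.groupby("variant")["converted"].mean())

# ...so run the same comparison per device to see where each variant wins.
print(df.groupby(["device", "variant"])["converted"].mean().unstack())
```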

2. Use an appropriate sample size for your A/B tests


To get a reliable result from your tests, you need to use an appropriate sample size.

Calculating the minimum sample size required for an A/B test keeps you from running a test on a sample too small to give conclusive results.

One way to calculate the ideal sample size is to use the AB Tasty sample size calculator. All you need to do is enter your current conversion rate, minimum detectable effect, and desired statistical significance to get the number of visitors you'll need for the test.

The minimum detectable effect is the smallest percentage lift in conversion rate you want the test to be able to detect.

The standard statistical significance level is 95%, although you can adjust it based on your risk tolerance.
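If you'd rather see the math such calculators run, here is a minimal sketch of the standard two-proportion sample-size formula; the 3% baseline and 10% relative lift at the bottom are made-up inputs:

```python
# Sample size per variant for a two-proportion z-test.
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance (95%)
    z_beta = norm.ppf(power)            # statistical power (80%)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. a 3% conversion rate and a 10% relative lift to detect
print(sample_size_per_variant(0.03, 0.10))  # roughly 53,000 per variant
```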


3. Don't end your A/B testing early


The duration of the test plays an important role in determining the reliability of the test results.

When A/B testing, give the test enough time to run its full course. If you end a test too early, you may make decisions based on inconclusive results, and those decisions will hurt your conversion rates.

For best results, run each test for several weeks. I recommend running each test for at least 3-4 weeks; the longer the test, the more accurate the results.

Why so long?

Your website traffic is not constant; it changes from day to day. Factors such as seasonal changes, promotions, and even your competitors' actions can affect it. By running A/B tests for weeks at a time, you get a more representative sample, which ensures a more conclusive result at the end of the test.
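As a rough planning aid, here is a hypothetical helper that turns the sample size from the calculator above and your average daily traffic into a test length, rounded up to whole weeks so every day of the week is covered equally:

```python
# Estimate test duration in whole weeks (3-week minimum, per above).
import math

def test_duration_weeks(visitors_per_variant, n_variants, daily_visitors):
    total_needed = visitors_per_variant * n_variants
    days = total_needed / daily_visitors
    return max(3, math.ceil(days / 7))

# e.g. 53,000 visitors per variant, 2 variants, 5,000 visitors a day
print(test_duration_weeks(53_000, 2, 5_000))  # -> 4 weeks
```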


4. Be aware of threats to the validity of your results


Getting the right sample size, test duration, and audience segmentation isn't enough to give you valid results. Other threats may affect the validity of your results.

Here are some common threats you may encounter:

Instrumentation effect

This is a very common threat, and a very easy one to miss. It occurs when the tool you are using for testing reports faulty data, most commonly because the testing code was implemented incorrectly on your website. When you notice faulty data, stop the test and find the root of the problem. Then, reset the whole test and start over.
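One common way to notice faulty instrumentation is a sample ratio mismatch (SRM) check: if a 50/50 traffic split drifts far from 50/50, the tracking code is probably broken. A minimal sketch with made-up visitor counts:

```python
# SRM check: compare logged visitors per variant to the intended split.
from scipy.stats import chisquare

observed = [10_250, 9_750]    # visitors actually logged per variant
expected = [10_000, 10_000]   # what a 50/50 split should produce
stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.01:
    print("Possible sample ratio mismatch - check your tracking code")
```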

History effect

This is when external factors cause your data to malfunction.

An example of an external factor is a competitor dropping its paywall on topics you only offer under a subscription plan. This can skew some variables in your test, because many of your visitors may go try the competitor to get access to the free content.

It is critical that you pay attention to external factors that can skew test results.

Selection effect

This threat occurs when you assume that a slice of your traffic represents the whole.

For example, the visitors you get at the end of the week may differ from the visitors that land on your site at the beginning of the week. It is not correct to say that one part of your traffic is an exhaustive representation of all the traffic you receive. This is why point 3 is so important. Running your tests for a long time will help you avoid this threat.

5. Don't make changes in the middle of a test

Don't rush to implement changes in the middle of testing. If you end the test early or introduce new elements that were not part of the initial test variables, you will get unreliable results.

When you make changes in the middle of testing, you won't be able to determine if the new changes are responsible for increasing or decreasing conversions. Take no action until you have finished the initial test.

6. Test your ad placement regularly

Ad placement is one of the most important factors in an ad's success. Optimizing your highest-performing ad units can turn an average visitor into a lead. For example, above-the-fold ads that are optimized for user experience are usually the first thing a viewer sees before scrolling, which captures their attention.

So, how do you A/B test your ad placement?

  • Test multiple placement variations for your ad: We generally encourage publishers to balance top and bottom placements, avoid overcrowding them, and choose the ad formats that convert the most (a minimal assignment sketch follows this list).
  • Trust your tech platform: An increasing number of publishers rely solely on Google publisher products to monetize their ad inventories. This is excellent news, because Google Ad Manager reports are often the best starting point for improving your ad placement. In addition, Google regularly updates its guidance on the best-performing ad sizes and formats based on its data.
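For the first point, here is a minimal sketch of how placement variants are often assigned (the placement names and function are hypothetical). Hashing the visitor ID keeps each visitor in the same bucket across page views, so their experience stays consistent for the whole test:

```python
# Deterministic variant assignment by hashing the visitor ID.
import hashlib

PLACEMENTS = ["above_fold", "in_content", "below_fold"]  # hypothetical

def assign_placement(visitor_id: str) -> str:
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return PLACEMENTS[int(digest, 16) % len(PLACEMENTS)]

print(assign_placement("visitor-12345"))  # same placement on every visit
```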

7. Don't run too many tests in a row

Running many tests in a short period is not recommended, because each experiment needs a significant amount of time to collect enough data.

When you run too many tests in a short period, your sample sizes will not be large enough to give reliable results. You end up implementing changes based on half-baked results, which only leads to lower conversions. And because you don't see any positive movement, you keep running more tests and get stuck in a cycle.

After running the test, measure the results and decide what changes need to be implemented. When implementing a change, wait two weeks to see whether it has a positive impact on your bottom line. That way, you can say for sure what works and what doesn't.
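When measuring the results, a quick significance check tells you whether the difference you see is real. Here is a sketch using statsmodels' two-proportion z-test; the conversion counts are made up:

```python
# Was variant A's lift statistically significant?
from statsmodels.stats.proportion import proportions_ztest

conversions = [620, 540]        # variant A, variant B (hypothetical)
visitors = [20_000, 20_000]
stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.4f}")  # below 0.05 -> difference is significant
```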

8. Develop a hypothesis before testing

When you identify the problem you want to solve with your test, create a strong hypothesis.

A solid hypothesis will:

  1. Explain the problem you are trying to solve with the test.
  2. Propose a possible solution.
  3. Predict the outcome of the experiment.

A good hypothesis is measurable and will help you determine whether your test succeeded or failed. Without a hypothesis, your test will be just a guess.

The simplest formula for finding your hypothesis is:

Changing "A" to "B" will cause "C" to occur.

where:

  1. A = what your analysis indicates is the problem
  2. B = the change you believe will solve the problem
  3. C = the impact of the change on your KPI

For example: "Changing the call to action from 'Learn more' to 'Subscribe now' will increase sign-ups by 10%."

9. Ask your visitors for their opinions

A/B testing helps you visualize your visitors' path to conversion, but that's about it. While the science of A/B testing is crucial, you also need to understand how your customers feel when they interact with your website and ads.

How do you know why a visitor came to your website? How do you know why they don't click on your ads or sign up for your services?

This is where the request for feedback comes in. Gathering direct feedback from visitors removes the guesswork. Survey your visitors to help you understand their goals and the difficulties they face with your website. This will help you decide what to test.

From your visitors' feedback, you will be able to determine which variables have the greatest impact on your conversions.

10. Start and stop your tests on the same day

While this may seem like a no-brainer, you'd be shocked at how many people ignore it.

For reliable results, start and stop the test on the same day of the week. This keeps the day-of-week mix of your traffic constant across the testing period.
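A tiny illustration with Python's standard library: starting on a Monday and running whole weeks guarantees the test also ends on a Monday:

```python
# Whole-week test windows start and end on the same weekday.
from datetime import date, timedelta

start = date(2022, 3, 7)            # a Monday
end = start + timedelta(weeks=4)    # whole weeks -> also a Monday
print(start.strftime("%A"), "->", end.strftime("%A"))  # Monday -> Monday
```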

Final thoughts

The last thing to remember is that you should never stop testing. Conversion rate optimization is the most effective way to increase sales or subscriptions on your website.

As you collect more data over time, be sure to keep testing. A/B testing is one of the most effective ways to understand your visitors.

Getting new traffic is expensive and can be difficult. The most sensible option is to increase the conversion rate of the visitors you already have.

The above A/B testing best practices will help you get started.
