Impact of running an A/B test at a 100% test sample size

Halid_Delkic
Level 3

We want to run an A/B test on an outgoing email to our prospect database to promote a whitepaper download.

However, due to the time-sensitive nature of the whitepaper, we'd ideally like to dispatch the email as soon as it's published.

So the plan is to run an A/B test using an email program, but increase the test sample size to 100% - i.e. reaching everyone on the mailing list without waiting to declare a winner. (The idea is to use the test results for future campaigns.)

Curious to know if anyone has used an email program this way. I'm struggling to think of any negative impact. Any thoughts/insights are welcome.

Thanks,

Halid

Anonymous
Not applicable

Re: Impact of running an A/B test at a 100% test sample size

Hi Halid,

I have thought about this before and have never found a negative reason for doing it. If anything, it works quite well: because you send it as an A/B test, you get the useful comparison metrics.

Simon
Dory_Viscoglio
Level 10

Re: Impact of running an A/B test at a 100% test sample size

Hi Halid, I'm not sure if it was an isolated incident, because Marketo was never really able to clarify, but the last time I tried to send my sample to 100% of my test group, it only sent to a percentage of it (I want to say about 75%), and then sent the winner to the remainder of the list at the end of the test.

This could have been an isolated incident, and maybe sending to 100% of your DB at a time works, but since that issue I haven't tried it using email programs. Instead, I set up two separate emails and use a flow step that says "If random sample equals 50%, send email A; otherwise, send email B". This is also nice for me because it gives me the standard reporting I prefer to look at.
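For anyone curious what that split logic amounts to, here's a minimal sketch in Python of a 50/50 random assignment. It only illustrates the concept; it is not how Marketo implements its Random Sample constraint.

```python
import random

# Minimal sketch of a 50/50 random split, in the spirit of
# "If random sample equals 50%, send Email A; otherwise, send Email B".
# Illustration only - not Marketo's implementation.
def assign_variant(leads, seed=None):
    rng = random.Random(seed)
    # Each lead independently lands in sample A with probability 0.5.
    return {lead: ("Email A" if rng.random() < 0.5 else "Email B")
            for lead in leads}

leads = [f"lead_{i}" for i in range(10)]
for lead, email in assign_variant(leads, seed=42).items():
    print(lead, "->", email)
```

With a large enough list the two samples end up close to equal in size, but on small lists the split can drift noticeably from 50/50, which is worth keeping in mind when reading the comparison metrics.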

Anonymous
Not applicable

Re: Impact of running an A/B test at a 100% test sample size

Hey Halid,
Dory's incident above may be something to be concerned about so I'd double check on that with support. A potential negative to doing a 50/50 test is if one does much worse than the other you would essentially be wasting half of your send. However, you will have to decide if that is important in this campaign. If this campaign isn't essential you can afford the risk then it would put you in a better position for the next campaign.
Anonymous
Not applicable

Re: Impact of running an A/B test at a 100% test sample size

My 2 cents - if you are worried about experiencing Dory's situation in this particular case, just don't use an email program. Use a regular smart campaign instead and do one email send where half your smart list gets email version 1 and half gets version 2, using the random sample condition in the flow step.

To me, the only real benefit of the email program is the ability to automatically select the winning version of your A/B test and schedule the subsequent blast ahead of time. If you are not doing a 'winner' blast after your A/B test, I don't see a reason to use the Email Program and be subject to potential glitches - just not worth it!
Halid_Delkic
Level 3

Re: Impact of running an A/B test at a 100% test sample size

Thanks for the feedback, everyone.
 
FYI - I ran the mailing with a 100% sample size, and to my knowledge have not encountered any negative impact.
 
For anyone interested, a few things to note:
 
Even at a 100% sample size, you still need to set winner criteria and schedule a time for the winning email in order to approve the mailing.
 
In my case, the mailing list was created using smart lists (i.e. it was not a static list).
 
So by the time the A/B test was completed, a handful of new leads matching the criteria in my campaign selection had been added to our database. They received the winning email.
 
As for analytics, the Email Performance and Email Link Performance reports both display the mailing stats in a single row - as one complete mailing, rather than two separate emails.
 
For insight into individual email performance, I used the Email Program dashboard analytics.
 
 
And for anyone interested in my A/B test results - it was a tie, so back to the drawing board :)
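If you want to sanity-check whether your own result is a genuine tie, a two-proportion z-test is one rough way to do it. The counts below are invented for illustration; they are not my actual numbers.

```python
import math

# Sketch of a two-proportion z-test for comparing click rates of two
# email variants. Counts are hypothetical, purely for illustration.
def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(clicks_a=120, sends_a=5000, clicks_b=110, sends_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05 here -> effectively a tie
```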
 
Halid