I've run an A/B test using the whole-email variation test type with an audience of 3,000 people. The test was sent to everyone (I selected 100%, so I haven't set any winner criteria). So far one email has gotten 45% more opens than the other (about 60 people). I'm confused because the subject line and preview text are the same; the low-opens variation even had slightly more deliveries (10+ people) than the other.
The test audience is big enough to draw a statistically significant conclusion that the subject line and/or preview text in A is much better than in B, yet that can't be the case, since both are the same.
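For what it's worth, the significance claim checks out. A minimal sketch with a standard two-proportion z-test, using illustrative numbers consistent with the post (two groups of ~1,500, with variation A getting roughly 60 more opens than B; the exact open counts here are assumptions, not from the post):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-statistic using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed illustrative counts: A gets 193 opens of 1500 delivered,
# B gets 133 of 1500 (about 45% fewer, a gap of 60 opens).
z = two_proportion_z(193, 1500, 133, 1500)
print(round(z, 2))  # ≈ 3.52, well above the 1.96 cutoff for p < 0.05
```

So a gap of this size at this sample size would indeed be flagged as significant by a conventional test, which is exactly why the result is puzzling when both variations share the same subject and preview text.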
Any idea why that would happen?
Which criteria does Marketo use to divide the audience into A and B?
How does Marketo make sure that audience A has the same attributes as audience B?
I feel like Marketo didn't divide the audiences "fairly".
My issue now is: how can I rely on Marketo to run A/B tests for us?
The test audience is big enough to draw a statistically significant conclusion that the subject line and/or preview text in A is much better than in B, yet that can't be the case, since both are the same.
The distribution is random, there's not expected to be any clustering/unclustering by attribute values.
Knowing this how do you run A/B tests in Marketo to produce accurate results?
"accurate" is too vague, exactly what distribution algorithm are you expecting?
Most actions affect one group of our users more than others. So when we run A/B tests we want to make sure that the 2 groups are roughly equivalent so the differences in their behavior can be attributed.
If you're asking which specific distribution (binomial, normal, t-distribution, chi-square, etc.) we want... then unfortunately I don't have the answer to that.
Do you know which distribution is used by Marketo for A/B tests?
I think you're overthinking it. It's equivalent to randomly shuffling the set using a (P)RNG, then slicing it into 2 subsets, i.e. based on a uniform distribution. There may be slight bias introduced by the exact implementation (i.e. crypto hash vs. RNG), but I strongly doubt that's causing what you're seeing.
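A minimal sketch of that shuffle-and-slice idea (this is an illustration of the concept, not Marketo's actual implementation, which isn't documented):

```python
import random

def ab_split(leads, seed=None):
    """Assign leads to two groups by shuffling uniformly, then slicing.

    Every permutation is equally likely, so no attribute can
    systematically cluster in one group -- any imbalance is chance.
    """
    pool = list(leads)
    random.Random(seed).shuffle(pool)  # uniform random permutation
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]      # group A, group B

group_a, group_b = ab_split(range(3000), seed=42)
print(len(group_a), len(group_b))  # 1500 1500
```

Note that "no expected clustering" doesn't mean the two halves are identical on every attribute in any single run; it means there's no systematic bias across runs, which is all a random split guarantees.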
I'm pretty sure I know what the issue is. One variation, which had a GIF, went mostly to the Promotions/Other tab, or maybe even the spam folder, while the other went to the primary inbox. That's why there's such a big gap in open rates even though the subject and preview texts are the same.
Makes sense. You're always implicitly testing deliverability & inboxing alongside user engagement.