Hi! My company sends out around 3 newsletters a week. What I'm wondering is how to best format that so I can still properly A/B test each one? Would that necessitate creating a new program each time or should I just create a folder structure, clone the A-variant email for the B version and A/B test in two separate campaigns?
Thanks for any help!
Hey Steve,
There are 3 methods to A/B test emails in Marketo:
1. A/B test in email program - Email Test - A/B Test - Marketo Docs - Product Docs
2. Champion/Challenger in the email itself - Email Tests - Champion/Challenger - Marketo Docs - Product Docs
3. Random sample in the smart campaign - Some Tests on Random Sample Behavior
Here is my take on their pros, cons, and preferred use cases:
1. A/B test in email program - Pros: fast and easy to set up; works best for one-off email blasts. Cons: A/B test data is only available in the email program; it won't show in the email performance reports.
2. Champion/Challenger - Pros: easy to set up; works best for recurring/triggered emails. Cons: A/B test data is only available in the test; it won't show in the email performance reports.
3. Random sample in the smart campaign - Pros: data is available in analytics, and it arguably allows more flexibility than the other two methods (see the sketch below). Cons: you will need to create 2 or more separate emails to use it.
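For anyone picturing how the random sample behaves, here is a minimal conceptual sketch in Python. To be clear, this is not Marketo's actual implementation; the lead list, asset names, and per-lead roll are my illustrative assumptions, just to show why the split comes out approximately, not exactly, at the chosen percentage:

```python
import random

def random_sample_split(leads, sample_percent=50, seed=None):
    """Conceptually mimic a smart campaign 'random sample' choice:
    roughly sample_percent of leads get variant B, the rest variant A."""
    rng = random.Random(seed)
    assignments = {"newsletter-A": [], "newsletter-B": []}
    for lead in leads:
        # Each lead independently rolls against the sample percentage,
        # which is why real-world splits are approximate, not exact.
        if rng.random() * 100 < sample_percent:
            assignments["newsletter-B"].append(lead)
        else:
            assignments["newsletter-A"].append(lead)
    return assignments

audience = [f"lead-{i}" for i in range(1000)]
split = random_sample_split(audience, sample_percent=50, seed=42)
print({k: len(v) for k, v in split.items()})  # counts will be roughly equal, not exactly 500/500
```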
The simplest way to test your newsletters would probably be through an email program, but it all depends on your setup and how you prefer to analyze the data.
This is great, thank you so much! So I guess it doesn't matter if I'm declaring a champion anyway.
Why, then, do you think the email program would be best for me, considering these are individual emails I have to build for each blast, yet it involves using a single smart campaign?
It all depends on how you are using these newsletters and what you are trying to achieve with A/B testing.
If you have an email that you are planning to send just once, and you want to get the most out of it, use the email program. Send the test to a small percentage of the target audience, see which version performs better, and send it to the rest of the target group. You don't have to create two separate emails, and you can set the program to declare a winner automatically.
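To make the mechanics concrete, here is a back-of-the-envelope sketch of how a test slice gets divided. Marketo does this arithmetic for you inside the email program; the function name and the 20% test size below are my assumptions for illustration only:

```python
def ab_test_plan(audience_size, test_percent=20):
    """Plan an email-program A/B test: split a test slice evenly
    between variants A and B, then send the winner to the remainder.
    (Illustrative arithmetic only; Marketo handles this internally.)"""
    test_size = audience_size * test_percent // 100
    per_variant = test_size // 2
    winner_size = audience_size - per_variant * 2
    return {"variant_A": per_variant, "variant_B": per_variant, "winner_send": winner_size}

print(ab_test_plan(10_000, test_percent=20))
# {'variant_A': 1000, 'variant_B': 1000, 'winner_send': 8000}
```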
If your newsletters are part of a nurture stream in the engagement engine and are being sent on a regular basis, the easiest way to A/B test them would be through Champion/Challenger. Let it run for some time and then declare the champion.
Methods 1 and 2 are intended to improve the performance of an individual email, so you would have to declare a winner once you have enough data.
If you want to test a hypothesis, say you want to see whether a statement or a question works better in a subject line, use the random sample. You will have to create 2 emails, but you can then pull reports on all the emails tested through the random sample and analyze the data (a quick significance check is sketched below).
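If you go the random-sample route and want more than eyeballing the reports, a standard two-proportion z-test works on the send and open counts you pull from Marketo. This is a generic statistics sketch, not anything Marketo provides, and the example counts are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test on open rates: is variant B's open rate
    significantly different from variant A's?"""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(opens_a=220, sends_a=1000, opens_b=260, sends_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 2.09, p = 0.036; p < 0.05 suggests a real difference
```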
Let me know if that makes sense.
Hey Steve, regardless of how you end up setting up the actual email assets, I wanted to include one small piece of information that relates to newsletters/consistent sending of something to the same group of people.
If you don't use email programs, be sure to use a new smart campaign for each send. The reason for this is that Marketo will only track the send of something to someone the first time in a smart campaign. We had an issue where we tried to have 7 programs, 1 for each day of the week, and we set the reporting timeframe to just the past week. The first week everything looked fine, but in the second week the only results we saw were for people who happened to be new to the smart campaign that week, because Marketo wouldn't report on people it had already reported on (illustrated below). It sounds a little strange, but the end result was that we had to create a new smart campaign for each day's send. Let me know if you have any questions on this.
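To illustrate what we saw, here is a tiny Python simulation of that behavior, assuming the first-send-only reporting I described above; the names and audiences are invented for the example:

```python
def weekly_reported_sends(weekly_audiences):
    """Illustrate per-campaign reporting where each person is only
    reported the first time they qualify, so later weeks only show
    members who are new to that smart campaign."""
    already_reported = set()
    for week, audience in enumerate(weekly_audiences, start=1):
        new_this_week = [p for p in audience if p not in already_reported]
        already_reported.update(new_this_week)
        print(f"Week {week}: sent to {len(audience)}, reported {len(new_this_week)}")

# Same 3 subscribers each week, plus one new signup in week 2.
weekly_reported_sends([
    ["ann", "bob", "cam"],
    ["ann", "bob", "cam", "dee"],
])
# Week 1: sent to 3, reported 3
# Week 2: sent to 4, reported 1  <- only the newcomer shows up
```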
Also, my two cents: I always use smart campaigns and random sample, with two unique assets. It allows me to look back and see the test that we ran, as well as the results -- forever. Email programs have given me some issues in the past (that I'm sure have been resolved, but once I've been burned I tend to stay away), and champion/challenger tests lose their data once you declare a winner.
Good points, Dory!
I also prefer to do my A/B tests through the random sample. Email programs didn't give me any major issues, but I see them more as a quick-and-dirty way to send one-off emails.
Thanks Dory and Iryna! Your insights are so helpful!
How we have it set up right now is one email program called "Newsletters." Within that we have an email folder, a smart campaign folder and reports folder. We send newsletters 3x a week and we have to create a smart campaign every time we create a new newsletter. I realize this is probably the most logical approach for us (and to use random sample for A/B testing). None of our emails are triggered. It's all batch. To me, creating an email program 3x a week doesn't seem logical for some reason and seems like a misuse of the feature. Am I wrong?