Random Sample for Custom A/B tests

Hi all,

We want to try out a 50/50 A/B email cadence test. Basically, we want to see whether emailing our leads half as often will generate the same success.

I ~think~ this is how I would go about doing it:

  1. Create two static lists: one Control Cadence and one Test Cadence.
  2. Create a smart campaign whose smart list filter is Member of List is our Leads list, and whose flow step uses a Random Sample choice: if Random Sample is 50%, add to the Test Cadence list; default, add to the Control Cadence list.
  3. From there, I would set up my regular campaigns as usual with the Control Cadence list as the audience, and set up the test campaigns (half as many as the regular campaigns) with the Test Cadence list as the audience.
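
Not Marketo itself, but the split in step 2 can be sketched as a quick simulation (Python, with made-up lead IDs) to sanity-check that every person lands in exactly one cadence list:

```python
import random

# Hypothetical lead IDs standing in for the people in the "Leads" smart list.
leads = [f"lead-{i}" for i in range(1000)]

control_cadence = []  # stands in for the Control Cadence static list
test_cadence = []     # stands in for the Test Cadence static list

# Mirrors the flow step: "If Random Sample is 50%" -> Test Cadence,
# default choice -> Control Cadence.
for lead in leads:
    if random.random() < 0.50:
        test_cadence.append(lead)
    else:
        control_cadence.append(lead)

# Every lead ends up in exactly one list, split roughly 50/50.
assert len(control_cadence) + len(test_cadence) == len(leads)
```

The split won't be exactly 500/500 on any one run, only 50% in expectation, which matches how Marketo's Random Sample choice behaves.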

Am I missing anything else? All input appreciated.

Thanks!

Level 10 - Champion Alumni

Re: Random Sample for Custom A/B tests

Sounds like the right path. What are your success criteria? How will you know what happened?

For example, if you have RCE, you can watch funnel velocity, provided you've carefully set up the Programs instead of just separate streams.


Re: Random Sample for Custom A/B tests

Hi Josh,

Success is based on a metric that is tracked outside of Marketo (# of appointments created). We do not have RCE.

My problems with this test are:

1.) Having to create 50% more campaigns

2.) Our lists are defined by a person's interactions with us (booking an appointment, canceling an appointment, purchasing a product, etc.). During the test, if an action happens that should change which list a person belongs to (say a lead purchases a product), they would still be included in the test and would keep getting lead-specific emails, because the static lists wouldn't update. The only option I see to prevent that is to create new Test and Control static lists ahead of each email send to be sure I am reaching the right people.
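
One way to picture that workaround: rebuild the audience from current statuses right before each send, dropping anyone whose interaction means they should no longer get lead emails. A minimal Python sketch (the lead records and status values here are hypothetical, not Marketo data):

```python
# Hypothetical lead records; "status" stands in for the interactions
# (booked appt, canceled appt, purchased, etc.) tracked in Marketo.
leads = [
    {"id": "lead-1", "group": "test", "status": "lead"},
    {"id": "lead-2", "group": "control", "status": "purchased"},
    {"id": "lead-3", "group": "test", "status": "lead"},
]

def audience_for_send(leads, group):
    """Rebuild the send audience right before each send, keeping only
    people still in lead status for the given test group."""
    return [p["id"] for p in leads
            if p["group"] == group and p["status"] == "lead"]

test_audience = audience_for_send(leads, "test")        # ['lead-1', 'lead-3']
control_audience = audience_for_send(leads, "control")  # []
```

The random 50/50 assignment happens once; only the per-send snapshot is refreshed, so the test/control membership itself stays stable.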

Level 7 - Champion Alumni

Re: Random Sample for Custom A/B tests

If you want to test whether emailing your leads more or fewer times (I assume you mean multiple sends) improves their responses, then surely using behaviour to switch them between static lists ruins the test?

If you do need to switch them, you'd need smart campaigns listening for the behaviour and using Add to List and Remove from List flow actions to move them between the static lists, but it kind of sounds like this would defeat the purpose of the test?

I'd push them into separate control groups (your methodology for doing that with Random Sample sounds fine), run separate email programs using the people in each list to track successes, and then compare response rates (or whatever your engagement criterion is) after a defined time period or number of tests.
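
For that final comparison, a standard two-proportion z-test is one option. A small Python sketch using only the standard library (the appointment counts below are illustrative, not real data):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test comparing the response rates of two groups."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 60 appointments from 1000 control people
# vs 55 appointments from 1000 test people.
z, p = two_proportion_z(60, 1000, 55, 1000)
```

A large p-value here would be consistent with the original hope that half the cadence performs about as well, though "no significant difference" on a small sample isn't proof of equivalence.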