Hi Marketo community,
I'm wondering what the best practices are around when to start reporting on email metrics. My hunch is that 48-72 hours is enough time for the majority of behavior to be captured, but does anyone have hard stats around the best timing? Additionally, how long should I be running subject line and CTR A/B tests? I've been running them for 7-10 days at a time but would love any additional insights here.
Thanks,
Matt
I'm struggling to find resources to back this up (I'm sure I've seen it validated in a Litmus report somewhere...), but in my experience, and I suspect this is consistent with most others', the vast majority of engagement happens within 24 hours of the email being sent. Like everything, though, there's probably some variation here.
As for running tests: I run them for as long as it takes to reach a statistically significant result. For me this is a pretty major flaw in Marketo's A/B and Champion/Challenger testing functionality - there's no statistical significance calculator, and it's easy for marketers to make poorly informed decisions without some knowledge of stats. There are great statistical significance calculators online, and I stick with a >95% confidence level on everything. Depending on the size of your audience and test groups, and on your success metric, the time it takes to reach statistical significance will vary. If your success metric is based on email engagement and you haven't reached statistical significance within 24 hours, you're unlikely to reach it without another send.
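To make that concrete, here's a rough Python sketch of what those online calculators are doing under the hood: a two-proportion z-test comparing, say, opens between two subject line variants. The send and open counts are made-up placeholders, not data from a real campaign.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a, sends_a, successes_b, sends_b):
    """Two-sided p-value for a difference between two conversion rates."""
    p_a = successes_a / sends_a
    p_b = successes_b / sends_b
    # Pooled rate under the null hypothesis that both variants perform the same
    p_pool = (successes_a + successes_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

# Illustrative numbers: 5,000 sends per variant, 1,100 vs 1,000 opens
p_value = two_proportion_z_test(1100, 5000, 1000, 5000)
print(f"p = {p_value:.4f} -> significant at 95%? {p_value < 0.05}")
```

A p-value below 0.05 corresponds to the >95% confidence threshold I mentioned above; it's the same check the online calculators run, just spelled out.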
Thanks Grace! 24 hours makes sense to me and so 48 hours should be sufficient as well. I've run a dozen tests in the last few weeks but only 3-4 of them have come up as significant (using 95%). Is it more likely that my tests just aren't different enough to garner significant results? I can't imagine running a subject line test for a full month just to be told that the champion was already better-written.
Yeah, 48 hours should be fine if that works for you and you're not in a rush.
There could be a number of reasons why you haven't gotten many statistically significant results: your sample sizes could be too small (thinking not just about your sending audience but about your success metric), the differences could be too minor to have an impact, or it could just be an interesting example of two things performing equally well (or poorly!) as each other.
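If you want a rough sense of whether your sample sizes were ever big enough, here's a quick sketch of the standard minimum-sample-size formula for a two-proportion test at 95% confidence and 80% power. The 20% vs 22% open rates are just illustrative assumptions, not benchmarks.

```python
from math import ceil, sqrt
from statistics import NormalDist

def min_sample_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Minimum recipients per variant to detect a lift from p1 to p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a 20% -> 22% open rate lift needs roughly 6,500 sends per variant
print(min_sample_per_variant(0.20, 0.22))
```

If your test groups are well under that kind of number, a non-significant result doesn't tell you much either way - the test simply couldn't detect a difference that small.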
When my users are running A/B or Champion/Challenger tests, I always encourage them to test things that are meaningful - things with potential for longer-term impact on how we do things - and to be strategic about it. That could mean running multiple tests with effectively the same objective - say, do long subject lines perform better than short ones? - to do our best to control for other factors that could influence the test. But finding the things people don't care about, the things that don't impact performance - that can still be a good learning!