The reason I ask is that I've just submitted a support case with strong evidence that the testing dashboard data is inaccurate.
This was a whole-email test, with option A being HTML and option B being text-only. Marketo's out-of-the-box reporting shows 3 opens for B and zero conversions for either option.
However, after digging into the overall conversions for the past 24 hours (log below), I found 6 conversions in total: 4 for B and 2 for A. Checking each individual's Activity History confirms they converted from this campaign. (There's a quick tally sketch after the log.)
Any thoughts? Should I just do my own analysis and manually declare the winner going forward? And if I have to do that, what good is the built-in reporting?
Thanks in advance.
2018-04-25 11:17:16: Downloaded A Corporate Counsel's Guide to Successful Contract Lifecycle Management | B |
2018-04-25 11:18:42: Downloaded A Corporate Counsel's Guide to Successful Contract Lifecycle Management | A |
2018-04-25 11:28:31: Downloaded A Corporate Counsel's Guide to Successful Contract Lifecycle Management | B |
2018-04-25 11:49:55: Downloaded A Corporate Counsel's Guide to Successful Contract Lifecycle Management | B |
2018-04-25 13:25:31: Downloaded A Corporate Counsel's Guide to Successful Contract Lifecycle Management | B |
2018-04-26 03:06:19: Downloaded A Corporate Counsel's Guide to Successful Contract Lifecycle Management | A |
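For anyone who wants to reproduce the tally, here's a rough Python sketch. It assumes the activity export keeps the "timestamp: description | variant |" shape shown above; the parsing pattern is mine, not anything Marketo provides.

import re
from collections import Counter

# The six activity lines from the export above (truncated here for brevity).
log_lines = [
    "2018-04-25 11:17:16: Downloaded A Corporate Counsel's Guide ... | B |",
    "2018-04-25 11:18:42: Downloaded A Corporate Counsel's Guide ... | A |",
    # ... the remaining four lines from the export ...
]

# Pull the trailing test-variant letter out of the "| A |" / "| B |" field.
variant_pattern = re.compile(r"\|\s*([AB])\s*\|\s*$")

tally = Counter()
for line in log_lines:
    match = variant_pattern.search(line)
    if match:
        tally[match.group(1)] += 1

print(tally)  # for the six lines above: Counter({'B': 4, 'A': 2})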
Hey Lisa,
My first thought is that you may not have a large enough sample size to get meaningful results. I would send to at least 500 on each side, and even that is small. I could be wrong, though; maybe you are testing a larger sample.
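To put a number on that, here's a back-of-the-envelope two-proportion z-test in plain Python. The ~250-per-side split and the 2-vs-4 conversion counts are my assumptions based on this thread, not anything pulled from Marketo:

import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Two-sided z-test for a difference in conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; p-value is two-sided.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Assumed split: ~250 sends per side, 2 conversions for A, 4 for B.
z, p = two_proportion_z(2, 250, 4, 250)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z = -0.82, p = 0.41

At these counts, even B converting at twice A's rate is statistically indistinguishable from noise.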
Another thing: if you are doing whole-email tests, you should be measuring click-to-open rate as your winning criterion. Opens can be unreliable, particularly for text-only sends, since open tracking typically relies on an embedded image.
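Click-to-open rate is just unique clicks divided by unique opens; a quick sketch with made-up numbers:

def click_to_open_rate(unique_clicks, unique_opens):
    # Of the people who opened, what share clicked?
    return unique_clicks / unique_opens if unique_opens else 0.0

# Hypothetical numbers, just to show the calculation.
print(f"{click_to_open_rate(12, 150):.1%}")  # 8.0%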
This is a great test, by the way. I ran the same one, and text emails performed better every time.
Lisa,
I always declare an A/B test winner manually. The reason is that there are so many false positives in metrics like opens and clicks. Additionally, each email can have its own success metrics that aren't necessarily captured by opens and clicks. For example, regardless of opens and overall clicks, if an email had a better click-through rate on a specific link, that might be a better success metric than overall clicks.
For these reasons, I also let the test ride and manually choose a winner when I feel one variant achieved the email's goal better than the other.
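A quick sketch of what that per-link comparison might look like; all link names and counts here are made up:

# Hypothetical unique click counts per link, per variant, plus unique opens.
clicks_a = {"cta_button": 12, "footer_link": 3}
clicks_b = {"cta_button": 18, "footer_link": 2}
opens_a, opens_b = 140, 150

for link in clicks_a:
    rate_a = clicks_a[link] / opens_a
    rate_b = clicks_b[link] / opens_b
    print(f"{link}: A {rate_a:.1%} vs. B {rate_b:.1%}")

If the link that matters (say, the main CTA) does better on one variant, that can outweigh a worse overall open or click number.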
Interesting responses. Thanks!
It was a rather small sample size, around 500.
I've got some set up for next week and will manually declare the winner.