Hi there,
We recently tested a campaign and saw a huge difference between an email with images and one without, which we now understand. However, our team just created new email templates, ran the same test, and this time the open/click numbers are nearly identical. Curious why a new email template might log the same opens with images vs. no images, even though we know that images should typically mean a lower open rate?
... even though we know that images should have a lower open rate typically?
As the others mentioned, the “typically” isn’t broadly accepted.
Bear in mind that an open can't be registered in an Email Performance Report without an accompanying click unless the lead enables images: an explicit Email Open activity requires images to be enabled.
(This can be confusing because an EPR treats a click as if there were an open event, whether or not the open tracking pixel is downloaded. That's not the same as an open without a click.)
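To make that concrete, here's a minimal sketch of how pixel-based open tracking generally works. The routes, the lead_id parameter, and the log_activity helper are all hypothetical stand-ins, not Marketo's actual implementation:

```python
# Minimal sketch of pixel-based open tracking (hypothetical, not Marketo's code).
import base64
from flask import Flask, Response, redirect, request

app = Flask(__name__)

# A 1x1 transparent GIF: the "tracking pixel" embedded in the email HTML.
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

def log_activity(lead_id: str, activity: str) -> None:
    """Stand-in for whatever activity store the platform actually uses."""
    print(f"lead={lead_id} activity={activity}")

@app.route("/open/<lead_id>.gif")
def track_open(lead_id: str):
    # This only fires if the client downloads images; with images blocked,
    # no request ever reaches the server, so no open activity is logged.
    log_activity(lead_id, "open")
    return Response(PIXEL, mimetype="image/gif")

@app.route("/click/<lead_id>")
def track_click(lead_id: str):
    # A click proves the email was opened, so a report can infer an open
    # here even when the pixel above was never requested.
    log_activity(lead_id, "click")
    log_activity(lead_id, "inferred_open")
    return redirect(request.args.get("u", "https://example.com"))

if __name__ == "__main__":
    app.run()
```

The point is that the /open route is only ever hit when images load, while a click stands in as proof of an open on its own.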
I would caution against using open rates as anything that determines success -- because of things like Apple Mail Privacy Protection or other email clients that pre-fetch or auto-load images (like the Outlook reading pane), opens can be triggered artificially by the email client rather than by the person receiving the email.
On the other side of that, if someone has images blocked and didn't click into your email, you may never know that they actually opened it.
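If you still want to sanity-check open counts, one illustrative heuristic (the 30-second window and the record format here are my assumptions, not a standard) is to discount opens that land almost immediately after send, since prefetchers like Apple MPP tend to fire within seconds:

```python
# Illustrative heuristic for separating likely machine opens from human opens.
# The 30-second threshold and the timestamp format are assumptions, not a standard.
from datetime import datetime, timedelta

PREFETCH_WINDOW = timedelta(seconds=30)

def classify_opens(sent_at: datetime, opens: list[datetime]) -> dict:
    """Split open timestamps into likely-prefetch and likely-human buckets."""
    machine = [t for t in opens if t - sent_at < PREFETCH_WINDOW]
    human = [t for t in opens if t - sent_at >= PREFETCH_WINDOW]
    return {"machine_opens": len(machine), "human_opens": len(human)}

sent = datetime(2024, 1, 15, 9, 0, 0)
opens = [sent + timedelta(seconds=s) for s in (2, 5, 125, 4000)]
print(classify_opens(sent, opens))  # {'machine_opens': 2, 'human_opens': 2}
```

It's rough, but it can show whether your "opens" are concentrated suspiciously close to the send time.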
Hi @Kainelson ,
There's no rule of thumb that says two different tests will give different results.
But to understand this better, it would help if you could share more details on the type of A/B test you ran.