There are numerous ways to resolve this. I decided to rely on program statuses and did the following:
utm_term=clicked-bottom in the past
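To make that concrete, the approach amounts to a decoy link whose querystring carries the UTM value the smart list filters on. A sketch along those lines -- the destination URL and styling here are illustrative assumptions, not the exact markup used:

```html
<!-- Decoy link: invisible to humans, but scanners that fetch every URL in
     the email will register a click carrying this querystring, which the
     smart list filter above can match on. Destination URL is illustrative. -->
<a href="https://www.example.com/?utm_term=clicked-bottom"
   style="display:none; font-size:0; line-height:0;"></a>
```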
Additionally, I added anyone who clicked those links to a global list for safekeeping -- this was mainly a sanity check to know whose email clients were click-happy.
Whether right, wrong, or indifferent, it worked for me...
Linked to your... home page? That means the mail scanner needs to do more work and every send puts more load on your webserver (50,000 emails could == 50,000 hits).
The effectiveness of this method aside, you don't need to link to an extant page to test it. The Clicked Email activity is registered for URLs that do not exist in any publicly-accessible way.
Keep in mind this was my solution almost a year ago, with a small db. One thing I did notice is that the link click was tracked in Marketo, but I never saw anyone actually hit the page with that link -- then again, that was Marketo's "viewed page" and not GA. Good point, Mr. Sanford Whiteman. #badidea
Right, even if the hit wasn't logged to GA or Marketo, i.e. not via JS, it's placing load on your webserver (not necessarily fetching secondary assets but fetching the main document). Reminiscent of this: https://blog.teknkl.com/flop-timized-marketo-lps-and-the-case-of-the-350kb-favicon/. In any case, for this particular approach there's no need for a real fetchable URL.
PS. Why not use the link ID instead of UTMs? Same outcome.
I'm currently testing a band-aid for reporting and scoring. I created a smart campaign triggered off an email click OR email delivered, with filters requiring that the click and the delivery happened within the last minute; the flow adds those leads to a static list called "suspected false clicks". I'm going to review the list over the next few weeks and see how accurate what we captured is. I'm also debating requiring a minimum of 2-3 clicks. If I can narrow it down enough, we can use the list to exclude these leads from reporting and scoring.
But how will you be testing the accuracy?
Problem with a lot (though not all) of these approaches is that you'll have false positives + false negatives but will never know it, by definition.
Manually reviewing activity logs to check for red flags (click before delivery, similar behavior from all leads at the same company, clicks but no open, etc.) and making a judgment call. There's not a perfect system, of course, but doing nothing is also unreliable. So my goal is just to get it narrowed down to a point where it's at least more reliable than with no intervention.
Frankly, I haven't heard a solution that I'm thrilled with and am all ears if you've got other ideas.
We have a hidden link at the top of our templates, and if that is clicked we know it is a bot and use a smart campaign to remove them from the flow.
<td width="1" height="1" style="margin: 0 auto; display: none; visibility: hidden; width: 1px; height: 1px;" align="center">
  <a href="https://www.skyboxsecurity.com/?botlink" style="outline: 0; text-decoration: none;"></a>
</td>
This is the same thing David is mentioning, and again you shouldn't be hitting your actual homepage. That could cause a self-DoS.
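A safer drop-in is the same hidden-cell pattern, but with the href pointed at a URL on your own domain that doesn't resolve to a real page (the path below is a made-up example -- any non-existent URL works, since the Clicked Email activity is logged by the tracking redirect regardless of whether the destination exists):

```html
<!-- Hidden bot-trap link. The href is a deliberately bogus path: a scanner
     that follows it still generates a tracked click (which can trigger your
     "suspected bot" smart campaign), but nothing fetches your real homepage,
     so there's no self-DoS risk on a large send. -->
<td width="1" height="1" style="display: none; visibility: hidden; width: 1px; height: 1px;" align="center">
  <a href="https://www.example.com/botlink-do-not-click" style="outline: 0; text-decoration: none;"></a>
</td>
```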
So would Debbie's solution work better if the href were a bogus URL? It would be terrific if we could get a drop-in example that avoids the potential drawbacks. Thanks!
(And just BTW: it's not just spam filters doing this. Increasingly, AI-based email solutions are reading content in order to tag and organize it for recipients. Not pertinent to the solution here, but there might eventually be an avenue to understanding how email content can play nicely with these as well.)