Good idea! I just voted for it.
It's not a perfect solution, but one way around a 'fake' page view being triggered is to use the Forms 2.0 API: embed a hidden form in the page and have the campaign check for the form submission instead of a page view (include the URL of the page as a hidden field in the form so you can easily reuse it). Then, depending on where the page is hosted (either on your own server or a Marketo landing page), you can also add a layer of checks to make sure it's a real browser before allowing that form script to run, e.g. check the User-Agent header, or maybe add a delay of a second on the assumption that a bot's session is very short. Of course, if a bot really wants to mimic a human it's going to be pretty hard to detect, but that should help weed out the most obvious ones.
Note: if the page is accessible publicly as well as from your email, you'll also likely want to add some conditions around loading the form script so that it only runs for known users. Otherwise you'll end up with a whole lot of unidentified users counting towards your allocated users (because as soon as a form is submitted, it counts as an identified user, even if name and email are blank).
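A minimal sketch of the gating idea above. The check itself is a pure function so it can be tested outside a browser; the commented wiring below it uses the real Forms 2.0 loader, but the subscription endpoint, munchkin ID, and form ID are placeholders, and a determined bot can spoof everything checked here:

```javascript
// Cheap bot heuristics: obvious automation strings in the User-Agent,
// plus the navigator.webdriver flag that headless/automated browsers set.
function looksLikeRealBrowser(userAgent, isWebdriver) {
  var botPattern = /bot|crawl|spider|headless/i;
  return !isWebdriver && !botPattern.test(userAgent || "");
}

// In the page itself you might wire it up roughly like this (browser-only,
// placeholder IDs). The ~1s delay is the "short bot session" assumption
// from above:
//
// if (looksLikeRealBrowser(navigator.userAgent, navigator.webdriver)) {
//   setTimeout(function () {
//     MktoForms2.loadForm("//app-XXX.marketo.com", "000-AAA-000", 1234);
//   }, 1000);
// }
```

Keeping the form script out of the page entirely until the checks pass (rather than loading it and hiding it) also means the simplest scanners never see a form to submit.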
of course, if a bot really wants to mimic a human, then it's going to be pretty hard to detect
Right, and the inability to distinguish between the two is key to scanner technology working at all. You can't distinguish based on User-Agent because scanners purposely use real, headless browsers. If you could check based on the User-Agent, then so could the person running a malicious site, and the scanner would be rendered worthless.
If you require form submission before registering the initial hit, then it's even easier than what you've described! Problem is that marketers want to know if someone clicked, even if they don't convert via form. That's the hard part.
Wouldn't you just use request campaign and add the filters in the requested campaign's smart list?
RE: detecting link clicks after clicking through from an email AND visiting a webpage, doable using munchkin API:
if (location.search.indexOf('mkt_tok') != -1) {
  $("a").addClass("tracklink");
  $("a.tracklink").click(function () {
    Munchkin.munchkinFunction('visitWebPage', {
      'url': '/email-visit-click'
    });
  });
}
This adds the class "tracklink" to all <a> tags if mkt_tok is present in the URL search string, then attaches a listener to a.tracklink so that when a link is clicked, it fires a Munchkin visitWebPage event to Marketo. You can then use the Visited Web Page filter for everyone who's visited "/email-visit-click".
People seem to think this is easy to solve using only the tools available in the browser. It's not.
Just replying to Dan's post..
But now that we're faced with a third variable - and since Marketo doesn't support conditional "choice" logic (where we would need a second level choice of "clicked link on specific web page") - we're kinda stuck. Heck, there's not even a choice that allows you to select the link constrained by a web page. I believe Sanford has been toying with a custom activity that would identify when all three of these actions have occurred, but nothing's been published as of yet.
1. Checking the network traffic in chrome, the custom munchkin event does fire and the made up page is also pre-populating in visited webpage dropdowns in Marketo
2. This is just a suggestion to which the goal was to track 1. Email click -> 2. Visited webpage -> 3. Clicked link on webpage. But all in one session starting with the email click.
3. Again, this would let you track 1, 2, and 3 above without worrying about historical click activities. Plus I suppose it'd make things easier because you can just listen for a single visited webpage activity
1. Checking the network traffic in chrome, the custom munchkin event does fire and the made up page is also pre-populating in visited webpage dropdowns in Marketo
You haven't tested this sufficiently. The definition of an async XHR is that it cannot be guaranteed to finish before the page unloads. Sometimes an XHR will complete, sometimes not: network conditions, the speed of the subsequent page's TTFB, and browser heuristics are all involved. That uncertainty is not up for debate; it's been understood for like 15 years now, and it's the reason the alternative Beacon API exists (in some browsers, unfortunately not all). Munchkin doesn't use beacons unless you use my adapter, and even then it will never work in Safari and IE.
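To illustrate the tradeoff being described (this is not how Munchkin itself works, just a sketch of why the Beacon API exists): sendBeacon asks the browser to queue the payload for delivery even if the page unloads immediately, while an XHR fired in a click handler can be cancelled by the navigation. `nav` is injected here so the routing logic can be exercised outside a browser:

```javascript
// Route a tracking hit through sendBeacon when available, otherwise fall
// back to a fire-and-forget XHR (which may never complete if the page
// unloads). Returns a string describing which path was taken.
function sendHit(nav, url, payload) {
  if (nav && typeof nav.sendBeacon === "function") {
    // sendBeacon returns true if the browser accepted the payload for
    // delivery; delivery then survives page unload.
    return nav.sendBeacon(url, payload) ? "beacon" : "beacon-rejected";
  }
  // No Beacon API (e.g. older Safari/IE): the request races the unload.
  return "xhr";
}
```

In a real page you'd call it as `sendHit(navigator, "/email-visit-click", data)`; the "xhr" branch is exactly the unreliable case being argued about above.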
2. This is just a suggestion to which the goal was to track 1. Email click -> 2. Visited webpage -> 3. Clicked link on webpage. But all in one session starting with the email click.
If you change the goal to "There must be a deliberate human interaction after the pageview," that's a new goal, and you don't need a fake Visit Web Page for that. But we can't keep moving the goalposts of what a Click Email represents, or used to represent!
3. Again, this would let you track 1, 2, and 3 above without worrying about historical click activities. Plus I suppose it'd make things easier because you can just listen for a single visited webpage activity
You can already listen for a single Click Link activity constrained by the web page, which will always be logged for the same click event you're adding a second event listener for. I'm not understanding what else you get from a fake Visit Web Page, even if it were made reliable.
Hi Dan,
You can use smart list and "member of smart list" in your choices if you need some advanced logic.
-Greg
Hi Greg - yeah, we use this approach for some of our smaller requirements. It wouldn't be a scalable approach here though, given the number of different types of emails we send and landing pages we're driving traffic to.