1 Reply Latest reply on May 5, 2017 3:47 AM by Grégoire Michel

    Is there an erosion problem caused by no time limit on MT Attribution and the Implicit setting?

    Tony Lanni

      Our Marketo instance is set up with the implicit setting, so that opportunity credit is calculated based on all of the contacts on an account, not just those with a role or that are part of the opportunity. We have to do this because our Sales team does not assign contact roles or add contacts to opportunities consistently, and it is very hard for us to get agreement from Sales to make this standard practice. We also deal with a lot of large enterprise accounts with numerous contacts. I've noticed something very interesting when looking at program performance and MT (success) attribution.


      As I understand it, with the implicit setting, Marketo looks at all successes by contacts on an account and spreads credit for an opportunity evenly across those successes, with no time limit on when each success occurred. The issue I'm having is that this lack of a time limit has an erosive effect on new programs. For example, suppose we started using Marketo in 2015. For any successes that year that turned into opportunities, the credit gets spread only among 2015 programs with successes. 2016 programs share credit with successes from 2015 programs. 2017 programs share credit with successes from 2015 and 2016 programs. 2018 programs will share credit with successes from 2015, 2016, and 2017. So as time goes on and new programs are created, this ever-growing backlog of past successes has a greater and greater dilution impact.
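To make the dilution concrete, here's a minimal sketch of how I understand the implicit setting to work. This is hypothetical illustration code, not Marketo's actual implementation, and the program names and amounts are made up:

```python
# Hypothetical model of implicit multi-touch (MT) attribution: an
# opportunity's amount is split evenly across ALL program successes on
# the account, no matter how long ago each success occurred.

def implicit_mt_credit(opportunity_amount, successes):
    """Split an opportunity's amount evenly across every success."""
    share = opportunity_amount / len(successes)
    return {s: share for s in successes}

# An account with 2 program successes in 2015, 3 in 2016, and 1 in 2017.
successes = ["2015-A", "2015-B", "2016-A", "2016-B", "2016-C", "2017-A"]

credit = implicit_mt_credit(60000, successes)
# The lone 2017 program gets only 1/6 of the credit, because the five
# older successes dilute it -- and the backlog only grows over time.
print(credit["2017-A"])  # → 10000.0
```

A 2015 program in the same situation would have shared credit with at most one other success, so its per-success credit looks far better than the 2017 program's, purely because of when it ran.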


      I would prefer to limit the lookback to a set period of time, so that I can compare 2016, 2017, and 2018 programs on a level playing field (or early 2016 versus late 2016 programs on a level playing field). In other words, all programs would be diluted by 1 year of past successes, for example, not an ever growing list of past successes. But because this erosive effect grows as time moves forward, I don't have such a level playing field to evaluate programs from different time periods.
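As far as I know Marketo doesn't expose a lookback setting like this, which is exactly my question — but to show the logic I'm after, here is a hypothetical sketch (made-up program names and dates, a 365-day window as an example):

```python
from datetime import date, timedelta

def lookback_mt_credit(opportunity_amount, opp_date, successes,
                       lookback_days=365):
    """Split credit evenly, but only across successes that occurred
    within the lookback window before the opportunity date."""
    window_start = opp_date - timedelta(days=lookback_days)
    eligible = [name for name, success_date in successes.items()
                if window_start <= success_date <= opp_date]
    share = opportunity_amount / len(eligible)
    return {name: share for name in eligible}

# Three successes on the account, one from each year.
successes = {
    "2015-A": date(2015, 3, 1),
    "2016-A": date(2016, 9, 15),
    "2017-A": date(2017, 2, 1),
}

credit = lookback_mt_credit(60000, date(2017, 4, 1), successes)
# Only 2016-A and 2017-A fall inside the 1-year window, so each gets
# half; the stale 2015 success no longer dilutes current programs.
print(credit)  # → {'2016-A': 30000.0, '2017-A': 30000.0}
```

With a fixed window like this, every program is diluted by at most one year of past successes, so programs from different years could be compared on the same basis.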


      Am I missing something here or looking at this wrong? Do others agree that this lack of a set (or settable) time frame on which successes count is an issue, and that it doesn't allow accurate program comparisons? Does anyone have any suggestions for how I might be able to force the system to stop counting successes in attribution after a certain period of time?