Hi, our lead scoring model has been in operation for the past 12 months. Based on feedback from sales and reporting, we are planning to refine it to improve the quality of the MQLs passed to sales. My question is: is it best practice to refine the individual scores (e.g. for viewing a piece of content), or to refine the score thresholds (e.g. MQL moves up to 70)?
For context, we are finding we are passing too many non-sales-ready leads through, so we need to either reduce our individual scores or raise our thresholds, so prospects stay in the funnel longer.
Any advice on best practice for lead scoring model optimisation would be appreciated.
Is the problem generic, or are you scoring the wrong things? The key question is what the leads that did convert have in common. If you see consistency in demographics or behaviour there, you have a clue as to which elements of your scoring model should be increased or decreased. Similarly, what did the leads that were rejected have in common? Which elements were you overscoring?
Changing the MQL threshold without looking at the detail is almost certain not to work.
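To make the "what do converted leads have in common" step concrete, here is a minimal sketch that compares how often each action appears among converted vs rejected leads. The action names and data are invented for illustration; the idea is just that a large positive gap suggests the action is underscored, and a large negative gap suggests it is overscored:

```python
# Compare attribute/action prevalence between converted and rejected leads.
# Action names and lead data below are hypothetical.
from collections import Counter

converted = [
    {"visited_pricing", "downloaded_whitepaper"},
    {"visited_pricing", "opened_email"},
    {"visited_pricing"},
]
rejected = [
    {"opened_email"},
    {"downloaded_whitepaper", "opened_email"},
    {"opened_email"},
]

def rates(leads):
    """Fraction of leads in the group that took each action."""
    counts = Counter(action for lead in leads for action in lead)
    return {action: n / len(leads) for action, n in counts.items()}

conv, rej = rates(converted), rates(rejected)
for action in sorted(set(conv) | set(rej)):
    gap = conv.get(action, 0) - rej.get(action, 0)
    print(f"{action:25s} converted={conv.get(action, 0):.0%} "
          f"rejected={rej.get(action, 0):.0%} gap={gap:+.0%}")
```

In this toy data, `visited_pricing` shows up in every converted lead and no rejected lead, while `opened_email` skews the other way, which is exactly the kind of signal that tells you which score weights to move.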
To add to this: if you have solid attribution in place, then you can tell which behaviours are actually impacting conversion, and adjust your scores accordingly.
E.g. take all leads, converted and unconverted. Standardise the actions they took. Run a regression model over it and see what comes out as heavily correlated with conversion. Then score those actions accordingly. Rinse and repeat as time goes on and consumer habits change.
That's not quick or straightforward, but it virtually guarantees your leads will be highly qualified, and you can adjust the thresholds to let through the top 10% of leads / the top 20% of leads / whatever makes sense based on the capacity and needs of your sales team.
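The regression-plus-percentile idea above can be sketched roughly as follows. This is a toy, pure-Python logistic regression over made-up binary action features (real pipelines would use something like scikit-learn or statsmodels on exported CRM data); the feature names, data, and 20% cutoff are all assumptions for illustration:

```python
# Toy logistic regression over standardised lead actions, then a percentile
# MQL cutoff. Feature names and data are hypothetical.
import math

# Each row: ([visited_pricing, downloaded_whitepaper, opened_email], converted?)
leads = [
    ([1, 1, 0], 1), ([1, 0, 1], 1), ([1, 0, 0], 1), ([1, 1, 1], 1),
    ([0, 0, 1], 0), ([0, 1, 1], 0), ([0, 0, 1], 0), ([0, 1, 0], 0),
    ([0, 0, 0], 0), ([1, 0, 1], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the log-loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in leads:
        err = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x))) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(leads)
    b -= lr * gb / len(leads)

# A large positive weight means the action genuinely predicts conversion,
# so it deserves more score points; near-zero or negative means it doesn't.
print("learned weights:", [round(wi, 2) for wi in w])

# Set the MQL threshold so only the top 20% of scores get passed to sales.
scores = sorted(
    (sigmoid(b + sum(wi * xi for wi, xi in zip(w, x))) for x, _ in leads),
    reverse=True,
)
threshold = scores[max(0, len(scores) // 5 - 1)]
print("MQL threshold for top 20%:", round(threshold, 3))
```

The percentile cutoff at the end is the "adjust the targets to sales capacity" step: instead of a fixed 70-point bar, you pick whatever score keeps the handover volume at a level sales can actually work.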
Are you also scoring higher-value, "gold standard" actions, such as visiting your website (or specific sections of it) and filling out forms? You may already be doing this, but maybe you need to redefine the different ways people can score.
In a targeted scoring program we ran, we noticed that people were reaching MQL as a result of what were most likely spam-checker clicks. Over time, we've found that bots (and now MPP) are forcing us to rethink how and what we score, so we can be sure it's actually people engaging with our content, and not a bot.
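One way to keep spam-checker clicks out of the score is a heuristic pre-filter on email click events before any points are awarded. The sketch below is purely illustrative: the specific rules (clicks landing within seconds of delivery, or a recipient "clicking" every link in the email at once) are common scanner tells, but the thresholds and function names are assumptions, not a definitive bot filter:

```python
# Hypothetical heuristic: flag click patterns that look like a mail security
# scanner or prefetcher rather than a human, so they earn no lead score.
from datetime import datetime, timedelta

LINKS_IN_EMAIL = 4  # assumed number of tracked links in the send

def looks_like_bot(clicks, sent_at):
    """clicks: list of (url, clicked_at) for one recipient of one email."""
    if not clicks:
        return False
    # Scanners often fire within seconds of delivery; humans rarely do.
    if min(t for _, t in clicks) - sent_at < timedelta(seconds=5):
        return True
    # Scanners tend to follow every link in the message at once.
    if len({url for url, _ in clicks}) >= LINKS_IN_EMAIL:
        return True
    return False

sent = datetime(2024, 3, 1, 9, 0, 0)
scanner = [("https://example.com/a", sent + timedelta(seconds=2))]
human = [("https://example.com/pricing", sent + timedelta(hours=3))]
print(looks_like_bot(scanner, sent))  # True
print(looks_like_bot(human, sent))    # False
```

For MPP-inflated opens the usual fix is different: stop scoring opens altogether and lean on clicks, form fills, and site visits, which this kind of filter keeps trustworthy.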