Measuring scoring success / effectiveness

Hi, 

 

I was wondering what the best ways are to measure how effective scoring is and how much impact it's generating. 

In theory, scoring should:

  • shorten a lead's lifecycle (i.e. leads are already well informed, so they are ready to move on by the time they are called)
  • make sales teams more efficient (i.e. fewer calls, more phone pickups)
  • improve conversion rates (i.e. if only high-scoring leads are shared with sales, the assumption is they will convert better since they are the most engaged)

But scoring is also a long-term project that has to be monitored over time, which can make attribution more complicated. If you have to let it run for six months, who is to say the conversion-rate increase you're seeing isn't due to some other project interfering? 

 

So I was wondering if anyone has insights or examples into how to: 

  • Decide which KPIs to measure (e.g. number of calls, MQL-to-SQL ratio, average lead-to-qualified time, sales team satisfaction) 
  • Create and follow test and control groups over a longer period, while keeping other projects from interfering

Re: Measuring scoring success / effectiveness

You have the right sort of thinking - all of those things could be impacted by lead scoring, and by measuring them, you can hopefully see the impact. There are two possible roads you could go down:

 

* Validation of the different levels. Work out what score level everyone is at right now, then check on conversion percentages x days down the track (where x is a reasonable amount of time for your sales cycle to complete). If your best leads, as per the model, convert at the highest rate, then your model has some validity.

* A/B test actions taken as a result of the model. If, for example, your best leads go into a particular nurture, then hold out x% as a control group. Once you have enough volume to determine statistical significance, you should see that the leads who have been through your nurture convert at a higher rate. This isn't validating the actual model, but it's validating the insights and actions, which drive business results. This will also get over your "seasonality" question - where you might have increased conversions due to something external to the model. Given the control group will have had no action taken through the model, if the alternative group shows higher conversion, you can prove its worth.
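
The A/B idea in the second bullet can be made concrete with a minimal sketch of the significance check you'd run once the nurture and holdout groups have matured. A two-proportion z-test is a standard way to compare conversion rates; the lead counts below are invented for illustration, and a library like statsmodels offers the same test ready-made.

```python
from math import erf, sqrt

def two_proportion_z(conv_t, n_t, conv_c, n_c):
    """One-sided two-proportion z-test: did the treated (nurtured) group
    convert at a significantly higher rate than the holdout control?"""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)        # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))      # standard normal tail
    return z, p_value

# Invented numbers: 1,000 nurtured leads (120 converted)
# vs a 1,000-lead holdout (90 converted)
z, p = two_proportion_z(120, 1000, 90, 1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With these made-up numbers the 3-point lift clears the usual 5% significance bar; halve both group sizes and the very same lift no longer would, which is the volume caveat in a nutshell.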

 

Maybe focus on the most important business metric to start with, e.g. % of SQLs closed, given that the sales team's time is expensive. 

 

We're B2C, but when we first instituted lead scoring, we came up with the scoring system as best we could, then divided into groups. We then measured conversion to make sure it was increasing as the engagement level increased. Then, as you build it into your analytics stack, you can start to build out longer term A/B tests to show the value.
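
The "conversion should rise with engagement level" check described above takes only a few lines. The score bands and counts here are purely illustrative:

```python
# Hypothetical (converted, total) counts per score band -- numbers invented
bands = {"cold": (15, 500), "warm": (40, 500), "hot": (90, 500)}

rates = {band: conv / total for band, (conv, total) in bands.items()}
ordered = [rates[b] for b in ("cold", "warm", "hot")]

# The model has some validity if conversion strictly rises with score band
monotonic = all(lo < hi for lo, hi in zip(ordered, ordered[1:]))
print(rates, "rises with score:", monotonic)
```

If a middle band out-converts a top band, that's a signal to revisit the weights in the model before building anything on top of it.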


Re: Measuring scoring success / effectiveness

Hi @Phillip_Wild

 

Thanks for your answer. So you suggest two approaches: a direct one (comparing conversion rates of the highest scorers against those of the lowest scorers) and an indirect one (putting a portion of top scorers through some other campaign while keeping a control group that doesn't go through it). I'd say this second approach validates the campaign launched as a result of high scores more than the model itself. 

 

One of my biggest concerns would be making sure we put any needed mechanisms in place correctly before launching. With your first approach, though, it seems you can measure this at any point, without setting anything up in particular in advance, right? You just have to be sure to get enough volume over a time period that makes sense for our sales cycle. 

 

 


Re: Measuring scoring success / effectiveness

Yes, the first approach is validation of the model, while the second is validation of the actions you are taking as a result of the model.

 

I believe you could launch either test at any stage, though. What's to stop you holding out a control group from the new leads that come into Marketo in May, for example? 

 

The main problem would be the one you mentioned: volume. You need enough leads to be sure that your campaign, in amongst probably 15 other touchpoints, actually shows an impact. Unless your campaign is super impactful, you won't see it in the stats until you hit quite a high volume.
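
The volume point can be made concrete with a textbook sample-size estimate for comparing two proportions. Nothing here is Marketo-specific, and the baseline and lift figures are assumptions for illustration:

```python
from math import ceil

def leads_needed_per_group(p_base, p_lift, z_alpha=1.645, z_beta=0.84):
    """Leads per group to detect a conversion lift from p_base to p_lift
    (one-sided alpha = 0.05, power = 0.80; the z values are the usual
    standard-normal quantiles for those settings)."""
    variance = p_base * (1 - p_base) + p_lift * (1 - p_lift)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_lift) ** 2)

# Assumed: 9% baseline conversion, hoping the campaign lifts it to 12%
n = leads_needed_per_group(0.09, 0.12)
print(n, "leads in each of control and treatment")
```

For this assumed 3-point lift you'd need roughly 1,300 leads per group; shrink the expected lift to 1 point and the requirement jumps past 10,000 per group, which is exactly the "quite a high volume" problem.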



Re: Measuring scoring success / effectiveness

Yes, it seems we can just launch and test whenever. 

We'll launch early rather than late to collect enough volume, then later try to find the right thresholds for the model and check whether there is a good correlation between high scores and conversion. 

Later still, we may run a test where, in one of our markets, one office prioritizes by scoring while the other works as a control group. 

We'll see how it goes 🙂 

 

Thank you for your responses!