
I obsess over questions like “What is a webhook?” like philosophers contemplate “What is consciousness?” (Or like others wonder “Are we merely 3-dimensional cartoons watched by N-dimensional beings?”)

 

It’s a different kind of search for truth, but better for your professional career.

 

  • Without the true (that is, RFC-based) answer to “What is a URL?” you’ll write a broken UTM parser.
  • If you use the term “CNAME” incorrectly, IT won’t take your requests seriously (or will botch the result).
  • If you don’t know what an SPF record can be, and when it, shall we say, ceases to be, your  deliverability will take a hit.
  • Without a (technical, not Shakespearean) dive into “What is a name?” you’ll mess up lots of things.

 

Standards, papers, and real-world experiences can reveal much wider definitions than you previously thought. The point of this post, for example, is that webhooks can take many more forms than you first imagine. And that should be exciting.

 

It starts from the start

The first mistake people make about webhooks is confusing (1) the webhook configuration (say, in Marketo Admin); (2) the webhook trigger (an event hook, which in Marketo means a triggered Smart Campaign); and (3) the remote web service that receives the webhook.

 

Far too frequently, (3) is mistakenly called the webhook. But it’s not.

 

You see, “the webhook” is the outbound HTTP request issued by a webhook-supporting app. It’s not the remote server that you connect to in hopes of a useful response – rather, it’s a feature of the platform that listens for the trigger. That is, for our purposes today, it’s a feature of Marketo.

 

Of course, without a webhook-compatible server on the other end, you won’t make successful use of webhooks! But you’re still sending a webhook, even if the other side is completely down.

 

The requirements for a server to be webhook-compatible are:

 

1. It runs over standard HTTP or HTTPS (that’s the “web” part). FTP servers, SMTP servers, servers that speak exotic binary protocols won’t cut it.[1]
2. It does not require multiple connections to perform a requested action. Webhooks are stateless. That means that OAuth-based systems requiring one HTTP connection to get an expiring token, followed by another connection to do a lookup or update, aren’t webhook-compatible.[2]
3. It completes its requested action fast enough to handle invocation rates and within the HTTP timeout of the webhook side. “Fast enough” is a bit squishy because it depends on how many calls can be processed in parallel. Across the universe of webhooks, < 5s (including network time) is generally acceptable; < 1s is ideal. In Marketo specifically, < 30s is mandatory.[3]

 

Also note that the creators of a service need not know it’s webhook-compatible for it to be compatible.[4]
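For contrast with the no-code direction the rest of this post takes, here’s roughly what a conventional webhook-compatible endpoint looks like when you do write code. It’s a minimal Python sketch of my own (the port, path, and field names are hypothetical, and it skips auth and error handling): one stateless HTTP request in, one fast JSON response out.

# Minimal sketch of a webhook-compatible endpoint (hypothetical port/fields).
# One stateless HTTP request in, one fast JSON response out -- nothing more.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EnrichHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Perform the entire "requested action" here, quickly, using only this request.
        body = json.dumps({"Company": payload.get("Company", "[No Company Name]")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), EnrichHandler).serve_forever()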

 

The server doesn’t need to be a full-fledged service
You were supposed to notice something: the requirements above don’t include running a particular web language (PHP, Java, C#, JavaScript) – or indeed any language at all!

 

That's because a webhook is still a webhook, even if it fetches static files – as long as the connection is over HTTP and finishes fast.

 

And a webhook need not fetch a different static file when called for different leads. Take a webhook you use to set default values. This could always get the same static JSON file defaults.json:

 

{
  "FirstName" : "N/A",
  "LastName" : "N/A",
  "Company" : "[No Company Name]",
  "Email" : "user@unknown.invalid"
}

 

This single ’hookable file serves a useful purpose, though it’s far from what people think of when you say “webhook”!

 

Taking it to the next level, you could use a set of static JSON files named by email address:

 

 

And a lead’s personal file could be dynamically fetched by the URL https://static.example.com/{{Lead.Email Address}}.json. For key accounts this would come in handy.

 

(You wouldn’t likely create tons of .json files by hand, in any case, but rather export them from a line-of-business app of some kind. The takeaway is that once static files are deposited on a webserver, no code needs to run to enrich a lead with their corresponding data.)
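As a hedged sketch of that export step (the CSV name, column names, and output folder are all my own invention), something like this would turn a line-of-business export into one static .json file per email address, ready to drop on any webserver:

# Hypothetical sketch: turn a CSV export into one static JSON file per lead,
# named by email address, ready to upload to any static webserver.
import csv, json, pathlib

outdir = pathlib.Path("static-leads")
outdir.mkdir(exist_ok=True)

with open("lob_export.csv", newline="") as f:
    for row in csv.DictReader(f):               # assumes an Email column exists
        path = outdir / (row["Email"] + ".json")
        path.write_text(json.dumps(row), encoding="utf-8")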

 

A static(ish) file that allows lookup

So far, the static file examples send back the  full contents of a static JSON file.

 

But what about doing a lookup within a large set of data to only return matches for the current lead?

 

That’s usually where people turn to a formal SQL or NoSQL database and, naturally, need to write the code to query that db. No longer is the webhook just fetching a prefab file with JSON. Instead, the webhook calls a web service, which is written in JavaScript, or C# or PHP or what-have-you. Then that service code queries the database using SQL or some other dialect, reading query info from the POST payload or from the GET query params.[5]

 

So when someone says, “Just use a webhook” (and I’m plenty guilty of this myself), unless they’re talking about calling an already-published service like Twilio SMS, they imply some kind of code. Perhaps simple code (to a developer’s eyes), but not no code.

 

But for certain types of data, it’s possible (and here is where this post starts to get strange) to create a webhook-compatible lookup service using only static files and standard HTTP headers. No code at all.

 

The key is HTTP Range (a.k.a. byte serving).

 

Homing in on Range:

The HTTP Range feature is amazingly ubiquitous: if you streamed a short video today, you used it hundreds of times!

 

It works in such a simple way: you request a resource (URL) from a server and include a Range: header with a specific range of 1 or more bytes.[6]

 

The server, instead of responding with the entire resource, only sends you the bytes you requested.

 

So imagine you have this 26-byte, dirt-simple ASCII file shortsample.txt:[7]

 

ABCDEFGHIJKLMNOPQRSTUVWXYZ

 

And then you send this HTTP request:

 

GET /shortsample.txt
Range: bytes=0-4

 

The byte position always starts from 0. So this will get the first 5 bytes:

 

ABCDE

 

Presto! If you only wanted to peek at the first few characters in the file, you’ve just saved 80% of the potential network traffic. And now that you have the first 5, you can cache those and fetch only the remaining bytes, concatenating the pieces afterward:

 

GET /shortsample.txt
Range: bytes=5-

 

FGHIJKLMNOPQRSTUVWXYZ
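If you want to watch Range: do its thing outside of Marketo, here’s a quick sketch using Python’s requests library. The URL is hypothetical, but any Range-honoring server (S3, IIS, most CDNs) behaves the same way:

# Sketch: fetch a file in two Range'd pieces and stitch them back together.
# The URL is hypothetical; any server that honors Range will do.
import requests

url = "https://static.example.com/shortsample.txt"

first = requests.get(url, headers={"Range": "bytes=0-4"})
rest  = requests.get(url, headers={"Range": "bytes=5-"})

print(first.status_code, rest.status_code)   # 206 206 (Partial Content)
print(first.text + rest.text)                # ABCDEFGHIJKLMNOPQRSTUVWXYZ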

 

The Range: way of retrieving small segments at a time over standard HTTP is key to modern video streaming (Range retrieval has been standard for 20 years, since RFC 2616, so it predates HTML5 video itself but has come into its own more recently). It’s also how we get pausable, resumable file downloads.

 

So how can Range: create a lookup-able, webhook-compatible service out of a single file? For that, you have to get inside my brain in the wee hours a few weeks ago (I never sleep because I’m thinking of things like this!).

 

Imagine the byte position (an Integer) can be a lookup key, and the character at each position (a one-letter ASCII String) can be the value at that key.

 

An example will — well, hopefully — help.

 

Say leads coming into your instance have a numeric License Code between 0 and 49 (in reality the range could start anywhere, just simplifying for now). The code indicates which kind of products they’re legally allowed to buy. Only people with certain codes are allowed to do business with your company. And it’s a binary/boolean thing: either they’re Allowed or Blocked, no nuances.

 

Create a text file like this that includes all 50 codes, in order from 0, with “A” for Allowed and “B” for Blocked:

 

ABBBAAAABBBABABBBAABBBBBAAAAAABABAAABAABAABAAABABA

 

Now, you can get the single byte at their License Code value to check if they’re allowed or not:

 

GET /licensecodes.txt
Range: bytes=23-23

 

That is, in a Marketo webhook’s Set Custom Headers config, where you can use {{lead.tokens}}:

 

Range: bytes={{Lead.License Code}}-{{Lead.License Code}}

 

 

That will return the one-character string

 

A

 

or

 

B

 

for every lead.
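Outside of Marketo, the same one-byte lookup takes just a few lines. Here’s a sketch of what the webhook is effectively doing with the {{Lead.License Code}} token (the hosting URL is made up):

# Sketch: look up one lead's Allowed/Blocked status by fetching a single byte.
# The URL is hypothetical; the Range header mirrors the Marketo config above.
import requests

def license_status(code: int) -> str:
    resp = requests.get(
        "https://static.example.com/licensecodes.txt",
        headers={"Range": f"bytes={code}-{code}"},
    )
    return {"A": "Allowed", "B": "Blocked"}.get(resp.text, "Unknown")

print(license_status(23))   # prints Allowed or Blocked, depending on the file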

 

Did you have an “A-ha!” moment? I hope so.

 

Step by step

In the above example, the possible License Codes conveniently started at 0 and were all filled in through 49.

 

Let’s think of a somewhat more complex case, where the only codes used by the licensing bureau are 10-20 and (for whatever reason) 30-49.

 

Not a problem. Simply pad the file with 0s wherever you don’t have a possible value (you could use spaces or underscores or whatever, but 0s are easier to read on the blog, methinks):

 

0000000000BABABBBAABB000000000BABAAABAABAABAAABABA 

 

Byte #10 — a.k.a. License Code #10 — is still “B” for “Blocked”. Everything still works!

 

Why bytes and not bits?

If you’re with me so far you might be thinking, Why is he wasting whole bytes on letters, when there are 8 individual bits to fiddle with? I talk about that in the notes.[8]

 

We’re still getting underway (sorry!)

I know you’ve scrolled a lot already, but alas we’re only partway there.

 

Now, let’s walk through something that’s closer to a real-world scenario (I mean, the license code thing isn’t really that artificial, but I already had this other example in mind).

 

Imagine you’re a US-based company and you assign every US ZIP code a certain grade that reflects its suitability for your product (please don’t assume I did anything but randomly generate a letter A-F, I didn’t pay attention to actual geography at all!). It’s a form of scoring, in other words.

 

Now, there are in theory 100,000 5-digit ZIP codes (00000-99999). In reality, only half of them are allocated (some are simply not used yet, some are permanently reserved). Here’s the top of an Excel sheet with the ones in use and their state/territory, in ascending order from the first one allocated (00501) to the last (99950), with lots of gaps of course:

 

 

Yes, I’m quick to admonish people for treating alphanumeric strings (like ZIP codes or, for other examples, credit card numbers or phone numbers) as if they’re truly numbers. So I’m not suggesting under any circumstances that ZIPs should be stored as Integers in a database. But for today’s purpose, a ZIP is usable as a numeric index. So let’s convert column A from Text to Number:

 

 

Then get the Sales team involved and assign grades:

 

 

Remember the lesson from the second License Codes example above. If we’re going to seek a specific byte position (ZIP 00501 = number 501 = byte 501 ), then every byte position must be occupied in some way. (To use technical jargon, you can’t have a “sparse array”, it needs to be “dense”.)

 

So we fill in the gaps so numbers that don’t have a state/territory are still present. Here’s how that looks, zoomed out a bit:

 

 

Then put a dummy grade of 0 for all the unused slots:

 

 

Now, the only thing we care about is Column C. That’ll be our database (if you will) of grades indexed by ZIP.

 

From here, we’re going to head over to a text editor to avoid any chaff that might come in from Excel.

 

Here’s just Column C, with carriage returns and/or line breaks:

 

We’ve got to strip out those CRLFs! Each line break adds 1 or 2 bytes, throwing our byte offsets totally off. Here’s the start of the file without CRLFs:

 

And another view after scrolling right a bit, so you can see the real grades after the hundreds of zeroes that have to kick it off (ZIP 00000/byte 0 through ZIP 00500/byte 500 are unassigned):

 

And finally, a deep look using a hex editor. You can see that the byte at offset 501, for ZIP code 00501, corresponds to the grade “B” (ASCII 0x42).
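If you’d rather skip the Excel-and-text-editor wrangling, a short script can build the same dense, CRLF-free file. Here’s a sketch, assuming a two-column, header-less CSV of ZIP and grade exported from the graded sheet (the file names are my own):

# Sketch: build a dense 100,000-byte grade file from a sparse (zip, grade) CSV.
# Unassigned ZIPs get the dummy grade "0"; no CR/LF anywhere to throw off offsets.
import csv

grades = ["0"] * 100000                        # one byte per possible 5-digit ZIP
with open("zip_grades.csv", newline="") as f:  # rows like: 00501,B (no header)
    for zip_code, grade in csv.reader(f):
        grades[int(zip_code)] = grade

with open("zipgrades.txt", "wb") as out:       # binary mode: exactly 100,000 bytes
    out.write("".join(grades).encode("ascii"))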

 

As far as the webhook-compatible file is concerned, we’re good to go! Now upload that file to a server somewhere (I’m using my Amazon S3 account) and let’s switch back to our beloved Marketo UI.

 

The webhook definition

Fetching the custom-crafted textfile with a lead-specific Range: header is simple. The main screen in Admin:

 

And the Set Custom Header dialog:

 

As you can see, we fetch the single byte at index {{lead.Postal Code}}.

 

Running a lead through the webhook, we see the expected single-byte response F:

 

 

So far, we’ve had success. The Range: is customized for the lead and honored by the remote S3 server, and we have a clean response.

 

Only the response is maybe too clean.

 

We’re not quite there

Here’s the rub.  Though there’s no particular response format that’s a requirement for being webhook-compatible, Marketo can only automatically map response values to lead fields if the response is valid XML or JSON. (It actually doesn’t matter whether the HTTP Content-Type is set, by the way, and Marketo, please don’t change this!)

 

In this case, the response is a single ASCII byte, which is totally valid text/plain of course but isn’t valid XML or JSON. Marketo doesn’t have a built-in way to say “Map the entire response payload back to such-and-such field” so we have to take a more clunky approach (don’t worry, later in this endless post we’ll learn how to fix the clunk).

 

That clunky way is to use the undervalued Webhook is Called trigger. Since we only have 6 possible values, it’s easy enough to manage. You just need 6 Smart Campaigns with symmetrical structure, and the Response constraint matches each single-byte plain-text response:

 


Fixing the clunk

To avoid building multiple campaigns and use a standard webhook Response Mapping, we have to expand on another techno-ontological-philosophical question: “How do you generate a JSON response?”

 

You probably have 2 valid answers in mind:

 

1. You return a static .json file stored on disk (that is, the entire file).
2. You generate the JSON text in server-side code, making sure to set your Content-Type to application/json.

 

Those are common, but what about an uncommon 3rd option: you store multiple valid JSON responses, end-to-end, in a single .txt file and use Range: to choose which block of JSON to return?

 

The file as a whole isn’t valid JSON. But you’re only reading parts, and those parts are valid.

 

For example, take this file:

 

{ "First Name" : "Marcus" }
{ "First Name" : "Cleopatra" }

 

 

That's not JSON taken all together (don’t mistake it for an array of 2 objects, as it’s missing the all-important enclosing [ ] and separating comma, so it would never parse correctly).

 

But the 29 bytes of the first line alone (including the CRLF) would be a single valid JSON object. And the 32 bytes of the second line alone would also be valid JSON.

 

Right now, the lengths of the 2 blocks are irregular. And since we don’t know in advance how long a First Name might be, that won’t yet work. So instead (for the sake of simplicity) let’s say it’s a maximum of 9 single-byte letters long (C-l-e-o-p-a-t-r-a). And we pad each line to exactly 32 bytes using harmless whitespace.

 

Using the ◌ character (in this post, not in the real file) to literalize the spaces and line break characters, and counting off the 32 bytes at the bottom:

 

{◌"First Name"◌:◌"Marcus"◌}◌◌◌◌◌
{◌"First Name"◌:◌"Cleopatra"◌}◌◌
{◌"First Name"◌:◌"Bob"◌}◌◌◌◌◌◌◌◌
00000000011111111112222222222333
12345678901234567890123456789012

 

Now we’ve created a file full of fixed width structured data. It doesn’t have any inherent meaning to the server hosting the file – it’s just like any text file – but to a client (i.e. Marketo) that knows how to seek into it, it’s like a giant, albeit simple, database of JSON objects.

 

We’re almost, sort of, there

With the above file structure, we know the width of an object is always 32. So if we want object number N – counting objects from 0, just like byte positions – we seek to position (N × 32) and read the next 32 bytes.

 

But that li’l bit of multiplication there (N × 32) isn’t something we can do without... calling another more sophisticated webhook! Which would  defeat the purpose of this post. (Which was what again? Ah, to show how with a lot of ingenuity, you can make useful webhook-callable endpoints without writing any code.)

 

So we need to change our fixed block width to – well, I’ll just cut to the chase – a multiple of 10.

 

Why? Because you can denote 10-byte zero-based ranges without doing the multiplication yourself.

 

Huh? Think about “0-9” – the first range of 10 bytes. “10-19” is the next range. “20-29” is next. Each of these strings can be created with simple variable substitution:

 

bytes={{some variable}}0-{{some variable}}9

 

Or with 100-byte blocks:

 

bytes={{some variable}}00-{{some variable}}99

 

100-byte blocks, though, could end up wasting a ton of space. Let’s not do that unless we have to. Instead let’s tighten our internal JSON objects as much as we can (the file doesn’t need to be human-readable, after all). If we use a single-character key v and remove other whitespace, we can get each block down to 8 bytes with an empty string value:

 

{"v":""}

 

Thus leaving us the headroom for 2 glorious bytes of string data! Since the example application here is single-letter grades A through F for every Postal Code, that's just fine.
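Here’s a sketch of how such a file could be generated, reusing the hypothetical zip_grades.csv from the earlier sketch: each record is the 8-byte wrapper plus a one-character grade, space-padded to exactly 10 bytes:

# Sketch: the "Book of JSONs" layout -- one 10-byte JSON object per ZIP code.
# {"v":"B"} is 9 bytes; a trailing space pads every block to exactly 10.
import csv

grades = ["0"] * 100000
with open("zip_grades.csv", newline="") as f:  # rows like: 00501,B (no header)
    for zip_code, grade in csv.reader(f):
        grades[int(zip_code)] = grade

with open("zipgrades_json.txt", "wb") as out:
    for g in grades:
        block = f'{{"v":"{g}"}}'.ljust(10)     # valid JSON + padding = 10 bytes
        out.write(block.encode("ascii"))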

 

And that's how the “Book of JSONs” file is laid out:

 

 

Now we need only set up a Marketo webhook that seeks 10 bytes at a time, offset by the Postal Code:

 

And the 10-byte response is valid JSON, ready for a standard Response Mapping back to our lead field:

 

Also notice how the Call Webhook dutifully records the HTTP 206 (Partial Content) as a success. Which it is!

 

Oh, and one more thing you can do with this is...

 

... basic score arithmetic (well, addition at least)

Y’know how Marketo can’t add 2 numbers on its own?

 

Here’s a primitive addition table created in Excel (via auto fill, I ain't that crazy!):

 

The column numbers are the first addend; the row numbers are the second addend.

 

You must consider the rows to start at Row 0, not Row 1 (something that Excel doesn’t allow, to this programmer’s dismay!). Count the columns from 0 in the same way, rather than as letters, so Excel’s Column D is Column 3.

 

The cell where a row & column meet is the sum of those numbers. For example, Excel cell D7 in zero-based terms = Column 3, Row 6. The value of that cell is 9 - which is the sum of 3 and 6!

 

By applying the same JSON-10-byte-block approach used above for ZIP codes, you can preload this addition table into Marketo as a bunch of individual files. Not to worry, I created the first 1000 columns for you, giving you the ability to sum any 2 numbers between 0 and 999.
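If you’d like to generate those files yourself (or sanity-check mine), here’s a sketch of one plausible layout. To be clear, this is my assumption about how such files could be laid out, not a byte-for-byte description of the zip below: each add-to-N.json holds 1,000 fixed 10-byte blocks, one per second addend, with the sum written as an unquoted JSON number and space-padded to fill the block.

# Sketch of one plausible layout for the add-to-N.json "math" files:
# 1,000 fixed 10-byte blocks per file, block M holding the JSON object {"v":N+M}.
for n in range(1000):                            # first addend: 0..999
    with open(f"add-to-{n}.json", "wb") as out:
        for m in range(1000):                    # second addend: 0..999
            block = f'{{"v":{n + m}}}'.ljust(10) # e.g. {"v":42} plus 2 spaces
            out.write(block.encode("ascii"))

With a layout like that, bytes 300-309 of add-to-12.json would hold {"v":42} plus two trailing spaces – still valid JSON, so a standard Response Mapping can read it.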

 

This is one case where Marketo Sky (which, like most of you, I still use only occasionally) shines over the legacy UI, since you can drag-and-drop multiple files. So download this file:

 

rangehook_mathjson_files.zip

 

Then unzip it and upload the contents to Design Studio into an appropriate folder:

 

 

Now you’re ready to do some simple addition.

 

Maybe you want to add the {{Lead.Demographic Score}} and {{Lead.Firmographic Score}} together into {{Lead.Aggregate Score}}.

 

You’d set up a webhook with the following settings (I’m too exhausted for another round of screenshots, but I trust you to know what to do!):

 

 

  • URL: https://na-sj01.marketo.com/rs/123-ABC-456/images/add-to-{{Lead.Demographic Score}}.json

Where 123-ABC-456 is your Munchkin ID and na-sj01.marketo.com is your direct asset domain (if you’re on instance app-sj01.marketo.com then the asset domain is na-sj01.marketo.com, etc.). Don’t use your Marketo LP domain here, it won’t work.

 

  • Custom Header: Range: bytes={{Lead.Firmographic Score}}0-{{Lead.Firmographic Score}}9

 

  • Response Mapping: map the JSON attribute v to the Aggregate Score field

 

When run, this webhook will do a request like this, if Demographic Score is 12 and Firmographic Score is 30:

 

GET /rs/123-ABC-456/images/add-to-12.json
Range: bytes=300-309

 

Bytes 300-309 of add-to-12.json are, conveniently:

 

Giving us the correct sum, 42.

 

I’ll say it again: it ain’t pretty, but it is predictable and accurate. And it doesn’t require any external services, just a willingness to play on the wild side.

 

It’s not just a file download, it’s an indexed lookup!

It’s easy to confuse a Range request with a primitive file download. But it’s not, and the proof is in the performance.

 

Seeking the 1 billionth byte of a 1 GB file should be no slower than seeking the 1st byte. I’ve run convincing benchmarks with S3 and with IIS. Other HTTPds might not be as efficient, but that’s an engineering problem with those engines.[9]
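If you want to convince yourself, a rough timing sketch like this is enough (the URL is hypothetical; point it at any large static file you control):

# Rough sketch: time a 1-byte Range read near the start vs. deep inside a big file.
import time
import requests

url = "https://static.example.com/big-1gb-file.bin"   # hypothetical large file

def fetch_byte(offset: int) -> float:
    start = time.perf_counter()
    requests.get(url, headers={"Range": f"bytes={offset}-{offset}"})
    return time.perf_counter() - start

print("byte 0:          ", fetch_byte(0))
print("byte 999,999,999:", fetch_byte(999_999_999))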

 

Who is this for?

Hopefully, this post was interesting to anyone curious about HTTP, JSON, bytes and ranges and all that stuff.

 

But, to be clear, I don’t expect this far-out take on webhooks to be put into production by a totally non-technical person. It’s more for someone who may have some coding experience (from a past job or school) but – like most marketers – doesn’t have a corporate-approved place to host server-side code.

 

Even if you’re up to the challenge of writing secure, scalable webhook-compatible code... your employer may be (should be!) rightly uncomfortable about you “just” spinning up a production service somewhere in your personal cloud.

 

So the next best thing, for certain kinds of data, is a zero-code webhook running right out of your Marketo instance. No compliance issues, no security issues, no worries.

 

Or just enjoy the post as a peek into my sleepless mind.


Notes

[1] Counterexample: One of our clients runs an FTP server that accepts uploaded CSVs. A back end process periodically reads rows from the FTP’d files and upserts leads into Marketo. The deeper back end connects to Marketo via HTTP, sure – but the front end FTP server ain’t webhook-compatible.

 

[2] A webhook-compatible service needs to allow authentication + authorization info (if any) to be passed in the same HTTP connection as the requested action. Don’t be distracted by doomed hacks like storing access_tokens in lead fields. Even when they sort-of-sometimes work, they’re adding statefulness. You can’t expect two invocations of the same webhook, even for the same record in your database, to know anything about each other.

 

Note this doesn’t mean the service itself can’t call stateful APIs from the back end: we do this all the time! But the first-hop connection, from the webhook to the service, is stateless. Any additional network hops are hidden from the webhook.

 

[3] The “requested action” means the single GET or POST from the Marketo-like app. But that doesn’t mean all related actions on the other side are completed in that same short period!

 

Take an SMS webhook: its requested action is enqueueing the outgoing SMS message within the provider’s infrastructure, not delivering the message to the recipient’s handset. (Let alone listening for 2-way responses, which is way outside of webhook-land.)

 

Similarly, a service that inserts rows into a remote database need not have finished committing and/or replicating data (making it readable by other apps) by the time it returns 200 OK to Marketo. It might complete the insert a few seconds (or even minutes) afterward. What’s important is that the payload is eventually stored, not that it’s stored in real-time.

 

On the other hand: when the service offers data enrichment, field calculation, or remote lookup tables, the requested action does mean finishing everything before sending the HTTP response (typically JSON or XML). So some enrichment apps, particularly those that try to combine data from multiple back-end services (with each of those next-hop requests possibly requiring multiple connections) can end up being unusable via webhooks: they may have the other requirements down, but not the performance.

 

[4] I mentioned a while ago that Twilio’s Lookup API can be used by a Marketo webhook because it supports Basic Auth (username/password) credentials carried along with a lookup request. So it’s compatible, even though it’s not advertised with the word “webhook”.

 

A basic HTML form’s action URL is typically webhook-compatible as well. By definition, that URL expects x-www-form-urlencoded keys and values in a single GET or POST. So as long as the webhook-enabled app supports Form/URL encoding (Marketo does) then you can post from the server side as easily as from the client. That’s why you can use a webhook to call Marketo’s scriptless forms endpoint /index.php/leadCapture/save to do some cool cross-lead stuff.

 

(Yes, CSRF tokens break this compatibility.)

 

[5] In some cases, a database has a native HTTP/S endpoint so there wouldn’t technically be a different tier for code. SQL Server used to have an XML service inside it, for example, but that’s been removed.

 

Document databases typically have an HTTP API built in. But if they require multi-request OAuth authentication, that endpoint would end up incompatible with webhooks. Similarly, OData services are always web-compatible but not necessarily webhook-compatible.

 

[6] Where supported, you can ask for more than one range, but for simplicity let’s assume it’s only a single contiguous range and a standard (non-multipart) HTTP 206 response.

 

[7] Simple ASCII to avoid confusion about the length of a UTF-8 file with or without the 3-byte BOM.

 

[8] True, there’s no general reason why a webhook response can’t be treated as binary and then chopped down further, 8 individual bits instead of 1 ASCII character (and in turn mapped to 8 Boolean fields).

 

But specifically within Marketo, this isn’t supported. In order for Marketo to treat B as 01000010 and then map 01000010 back to this...

 

Lead.Field 1 = false
Lead.Field 2 = true
Lead.Field 3 = false
Lead.Field 4 = false
Lead.Field 5 = false
Lead.Field 6 = false
Lead.Field 7 = true
Lead.Field 8 = false

 

... you’d have to pass the original B response to a whole other webhook, and that webhook would need to have real code behind it (albeit simple) as opposed to a static file. So it would defeat the purpose of today’s no-code experiment.
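For the curious, that “real code” would be trivial – here’s a sketch of unpacking a one-character response into 8 Booleans, most significant bit first:

# Sketch: unpack a single ASCII character (e.g. "B" = 0x42 = 01000010)
# into 8 Booleans, most significant bit first.
def unpack_bits(ch: str) -> list:
    return [bool(ord(ch) >> (7 - i) & 1) for i in range(8)]

print(unpack_bits("B"))
# [False, True, False, False, False, False, True, False]

Trivial or not, though, it’s still code – which is the point.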

 

[9] Using Range: against dynamically generated resources (not static files) can be sketchy and cost more in server resources than it saves in bandwidth. But that’s a different situation from this post. Imagine you have a service that dynamically creates image files with certain transforms (sepia, bunny face, whatever). It’s likely impossible to know what the 999th through 1000th bytes will contain in advance. So it needs to render the transformed image to disk or memory, then jump to the 999th byte. If the whole image might only be 1K, that’s really wasteful and it would be better if it didn’t accept Range requests at all.

Hello All! 

 

I'm on the Marketo product team at Adobe.  We are excited to take ABM to the next level and need your feedback.  Your input will ensure that we invest in areas that drive the most value for you.  You do not need to be an ABM customer to take the survey.  Please take a few minutes to respond and have your voice heard! 

 

ABM Survey Link: 

https://marketo.qualtrics.com/jfe/form/SV_0VdodK4krOf7dhX 


Look forward to your responses!

Liana 

 

In this edition of Marketo Master Class, we're teaming up with Marketo Champion Chris Wilcox to get into the weeds of Lead Scoring. Our aim was to break down the complexities around Lead Scoring and provide the Marketing Nation Community with actionable insights into best practices. Are you leveraging Lead Scoring in other innovative ways? Let us know in the comments!

 

1. What are the attributes of a successful lead scoring model? What are some factors that have resonated well with your Sales org?

 

I find the most important attributes of a successful scoring model are that it is both practical and scalable. From a practical perspective, your model needs to be designed for your business, your leads, and your pipeline. There is no one-size-fits-all lead scoring model. Of course, many concepts translate from business to business, but your organization needs to understand who the best prospects are for your product or service, and what actions those leads are taking that indicate that they may be primed for a sales conversation. Designing a model around your business will ultimately drive better success as you’ll be funneling the right leads to the right people.

 

Secondly, you can’t design a scoring model that requires every touch or action be monitored and scored by your marketing team. It needs to be designed and implemented in a way that works at any scale, and this can mean getting creative with the way your teams organize around certain engagements like conferences or live events. Making sure there is visibility into those actions in Marketo in as real-time as possible can be a (fun) challenge.

 

I have found that including your sales organization’s leaders in the discussions around what factors feed the leads being passed over to your sales team can help build organic support from within. Many times lead scoring can feel like a black box from a sales perspective, but by bringing them into the conversation and revealing the “man behind the curtain,” so to speak, you can really help them better understand that a lead scoring model’s purpose is to put the right people in front of your sales organization at the right time to drive better success within their sales pipeline. Getting their perspective and input on what helps them do their job better is a great way to drive adoption and alignment between your marketing and sales organizations.

 

2. What are some of the more sophisticated/non-conventional lead scoring strategies you have implemented in the past? 

 

Most lead scoring models are for identifying the best prospects, but an interesting use case is to build a scoring model for servicing existing customers.  To do this, build a scoring model to classify your existing customers using the relevant attributes just like you do for prospects. Contract size, subscription level, all of the quality indicators your firm uses to classify your customers. Another way to do this could be to work with your sales team to have them select target current customers that they want to keep a pulse on as well using a boolean field on the contact record that they can update in the CRM.

 

Bucket them into categories like low, medium, and high value customers (get as granular as you’d like and is practical – I’ve seen upwards of 20+ levels of customer value!). You can execute the classifications a few ways: by using a tiered score value (e.g. 10 for Low, 20 for Medium, 30 for High), creating a custom field to define with these smart lists, or even using a segmentation to maintain the contact’s category. Whichever process makes sense to you and for your instance.

 

In this example, I have a new score field of “Servicing Score” that will change based on the customer’s attributes. These smart campaigns would run periodically (weekly/monthly) to keep the score current.

 

Servicing Category Scoring Program Structure:

 

“High” Value Category Smart List:

 

“High” Value Score Change:

 

Servicing Score Token Values:

 

From there, I like to combine this with a custom field that date stamps a contact when Marketo sees logged sales email or phone calls with an existing customer. This can be tricky depending on how your sales team logs activities and how Marketo can interpret them. You may want to partner with your CRM admins if needed to get this field created and populated accurately.

 

Trigger to Populate “Last Contact Date”

 

Using these two things you can classify your existing customers into groups and overlay which customers have not had a sales contact in the last XX days. Immediately that group of people (or at least the high-value subset) should be of interest to your sales team, which you could communicate via alerts and/or Smart List subscriptions (or SFDC reports if your score values make it into your CRM!)

 

For the scoring piece, I like to combine the Servicing Score with a modified version of Engagement Score (webinar attendance, web visits, email clicks, etc.) to help the sales team identify a good time to reach out to that pool of customers. The reason I use a separate score value is that you will want to apply additional choice options on your change score flow steps for the servicing behavior based on the Last Contact Date which you wouldn’t want to do with your overall behavior score. You can build these right into the flow steps of your existing behavior score rules, and even use the same token values. You just add a choice based on your last contact date cutoff.

 

 

You might also have a window of time (90-180 days) where you watch for an activity to pass the lead over to sales, but then a cutoff where, if the last contact date crosses it, you simply hand the lead off at that time.

 

From here, you have trigger programs watching for Service Behavior Score changes to contacts with a Service Score for whichever groups you want to include and either assign a task or push an alert to their sales rep for follow up.

 

 

 

This seems complicated, but it’s not! Identify your best customers however you can, try to understand how long it’s been since a good sales contact has taken place, and watch for the activity of those customers to alert your sales reps. Also, whenever last contact date updates, make sure you’re resetting your Servicing Behavior Score value to 0!

 

To get started with something like this, you could do something as simple as watching for activity on things like the pricing page of your website or contract terms and conditions pages (if you have them) by your high-value clients, to give your sales team some insight into that activity. You don’t have to start with the most complicated servicing model.

 

3. What results did the above lead scoring models achieve that a standard model could not deliver?

 

Servicing lead scoring models can help improve your customer churn and retention rates, and give your sales team a leg up on taking care of clients that matter to your organization by systematically surfacing important customers that need a sales touch.

 

4. How do you strategically update your lead scoring model without having to reinvent the wheel every time? 

 

This all comes down to what attributes are delivering the best outcomes for the MQLs that are being handed off to sales. Make sure you’re properly populating acquisition programs to understand first touch attribution to identify the best lead sources.  I typically try to take a deeper dive into the best recent Close>Won opportunities to understand what about those opportunities made them such great wins (vertical? company size? industry?) to see if there are potential levers to pull to overweight those types of opportunities in the future, and to underweight those attributes that lead to more Close>Lose opportunities.

 

These changes should be small and incremental unless the current outcomes of the scoring model are extremely poor. We want to continue to push the right people down the funnel, and understanding what works and optimizing our scoring is the easiest way to do it, but we don’t want to constantly change who we’re feeding to sales without proper discussions and analysis. That can quickly lead to misalignment and confusion between marketing and sales.

 

5. How long does a lead scoring model need to be active to determine its success and what metrics do you consider?

 

I think this is entirely dependent upon the length of your organization’s sales cycle, but there are ways around completely succumbing to the (sometimes) lengthy cycles many organizations operate within.

 

For example, if your sales cycle takes 4-6 months, you might want to optimize your scoring to simply get more SALs instead of the best-case scenario of optimizing towards Close>Won opportunities. In most cases, it should take at least a few months to really prove or disprove the scoring model, but there are definitely cases where it should be shorter.

 

When evaluating the validity of your scoring model, a significant amount of analysis should be put into what activities are feeding the positive sales outcomes, which is where having a plan for attribution and properly ensuring acquisition programs are getting populated play a critical role in your ability to properly evaluate a scoring model.

 

6. When should you use a global vs. local program in your lead scoring strategy? 

 

In my experience, I have found using global lead scoring rules saves a ton of time and effort in the long-term from a maintenance perspective. Even if you have multiple scoring models in place in your instance, having those score tokens and trigger programs operating globally saves a ton of time when you want to make changes or adjustments to your scoring. Obviously, this can all be done at the local level, but there is a level of scalability and ease of maintenance of a global program structure that you can’t achieve with local program builds.

 

Typically, I see global scoring programs built off of program status change triggers, or using some interaction as a trigger point (fills out a form, visits key web page, etc.)

 

7. How do you leverage tokens to scale your lead scoring model? 

 

The biggest place where we leverage tokens is in the scoring values themselves. Simply to streamline the management of scoring change values for any given activity or engagement, creating all of those values as tokens puts all of your scoring values in a single place to manage and maintain which can save a ton of time in the long run, especially if you’re running multiple models or tweaking your scoring frequently.  I find it best to keep a single Scoring folder wherever you keep your Operational or Data Management campaigns and keep all of your scoring values in that parent folder.  This way, if you want to use the same score token across multiple scoring model programs, you can do so.

 

In the Servicing Score example above, you can reference the exact same token values for that score with no additional build, you’re just applying a choice to the change score to account for the servicing need.

 

 

 

8. Do you have any innovative plans for future lead scoring models?

 

The biggest innovations with lead scoring are all around predictive analytics and/or next-best-product type models. Many organizations are working with data teams to be able to better predict the next best product or offer for any given individual based on a variety of factors. You might have multiple demographic or quality scores running, one for each product or category your firm offers, and having a model that would identify a contact’s likely best fit could drastically improve the quality of contacts that get handed off to sales as MQLs.

 

Say you have a Badges Earned Custom Object ($BadgesEarned_cList in Velocity) to store a lead’s community achievements. Values are like so:

 

{description=Onboarding, points=200}
{description=Influencer, points=500}
{description=Helpful, points=200}
{description=Evangelist, points=350}

 

And you want to display the lead’s badges in a table with N alternating background colors. For example, with 3 alternating colors (leave aside the garish color scheme, that's not the point!):

 


This is an old-school task that just about any template language can handle.[1] To start, we’ll stick with generic methods; later, we’ll see how Velocity’s Alternator helper class can save 1 or 2 lines of code.

 

As in other languages, alternation means a loop plus a modulo function (or native modulo operator[2]) which in Velocity is MathTool.mod:

 

#set( $rowColors = ["#ff4400", "#ccff00", "#0099cc"] )
#set( $numRowColors = $rowColors.size() )
#if( !$BadgesEarned_cList.isEmpty() )
<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
#foreach( $badge in $BadgesEarned_cList )
#set( $rowColor = $rowColors[$math.mod($foreach.index, $numRowColors)] )
<tr bgcolor="${rowColor}" style="color:#333;">
<td>${badge.description}</td>
<td>${badge.points}</td>
</tr>
#end
</table>
#end

 

So I first set up an ArrayList, $rowColors, with the colors I want to alternate (in order from the top).

 

Then on every loop I take the loop index modulo N, where N is the number of items in $rowColors. (The list happens to have 3 items now, but the code dynamically adjusts if you add/remove colors.)

 

  • The first time through the loop, the loop index is 0.  0 modulo 3 is 0, so that means I use index 0 of $rowColors ($rowColors[0]) in turn.
  • Next time, the loop index is 1. 1 modulo 3 is 1. So $rowColors[1].
  • Next loop index is 2. 2 modulo 3 is 2: $rowColors[2].
  • Now the fun begins. Next time through the loop, the loop index is 3. 3 modulo 3 is 0. So we use $rowColors[0] again.
  • And that’s how alternating colors are done!

 

The HTML output is like so:

 

<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
<tr bgcolor="#ff4400" style="color:#333;">
<td>Onboarding</td>
<td>200</td>
</tr>
<tr bgcolor="#ccff00" style="color:#333;">
<td>Influencer</td>
<td>500</td>
</tr>
<tr bgcolor="#0099cc" style="color:#333;">
<td>Helpful</td>
<td>200</td>
</tr>
<tr bgcolor="#ff4400" style="color:#333;">
<td>Evangelist</td>
<td>350</td>
</tr>
</table>

 

Simplifying a bit with AlternatorTool

You’ve seen above that without any “alternator-aware” code you can get exactly the output you want.

 

Velocity does offer a cool tool that abstracts away the modulo stuff. But as you can see in the Alternator source, it uses exactly the same method, just as compiled Java:

 

 

An Alternator might be infinitesimally faster because it’s compiled, but you’d never notice this in reality. The reason to use Alternators is to save lines of code, and every line does count in a language as verbose as VTL. Here’s how to get the same output using an Alternator:

 

#set( $rowColors = $alternator.manual(["#ff4400", "#ccff00", "#0099cc"]) )
#if( !$BadgesEarned_cList.isEmpty() )
<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
#foreach( $badge in $BadgesEarned_cList )
#set( $rowColor = $rowColors.getNext() )
<tr bgcolor="${rowColor}" style="color:#333;">
<td>${badge.description}</td>
<td>${badge.points}</td>
</tr>
#end
</table>
#end

 

16 lines instead of 17: yay! And a little easier to read, maybe.

 

Create an Alternator object by passing a List to $alternator.manual. Velocity then handles the modulo-based loop internally, whenever you call getNext().

 

(If you’re confused about the difference between auto and manual, I don’t blame you, but trust me that manual + getNext() is what you always want, especially because of the more advanced application we’re going to do next.)

 

Alternating between complex objects

Alternating between Strings (single hex colors like "#ff4400") is the simplest task.

 

But let’s say you want to vary the background and foreground (text) colors for optimal contrast:

 


Now, you’ve got a set of 3 “color schemes” and each scheme has 2 characteristics (background and foreground).  You should already be thinking: an array of objects!

 

And that’s exactly what I do here, passing an [] of {}s – an ArrayList of LinkedHashMaps, technically – to AlternatorTool:

 

#set( $rowColorSchemes = $alternator.manual([
{
"bg" : "#ff4400",
"fg" : "#fee"
},
{
"bg" : "#ccff00",
"fg" : "#333"
},
{
"bg" : "#0099cc",
"fg" : "#ccff00"
}
]) )
#if( !$BadgesEarned_cList.isEmpty() )
<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
#foreach( $badge in $BadgesEarned_cList )
#set( $rowColorScheme = $rowColorSchemes.getNext() )
<tr bgcolor="${rowColorScheme.bg}" style="color:${rowColorScheme.fg};">
<td>${badge.description}</td>
<td>${badge.points}</td>
</tr>
#end
</table>
#end

 

Each HashMap has two keys, bg and fg. Notice I only call getNext() once per iteration to advance to the next object in the List.

 

The generated HTML:

 

<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
<tr bgcolor="#ff4400" style="color:#fee;">
<td>Onboarding</td>
<td>200</td>
</tr>
<tr bgcolor="#ccff00" style="color:#333;">
<td>Influencer</td>
<td>500</td>
</tr>
<tr bgcolor="#0099cc" style="color:#ccff00;">
<td>Helpful</td>
<td>200</td>
</tr>
<tr bgcolor="#ff4400" style="color:#fee;">
<td>Evangelist</td>
<td>350</td>
</tr>
</table>

 

 

That’s it for Alternators for today! But I have another post ready to go on how to combine Iterators and Alternators for some advanced fun. Stay tuned.

 

 

 

Notes

[1] Many template systems have macros for N = 2, like isOdd and isEven, built-in. Few offer anything as flexible as AlternatorTool.

 

[2] Indeed, Java operators like % are semi-supported in Velocity as well. But the VTL parser is much stricter than Java’s, leading to hard-to-debug problems. I always use the MathTool methods instead.

Many thanks to the 407 of you who took the time to complete our “Want To Help Us Make The Marketing Nation Community Even Better?” survey earlier this year! We are also incredibly grateful for those who talked to us 1-on-1 over the last couple of months.  

 

The feedback you provided was invaluable. It gave us meaningful insights into what you really want and need from your community, validated (and invalidated!) assumptions we had, and brought us interesting new ideas.

 

What were our key takeaways from your feedback? We learned that 1) more personalized content, 2) expert peer content, and 3) a simplified user interface should be our highest priorities for the next iterations of our Community. We also heard loud and clear that you want better search and that we have work to do around archiving out-of-date content.

 

We also learned some interesting things about you, our amazing Community members:

  1. Most of you have been using Community for 3-5 years
  2. You spend most of your time on Community learning from peers and experts
  3. 55% of you use Community 2-3 times a week or more
  4. 44% of you are most interested in getting answers to specific questions

 

So, what’s next? For starters, we have kicked off a UX design project, are engaging with a federated search vendor, and Jonathan Chen, your Community Manager, is putting together a plan for cleaning up Community content. Look for more updates from Jon as we get closer!

 

If you have additional feedback or comments, please feel free to reach out to me at dulsky@adobe.com or Jon at jonchen@adobe.com. I’m very excited to be partnering with you on the next phase for Community.
