
Velocity does something magical when you just output a Lead field (without any other code at all).  The magic: it truly outputs the value as stored in Marketo. A {{lead.token}} doesn’t do that by default.

 

Community user AB wondered why a field containing an HTML table was revealing raw HTML (as text) in emails, as opposed to rendering the table.

 

That is, if a field looks like this in the UI:

 

 

Here’s what you see if you include {{lead.The Field With HTML}} in an email:

 

 

While you might have expected to see:

 

 

The reason you see the HTML-as-text is simple: Marketo HTML-encodes token values by default.

 

There’s nothing wrong, and a lot right, with this being the default behavior. It’s the same way that browsers deal with HTML-like text that’s not specifically inserted as HTML. To see what I mean, open a browser tab, go to  the Dev Tools Console, and run

 

document.body.insertAdjacentText("afterBegin","<table><tr><td>Stuff</td></tr></table>")

 

and you’ll see the code for a table, not a table!

 

In AB’s case the encoded value is:

 

&lt;table&gt;&lt;tr&gt;&lt;td&gt;Access Code&lt;/td&gt;&lt;td&gt;Remaining Uses&lt;/td&gt;&lt;td&gt;Start Date&lt;/td&gt;&lt;td&gt;End Date&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;hh7zh-lgyal&lt;/td&gt;&lt;td&gt;2.0&lt;/td&gt;&lt;td&gt;2018-12-31 08:00:00&lt;/td&gt;&lt;td&gt;2019-12-30 08:00:00&lt;/td&gt;&lt;td&gt;&lt;/table&gt;

 

Basic stuff: the < and > are encoded so the web browser/mail client won’t look for any deeper meaning in the text.

 

The old answer

The ready answer you might see for this is “Turn off HTML Encode Tokens in Field Management.”

 

I’ve given this answer myself. It works as a just-in-time fix. But I don’t feel it’s the right answer anymore, for multiple reasons:

 

  • you (the email author) may not have permission to make this change
  • if a field is used for multiple purposes, it may not be globally correct to leave the value unencoded
  • encoding is a secure default

 

The new answer

Using Velocity gives you local (per-email, even per-lead) control over whether the value is encoded.

 

To output a field without encoding, just create a {{my.token}} and drag that field to the canvas:
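Dragging the field inserts a bare reference into the script, and that’s all you need. A minimal sketch, assuming the field’s Velocity name is TheFieldWithHTML as in the examples below:

${lead.TheFieldWithHTML}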

 

If you decide you do want to encode, Velocity has a method for that, EscapeTool.html:

 

${esc.html( $lead.TheFieldWithHTML )}

 

Now you’re in control of whether the output differs from the stored value.

 

Another related case

If you’re in the unlikely-but-not-unheard-of situation where an integration is plopping HTML into a field where you just want text, then it’s not just encoded vs. unencoded. You want the HTML tags outta there completely.

 

For that you use DisplayTool.stripTags:

 

${display.stripTags( $lead.TheFieldWithHTML )}

 

If you want to keep (whitelist) specific tags, pass all those tags as the 2nd+ arguments. With this VTL...

 

${display.stripTags( $lead.TheFieldWithHTML, "td", "tr" )}

 

... this original value...

 

<table>
<tr><td>Access Code</td><td>Remaining Uses</td><td>Start Date</td><td>End Date</td></tr>
<tr><td>hh7zh-lgyal</td><td>2.0</td><td>2018-12-31 08:00:00</td><td>2019-12-30 08:00:00</td></tr>
</table>

 

... gets output like so:

 

<tr><td>Access Code</td><td>Remaining Uses</td><td>Start Date</td><td>End Date</td></tr>
<tr><td>hh7zh-lgyal</td><td>2.0</td><td>2018-12-31 08:00:00</td><td>2019-12-30 08:00:00</td></tr>

 

Could be handy if you have a system that’s wrapping HTML in extra containers, disrupting your layout.

 

Note stripTags acts like textContent in a browser: all the matched opening and closing tags are removed, they’re not replaced by anything. So you might get unpretty output: <div>Sandy</div>Whiteman becomes SandyWhiteman (note the lack of space). Whether this works for your case depends on what’s coming in.

I obsess over questions like “What is a webhook?” like philosophers contemplate “What is consciousness?” (Or like others wonder “Are we merely 3-dimensional cartoons watched by N-dimensional beings?”)

 

It’s a different kind of search for truth, but better for your professional career.

 

  • Without the true (that is, RFC-based) answer to “What is a URL?” you’ll write a broken UTM parser.
  • If you use the term “CNAME” incorrectly, IT won’t take your requests seriously (or will botch the result).
  • If you don’t know what an SPF record can be, and when it, shall we say, ceases to be, your  deliverability will take a hit.
  • Without a (technical, not Shakespearean) dive into “What is a name?” you’ll mess up lots of things.

 

Standards, papers, and real-world experiences can reveal much wider definitions than you previously thought. The point of this post, for example, is that webhooks can take many more forms than you first imagine. And that should be exciting.

 

It starts from the start

The first mistake people make about webhooks is confusing (1) the webhook configuration (say, in Marketo Admin); (2) the webhook trigger (an event hook, which in Marketo means a triggered Smart Campaign); and (3) the remote web service that receives the webhook.

 

Far too frequently, (3) is mistakenly called the webhook. But it’s not.

 

You see, “the webhook” is the outbound HTTP request issued by a webhook-supporting app. It’s not the remote server that you connect to in hopes of a useful response – rather, it’s a feature of the platform that listens for the trigger. That is, for our purposes today, it’s a feature of Marketo.

 

Of course, without a webhook-compatible server on the other end, you won’t make successful use of webhooks! But you’re still sending a webhook, even if the other side is completely down.

 

The requirements for a server to be webhook-compatible are:

 

1. It runs over standard HTTP or HTTPS (that’s the “web” part). FTP servers, SMTP servers, servers that speak exotic binary protocols won’t cut it.[1]
2. It does not require multiple connections to perform a requested action. Webhooks are stateless. That means that OAuth-based systems requiring one HTTP connection to get an expiring token, followed by another connection to do a lookup or update, aren’t webhook-compatible.[2]
3. It completes its requested action fast enough to handle invocation rates and within the HTTP timeout of the webhook side. “Fast enough” is a bit squishy because it depends on how many calls can be processed in parallel. Across the universe of webhooks, < 5s (including network time) is generally acceptable; < 1s is ideal. In Marketo specifically, < 30s is mandatory.[3]

 

Also note that the creators of a service need not know it’s webhook-compatible for it to be compatible.[4]

 

The server doesn’t need to be a full-fledged service
You were supposed to notice something: the requirements above don’t include running a particular web language (PHP, Java, C#, JavaScript) nor any language at all!

 

That's because a webhook is still a webhook, even if it fetches static files – as long as the connection is over HTTP and finishes fast.

 

And a webhook need not fetch a different static file when called for different leads. Take a webhook you use to set default values. This could always get the same static JSON file defaults.json:

 

{
"FirstName" : "N/A",
"LastName" : "N/A",
"Company" : "[No Company Name]",
"Email" : "user@unknown.invalid"
}

 

This single ’hookable file serves a useful purpose, though it’s far from what people think of when you say “webhook”!

 

Taking it to the next level, you could use a set of static JSON files named by email address:

 

 

And a lead’s personal file could be dynamically fetched by the URL https://static.example.com/{{Lead.Email Address}}.json. For key accounts this would come in handy.

 

(You wouldn’t likely create tons of .json files by hand, in any case, but rather export them from a line-of-business app of some kind. The takeaway is that once static files are deposited on a webserver, no code needs to run to enrich a lead with their corresponding data.)
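As a sketch, a per-lead file – say https://static.example.com/jane.doe@example.com.json, with both the address and the field names being purely hypothetical – might hold enrichment values like:

{
"Account Tier" : "Platinum",
"Account Owner" : "Pat Smith",
"Renewal Date" : "2020-06-30"
}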

 

A static(ish) file that allows lookup

So far, the static file examples send back the  full contents of a static JSON file.

 

But what about doing a lookup within a large set of data to only return matches for the current lead?

 

That’s usually where people turn to a formal SQL or NoSQL database and, naturally, need to write the code to query that db. No longer is the webhook just fetching a prefab file with JSON. Instead, the webhook calls a web service, which is written in JavaScript, or C# or PHP or what-have-you. Then that service code queries the database using SQL or some other dialect, reading query info from the POST payload or from the GET query params.[5]

 

So when someone says, “Just use a webhook” (and I’m plenty guilty of this myself), unless they’re talking about calling an already-published service like Twilio SMS, they imply some kind of code. Perhaps simple code (to a developer’s eyes), but not no code.

 

But for certain types of data, it’s possible (and here is where this post starts to get strange) to create a webhook-compatible lookup service using only static files and standard HTTP headers. No code at all.

 

The key is HTTP Range (a.k.a. byte serving).

 

Homing in on Range:

The HTTP Range feature is amazingly ubiquitous: if you streamed a short video today, you used it hundreds of times!

 

It works in such a simple way: you request a resource (URL) from a server and include a Range: header with a specific range of 1 or more bytes.[6]

 

The server, instead of responding with the entire resource, only sends you the bytes you requested.

 

So imagine you have this 26-byte, dirt-simple ASCII file shortsample.txt:[7]

 

ABCDEFGHIJKLMNOPQRSTUVWXYZ

 

And then you send this HTTP request:

 

GET /shortsample.txt
Range: bytes=0-4

 

The byte position always starts from 0. So this will get these first 5 bytes:

 

ABCDE
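On the wire, what comes back is an HTTP 206 Partial Content response; the Content-Range header (straight from the HTTP spec) tells you which slice of the 26-byte file you received:

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-4/26
Content-Length: 5

ABCDE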

 

Presto! If you only wanted to peek at the first few characters in the file, you’ve just saved 80% of the potential network traffic. And now that you have the first 5, you can cache those and only get the remaining bytes, concatenating the pieces afterward:

 

GET /shortsample.txt
Range: bytes=5-

 

FGHIJKLMNOPQRSTUVWXYZ

 

The Range: way of retrieving small segments at a time over standard HTTP is key to modern video streaming (Range retrieval has been standard for 20 years, since RFC 2616, so it predates HTML5 video itself but has come into its own more recently). It’s also how we get pausable/resumable file downloads.

 

So how can Range: create a lookup-able, webhook-compatible service out of a single file? For that, you have to get inside my brain in the wee hours a few weeks ago (I never sleep because I’m thinking of things like this!).

 

Imagine the byte position (an Integer) can be a lookup key, and the character at each position (a one-letter ASCII String) can be the value at that key.

 

An example will — well, hopefully — help.

 

Say leads coming into your instance have a numeric License Code between 0 and 49 (in reality the range could start anywhere, just simplifying for now). The code indicates which kind of products they’re legally allowed to buy. Only people with certain codes are allowed to do business with your company. And it’s a binary/boolean thing: either they’re Allowed or Blocked, no nuances.

 

Create a text file like this that includes all 50 codes, in order from 0, with “A” for Allowed and “B” for Blocked:

 

ABBBAAAABBBABABBBAABBBBBAAAAAABABAAABAABAABAAABABA

 

Now, you can get the single byte at their License Code value to check if they’re allowed or not:

 

GET /licensecodes.txt
Range: bytes=23-23

 

That is, in a Marketo webhook’s Set Custom Headers config, where you can use {{lead.tokens}}:

 

Range: bytes={{Lead.License Code}}-{{Lead.License Code}}

 

 

That will return the one-character string

 

A

 

or

 

B

 

for every lead.

 

Did you have an “A-ha!” moment? I hope so.

 

Step by step

In the above example, the possible License Codes conveniently started at 0 and were all filled in through 49.

 

Let’s think of a somewhat more complex case, where the only codes used by the licensing bureau are 10-20 and (for whatever reason) 30-49.

 

Not a problem. Simply pad the file with 0s wherever you don’t have a possible value (you could use spaces or underscores or whatever, but 0s are easier to read on the blog, methinks):

 

0000000000BABABBBAABB000000000BABAAABAABAABAAABABA 

 

Byte #10 — a.k.a. License Code #10 — is still “B” for “Blocked”. Everything still works!

 

Why bytes and not bits?

If you’re with me so far you might be thinking, Why is he wasting whole bytes on letters, when there are 8 individual bits to fiddle with? I talk about that in the notes.[8]

 

We’re still getting underway (sorry!)

I know you’ve scrolled a lot already, but alas we’re only partway there.

 

Now, let’s walk through something that’s closer to a real-world scenario (I mean, the license code thing isn’t really that artificial, but I already had this other example in mind).

 

Imagine you’re a US-based company and you assign every US ZIP code a certain grade that reflects its suitability for your product (please don’t assume I did anything but randomly generate a letter A-F, I didn’t pay attention to actual geography at all!). It’s a form of scoring, in other words.

 

Now, there are in theory 100,000 5-digit ZIP codes (00000-99999). In reality, only half of them are allocated (some are simply not used yet, some are permanently reserved). Here’s the top of an Excel sheet with the ones in use and their state/territory, in ascending order from the first one allocated (00501) to the last (99950), with lots of gaps of course:

 

 

Yes, I’m quick to admonish people for treating alphanumeric strings (like ZIP codes or, for other examples, credit card numbers or phone numbers) as if they’re truly numbers. So I’m not suggesting under any circumstances that ZIPs should be stored as Integers in a database. But for today’s purpose, a ZIP is usable as a numeric index. So let’s convert column A from Text to Number:

 

 

Then get the Sales team involved and assign grades:

 

 

Remember the lesson from the second License Codes example above. If we’re going to seek a specific byte position (ZIP 00501 = number 501 = byte 501 ), then every byte position must be occupied in some way. (To use technical jargon, you can’t have a “sparse array”, it needs to be “dense”.)

 

So we fill in the gaps so numbers that don’t have a state/territory are still present. Here’s how that looks, zoomed out a bit:

 

 

Then put a dummy grade of 0 for all the unused slots:

 

 

Now, the only thing we care about is Column C. That’ll be our database (if you will) of grades indexed by ZIP.

 

From here, we’re going to head over to a text editor to avoid any chaff that might come in from Excel.

 

Here’s just Column C, with carriage returns and/or line breaks:

 

We’ve got to strip out those CRLFs! Each one adds 1 or 2 bytes, throwing every byte position off. Here’s the start of the file without CRLFs:

 

And another view after scrolling right a bit, so you can see the real grades after the hundreds of zeroes that have to kick it off (ZIP 00000/byte 0 through ZIP 00500/byte 500 are unassigned):

 

And finally, a deep look using a hex editor. You can see that the byte at position 501, for ZIP code 00501, corresponds to the grade “B” (ASCII 0x42).

 

As far as the webhook-compatible file is concerned, we’re good to go! Now upload that file to a server somewhere (I’m using my Amazon S3 account) and let’s switch back to our beloved Marketo UI.

 

The webhook definition

Fetching the custom-crafted textfile with a lead-specific Range: header is simple. The main screen in Admin:

 

And the Set Custom Header dialog:

 

As you can see, we fetch the single byte at index {{lead.Postal Code}}.
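That is, the custom header value is built just like the License Code example:

Range: bytes={{lead.Postal Code}}-{{lead.Postal Code}}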

 

Running a lead through the webhook, we see the expected single-byte response F:

 

 

So far, we’ve had success. The Range: is customized for the lead and honored by the remote S3 server, and we have a clean response.

 

Only the response is maybe too clean.

 

We’re not quite there

Here’s the rub.  Though there’s no particular response format that’s a requirement for being webhook-compatible, Marketo can only automatically map response values to lead fields if the response is valid XML or JSON. (It actually doesn’t matter whether the HTTP Content-Type is set, by the way, and Marketo, please don’t change this!)

 

In this case, the response is a single ASCII byte, which is totally valid text/plain of course but isn’t valid XML or JSON. Marketo doesn’t have a built-in way to say “Map the entire response payload back to such-and-such field” so we have to take a more clunky approach (don’t worry, later in this endless post we’ll learn how to fix the clunk).

 

That clunky way is to use the undervalued Webhook is Called trigger. Since we only have 6 possible values, it’s easy enough to manage. You just need 6 Smart Campaigns with symmetrical structure, and the Response constraint matches each single-byte plain-text response:

 


Fixing the clunk

To avoid building multiple campaigns and instead use a standard webhook Response Mapping, we have to expand on another techno-ontological-philosophical question: “How do you generate a JSON response?”

 

You probably have 2 valid answers in mind:

 

1. You return a static .json file stored on disk (that is, the entire file).
2. You generate the JSON text in server-side code, making sure to set your Content-Type to application/json.

 

Those are common, but what about an uncommon 3rd option: you store multiple valid JSON responses, end-to-end, in a single .txt file and use Range: to choose which block of JSON to return?

 

The file as a whole isn’t valid JSON. But you’re only reading parts, and those parts are valid.

 

For example, take this file:

 

{ "First Name" : "Marcus" }
{ "First Name" : "Cleopatra" }

 

 

That's not JSON taken all together (don’t mistake it for an array of 2 objects; it’s missing the all-important [ ] and commas, so it would never parse correctly).

 

But the 29 bytes of the first line alone (including the CRLF) would be a single valid JSON object. And the 32 bytes of the second line alone would also be valid JSON.

 

Right now, the lengths of the 2 blocks are irregular. And since we don’t know in advance how long a First Name might be, that won’t yet work. So instead (for the sake of simplicity) let’s say it’s a maximum of 9 single-byte letters long (C-l-e-o-p-a-t-r-a). And we pad each line to exactly 32 bytes using harmless whitespace.

 

Using the ◌ character (in this post, not in the real file) to literalize the spaces and line break characters, and counting off the 32 bytes at the bottom:

 

{◌"First Name"◌:◌"Marcus"◌}◌◌◌◌◌
{◌"First Name"◌:◌"Cleopatra"◌}◌◌
{◌"First Name"◌:◌"Bob"◌}◌◌◌◌◌◌◌◌
00000000011111111112222222222333
12345678901234567890123456789012

 

Now we’ve created a file full of fixed-width structured data. It doesn’t have any inherent meaning to the server hosting the file – it’s just like any text file – but to a client (i.e. Marketo) that knows how to seek into it, it’s like a giant, albeit simple, database of JSON objects.

 

We’re almost, sort of, there

With the above file structure, we know the width of an object is always 32. So if we want the Nth object – counting objects from 0, just as byte positions start from 0 – we seek to position N × 32 and read the next 32 bytes.

 

But that li’l bit of multiplication there (N × 32) isn’t something we can do without... calling another more sophisticated webhook! Which would  defeat the purpose of this post. (Which was what again? Ah, to show how with a lot of ingenuity, you can make useful webhook-callable endpoints without writing any code.)

 

So we need to change our fixed block width to, well I’ll just cut to the chase: a multiple of 10.

 

Why? Because you can denote 10-byte zero-based ranges without doing the multiplication yourself.

 

Huh? Think about “0-9” – the first range of 10 bytes. “10-19” is the next range. “20-29” is next. Each of these strings can be created with simple variable substitution:

 

bytes={{some variable}}0-{{some variable}}9
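For example, if the variable’s value is 501, the header value expands to:

bytes=5010-5019

That’s the 10-byte block starting at byte 5010 = 501 × 10, with the multiplication done for free by string concatenation.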

 

Or with 100-byte blocks:

 

bytes={{some variable}}00-{{some variable}}99

 

100-byte blocks, though, could end up wasting a ton of space. Let’s not do that unless we have to. Instead let’s tighten our internal JSON objects as much as we can (the file doesn’t need to be human-readable, after all). If we use a single-character key v and remove other whitespace, we can get each block down to 8 bytes with an empty string value:

 

{"v":""}

 

Thus leaving us the headroom for 2 glorious bytes of string data! Since the example application here is single-letter grades A through F for every Postal Code, that's just fine.

 

And that's how the “Book of JSONs” file is laid out:
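Here’s a sketch of the layout, assuming each 9-byte object is padded to 10 bytes with a trailing space. Unassigned ZIPs keep the dummy grade 0, and the block for ZIP 00501 (grade “B”) starts at byte 5010:

{"v":"0"} {"v":"0"} {"v":"0"} ... {"v":"B"} ...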

 

 

Now we need only set up a Marketo webhook that seeks 10 bytes at a time, offset by the Postal Code:
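That is, the custom header becomes:

Range: bytes={{lead.Postal Code}}0-{{lead.Postal Code}}9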

 

And the 10-byte response is valid JSON, ready for a standard Response Mapping back to our lead field:
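For ZIP 00501, for example, the 10 bytes that come back are

{"v":"B"}

(plus the padding byte, per the layout sketched above), and the Response Mapping just maps the response attribute v to whichever lead field holds the grade.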

 

Also notice how the Call Webhook dutifully records the HTTP 206 (Partial Content) as a success. Which it is!

 

Oh, and one more thing you can do with this is...

 

... basic score arithmetic (well, addition at least)

Y’know how Marketo can’t add 2 numbers on its own?

 

Here’s a primitive addition table created in Excel (via auto fill, I ain't that crazy!):

 

The column numbers are the first addend; the row numbers are the second addend.

 

You must consider the rows to start at Row 0, not Row 1 (something that Excel doesn’t allow, to this programmer’s dismay!). Count the columns from 0 in the same way, rather than as letters, so Excel’s Column D is Column 3.

 

The cell where a row & column meet is the sum of those numbers. For example, Excel cell D7 in zero-based terms = Column 3, Row 6. The value of that cell is 9 - which is the sum of 3 and 6!

 

By applying the same JSON-10-byte-block approach used above for ZIP codes, you can preload this addition table into Marketo as a bunch of individual files. Not to worry, I created the first 1000 columns for you, giving you the ability to sum any 2 numbers between 0 and 999.

 

This is one case where Marketo Sky (which, like most of you, I still use only occasionally) shines over the legacy UI, since you can drag-and-drop multiple files. So download this file:

 

rangehook_mathjson_files.zip

 

Then unzip it and upload the contents into an appropriate folder in Design Studio:

 

 

Now you’re ready to do some simple addition.

 

Maybe you want to add the {{Lead.Demographic Score}} and {{Lead.Firmographic Score}} together into {{Lead.Aggregate Score}}.

 

You’d set up a webhook with the following settings (I’m too exhausted for another round of screenshots, but I trust you to know what to do!):
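  • URL: https://na-sj01.marketo.com/rs/123-ABC-456/images/add-to-{{Lead.Demographic Score}}.json (reconstructed from the sample request further down)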

 

 

Where 123-ABC-456 is your Munchkin ID and na-sj01 is your direct asset URL (if you’re on instance app-sj01.marketo.com then the URL is na-sj01.marketo.com, etc.). Don’t use your Marketo LP domain here, it won’t work.

 

  • Custom Header: Range: bytes={{Lead.Firmographic Score}}0-{{Lead.Firmographic Score}}9

 

  • Response Mapping: attribute v mapped to the Aggregate Score field

 

When run, this webhook will do a request like this, if Demographic Score is 12 and Firmographic Score is 30:

 

GET /rs/123-ABC-456/images/add-to-12.json
Range: bytes=300-309

 

Bytes 300-309 of add-to-12.json are, conveniently:
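{"v":"42"}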

 

Giving us the correct sum, 42.

 

I’ll say it again: it ain’t pretty, but it is predictable and accurate. And it doesn’t require any external services, just a willingness to play on the wild side.

 

It’s not just a file download, it’s an indexed lookup!

It’s easy to confuse a Range request with a primitive file download. But it’s not, and the proof is in the performance.

 

Seeking the 1 billionth byte of a 1 GB file should be no slower than seeking the 1st byte. I’ve run convincing benchmarks with S3 and with IIS. Other HTTPds might not be as efficient, but that’s an engineering problem with those engines.[9]

 

Who is this for?

Hopefully, this post was interesting to anyone curious about HTTP, JSON, bytes and ranges and all that stuff.

 

But, to be clear, I don’t expect this far-out take on webhooks to be put into production by a totally non-technical person. It’s more for someone who may have some coding experience (from a past job or school) but – like most marketers – doesn’t have a corporate-approved place to host server-side code.

 

Even if you’re up to the challenge of writing secure, scalable webhook-compatible code... your employer may be (should be!) rightly uncomfortable about you “just” spinning up a production service somewhere in your personal cloud.

 

So the next best thing, for certain kinds of data, is a zero-code webhook running right out of your Marketo instance. No compliance issues, no security issues, no worries.

 

Or just enjoy the post as a peek into my sleepless mind.


Notes

[1] Counterexample: One of our clients runs an FTP server that accepts uploaded CSVs. A back end process periodically reads rows from the FTP’d files and upserts leads into Marketo. The deeper back end connects to Marketo via HTTP, sure – but the front end FTP server ain’t webhook-compatible.

 

[2] A webhook-compatible service needs to allow authentication + authorization info (if any) to be passed in the same HTTP connection as the requested action. Don’t be distracted by doomed hacks like storing access_tokens in lead fields. Even when they sort-of-sometimes work, they’re adding statefulness. You can’t expect two invocations of the same webhook, even for the same record in your database, to know anything about each other.

 

Note this doesn’t mean the service itself can’t call stateful APIs from the back end: we do this all the time! But the first-hop connection, from the webhook to the service, is stateless. Any additional network hops are hidden from the webhook.

 

[3] The “requested action” means the single GET or POST from the Marketo-like app. But that doesn’t mean all related actions on the other side are completed in that same short period!

 

Take an SMS webhook: its requested action is enqueueing the outgoing SMS message within the provider’s infrastructure, not delivering the message to the recipient’s handset. (Let alone listening for 2-way responses, which is way outside of webhook-land.)

 

Similarly, a service that inserts rows into a remote database need not have finished committing and/or replicating data (making it readable by other apps) by the time it returns 200 OK to Marketo. It might complete the insert a few seconds (or even minutes) afterward. What’s important is that the payload is eventually stored, not that it’s stored in real-time.

 

On the other hand: when the service offers data enrichment, field calculation, or remote lookup tables, the requested action does mean finishing everything before sending the HTTP response (typically JSON or XML). So some enrichment apps, particularly those that try to combine data from multiple back-end services (with each of those next-hop requests possibly requiring multiple connections) can end up being unusable via webhooks: they may have the other requirements down, but not the performance.

 

[4] I mentioned a while ago that Twilio’s Lookup API can be used by a Marketo webhook because it supports Basic Auth (username/password) credentials carried along with a lookup request. So it’s compatible, even though it’s not advertised with the word “webhook”.

 

A basic HTML form’s action URL is typically webhook-compatible as well. By definition, that URL expects x-www-form-urlencoded keys and values in a single GET or POST. So as long as the webhook-enabled app supports Form/URL encoding (Marketo does), you can post from the server side as easily as from the client. That’s why you can use a webhook to call Marketo’s scriptless forms endpoint /index.php/leadCapture/save to do some cool cross-lead stuff.

 

(Yes, CSRF tokens break this compatibility.)

 

[5] In some cases, a database has a native HTTP/S endpoint so there wouldn’t technically be a different tier for code. SQL Server used to have an XML service inside it, for example, but that’s been removed.

 

Document databases typically have an HTTP API built in. But if they require multi-request OAuth authentication, that endpoint would end up incompatible with webhooks. Similarly, OData services are always web-compatible but not necessarily webhook-compatible.

 

[6] Where supported, you can ask for more than one range, but for simplicity let’s assume it’s only a single contiguous range and a standard (non-multipart) HTTP 206 response.

 

[7] Simple ASCII to avoid confusion about the length of a UTF-8 file with or without the 3-byte BOM.

 

[8] True, there’s no general reason why a webhook response can’t be treated as binary and then chopped down further, 8 individual bits instead of 1 ASCII character (and in turn mapped to 8 Boolean fields).

 

But specifically within Marketo, this isn’t supported. In order for Marketo to treat B as 01000010 and then map 01000010 back to this...

 

Lead.Field 1 = false
Lead.Field 2 = true
Lead.Field 3 = false
Lead.Field 4 = false
Lead.Field 5 = false
Lead.Field 6 = false
Lead.Field 7 = true
Lead.Field 8 = false

 

... you’d have to pass the original B response to a whole other webhook, and that webhook would need to have real code behind it (albeit simple) as opposed to a static file. So it would defeat the purpose of today’s no-code experiment.

 

[9] Using Range: against dynamically generated resources (not static files) can be sketchy and cost more in server resources than it saves in bandwidth. But that’s a different situation from this post. Imagine you have a service that dynamically creates image files with certain transforms (sepia, bunny face, whatever). It’s likely impossible to know what the 999th through 1000th bytes will contain in advance. So it needs to render the transformed image to disk or memory, then jump to the 999th byte. If the whole image might only be 1K, that’s really wasteful and it would be better if it didn’t accept Range requests at all.

Hello All! 

 

I'm on the Marketo product team at Adobe.  We are excited to take ABM to the next level and need your feedback.  Your input will ensure that we invest in areas that drive the most value for you.  You do not need to be an ABM customer to take the survey.  Please take a few minutes to respond and have your voice heard! 

 

ABM Survey Link: 

https://marketo.qualtrics.com/jfe/form/SV_0VdodK4krOf7dhX 


Look forward to your responses!

Liana 

 

In this edition of Marketo Master Class, we're teaming up with Marketo Champion Chris Wilcox to get into the weeds of Lead Scoring. Our aim was to break down the complexities around Lead Scoring and provide the Marketing Nation Community with actionable insights into best practices. Are you leveraging Lead Scoring in other innovative ways? Let us know in the comments!

 

1. What are the attributes of a successful lead scoring model? What are some factors that have resonated well with your Sales org?

 

I find the most important attributes of a successful scoring model are that they are both practical and scalable. From a practical perspective, your model needs to be designed for your business, your leads, and your pipeline. There is no one-size-fits-all lead scoring model. Of course, many concepts translate from business to business, but your organization needs to understand who are the best prospects for your product or service, and what actions those leads are taking that indicate that they may be primed for a sales conversation. Designing a model around your business will ultimately drive better success as you’ll be funneling the right leads to the right people.

 

Secondly, you can’t design a scoring model that requires every touch or action be monitored and scored by your marketing team. It needs to be designed and implemented in a way that works at any scale, and this can mean getting creative with the way your teams organize around certain engagements like conferences or live events. Making sure there is visibility into those actions in Marketo in as real-time as possible can be a (fun) challenge.

 

I have found that including your sales organization’s leaders in the discussions around what factors feed the leads being passed over to your sales team helps build organic support from within. Many times lead scoring can feel like a black box from a sales perspective, but bringing them into the conversation and revealing the “man behind the curtain,” so to speak, can really help them better understand that a lead scoring model’s purpose is to put the right people in front of your sales organization at the right time to drive better success within their sales pipeline. Getting their perspective and input on what helps them do their job better is a great way to drive adoption and alignment between your marketing and sales organizations.

 

2. What are some of the more sophisticated/non-conventional lead scoring strategies you have implemented in the past? 

 

Most lead scoring models are for identifying the best prospects, but an interesting use case is to build a scoring model for servicing existing customers.  To do this, build a scoring model to classify your existing customers using the relevant attributes just like you do for prospects. Contract size, subscription level, all of the quality indicators your firm uses to classify your customers. Another way to do this could be to work with your sales team to have them select target current customers that they want to keep a pulse on as well using a boolean field on the contact record that they can update in the CRM.

 

Bucket them into categories like low, medium, and high value customers (get as granular as you’d like and is practical; I’ve seen 20+ levels of customer value). You can execute the classifications a few ways: by using a tiered score value (e.g. 10 for Low, 20 for Medium, 30 for High), by creating a custom field defined by these smart lists, or even by using a segmentation to maintain the contact’s category. Whichever process makes sense to you and for your instance.

 

In this example, I have a new score field of “Servicing Score” that will change based on the customer’s attributes. These smart campaigns would run periodically (weekly/monthly) to keep the score current.

 

Servicing Category Scoring Program Structure:

 

“High” Value Category Smart List:

 

“High” Value Score Change:

 

Servicing Score Token Values:

 

From there, I like to combine this with a custom field that date stamps a contact when Marketo sees logged sales email or phone calls with an existing customer. This can be tricky depending on how your sales team logs activities and how Marketo can interpret them. You may want to partner with your CRM admins if needed to get this field created and populated accurately.

 

Trigger to Populate “Last Contact Date”

 

Using these two things, you can classify your existing customers into groups and overlay which customers have not had a sales contact in the last XX days. Immediately, that group of people (or at least the high-value subset) should be of interest to your sales team, which you could communicate via alerts and/or Smart List subscriptions (or SFDC reports if your score values make it into your CRM!).

 

For the scoring piece, I like to combine the Servicing Score with a modified version of Engagement Score (webinar attendance, web visits, email clicks, etc.) to help the sales team identify a good time to reach out to that pool of customers. The reason I use a separate score value is that you will want to apply additional choice options on your change score flow steps for the servicing behavior based on the Last Contact Date which you wouldn’t want to do with your overall behavior score. You can build these right into the flow steps of your existing behavior score rules, and even use the same token values. You just add a choice based on your last contact date cutoff.

 

 

You might also have a window of time (90-180 days) where you watch for an activity to pass the lead over to sales, but then a cutoff: once the last contact date crosses it, you simply hand the lead off at that time.

 

From here, you have trigger programs watching for Service Behavior Score changes to contacts with a Service Score for whichever groups you want to include and either assign a task or push an alert to their sales rep for follow up.

 

 

 

This seems complicated, but it’s not!  Identify your best customers however you can, try to understand how long it’s been since a good sales contact has taken place, and watch for the activity of those customers to alert your sales reps. Also, whenever the last contact date updates, make sure you’re resetting your Servicing Behavior Score value to 0!

 

To get started with something like this, you could do something as simple as watching for activity from your high-value clients on things like the pricing page of your website or your contract terms and conditions pages (if you have them) to give your sales team some insight into that activity. You don’t have to start with the most complicated servicing model.

 

3. What results did the above lead scoring models achieve that a standard model could not deliver?

 

Servicing lead scoring models can help your customer churn and retention rates, and give your sales team a leg up on taking care of clients that matter to your organization by systematically surfacing important customers that need a sales touch.

 

4. How do you strategically update your lead scoring model without having to reinvent the wheel every time? 

 

This all comes down to what attributes are delivering the best outcomes for the MQLs that are being handed off to sales. Make sure you’re properly populating acquisition programs to understand first touch attribution and identify the best lead sources. I typically try to take a deeper dive into the best recent Close>Won opportunities to understand what about those opportunities made them such great wins (vertical? company size? industry?) to see if there are potential levers to pull to overweight those types of opportunities in the future, and to underweight those attributes that lead to more Close>Lost opportunities.

 

These changes should be small and incremental unless the current outcomes of the scoring model are extremely poor. We want to continue to push the right people down the funnel, and understanding what works and optimizing our scoring is the easiest way to do it, but we don’t want to constantly change who we’re feeding to sales without proper discussions and analysis. That can quickly lead to misalignment and confusion between marketing and sales.

 

5. How long does a lead scoring model need to be active to determine its success and what metrics do you consider?

 

I think this is entirely dependent upon the length of your organization’s sales cycle, but there are ways around completely succumbing to the (sometimes) lengthy cycles many organizations operate within.

 

For example, if your sales cycle takes 4-6 months, you might want to optimize your scoring to simply get more SALs instead of the best-case scenario of optimizing towards Close>Won opportunities. In most cases, it should take at least a few months to really prove or disprove the scoring model, but there are definitely cases where it should be shorter.

 

When evaluating the validity of your scoring model, a significant amount of analysis should be put into what activities are feeding the positive sales outcomes, which is where having a plan for attribution and properly ensuring acquisition programs are getting populated play a critical role in your ability to properly evaluate a scoring model.

 

6. When should you use a global vs. local program in your lead scoring strategy? 

 

In my experience, I have found using global lead scoring rules saves a ton of time and effort in the long-term from a maintenance perspective. Even if you have multiple scoring models in place in your instance, having those score tokens and trigger programs operating globally saves a ton of time when you want to make changes or adjustments to your scoring. Obviously, this can all be done at the local level, but there is a level of scalability and ease of maintenance of a global program structure that you can’t achieve with local program builds.

 

Typically, I see global scoring programs built off of program status change triggers, or using some interaction as a trigger point (fills out a form, visits key web page, etc.)

 

7. How do you leverage tokens to scale your lead scoring model? 

 

The biggest place where we leverage tokens is in the scoring values themselves. To streamline the management of scoring change values for any given activity or engagement, creating all of those values as tokens puts all of your scoring values in a single place to manage and maintain, which can save a ton of time in the long run, especially if you’re running multiple models or tweaking your scoring frequently. I find it best to keep a single Scoring folder wherever you keep your Operational or Data Management campaigns and keep all of your scoring values in that parent folder. This way, if you want to use the same score token across multiple scoring model programs, you can do so.

 

In the Servicing Score example above, you can reference the exact same token values for that score with no additional build, you’re just applying a choice to the change score to account for the servicing need.

 

 

 

8. Do you have any innovative plans for future lead scoring models?

 

The biggest innovations with lead scoring are all around predictive analytics and/or next-best-product type models. Many organizations are working with data teams to be able to better predict the next best product or offer for any given individual based on a variety of factors. You might have multiple demographic or quality scores running, one for each product or category your firm offers, and having a model that would identify a contact’s likely best fit could drastically improve the quality of contacts that get handed off to sales as MQLs.

 

Say you have Badges Earned Custom Object ($BadgesEarned_cList in Velocity) to store a lead’s community achievements. Values are like so:

 

{description=Onboarding, points=200}
{description=Influencer, points=500}
{description=Helpful, points=200}
{description=Evangelist, points=350}

 

And you want to display the lead’s badges in a table with N alternating background colors. For example, with 3 alternating colors (leave aside the garish color scheme, that's not the point!):

 


This is an old-school task that just about any template language can handle.[1] To start, we’ll stick with generic methods; later, we’ll see how Velocity’s Alternator helper class can save 1 or 2 lines of code.

 

As in other languages, alternation means a loop plus a modulo function (or native modulo operator[2]) which in Velocity is MathTool.mod:

 

#set( $rowColors = ["#ff4400", "#ccff00", "#0099cc"] )
#set( $numRowColors = $rowColors.size() )
#if( !$BadgesEarned_cList.isEmpty() )
<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
#foreach( $badge in $BadgesEarned_cList )
#set( $rowColor = $rowColors[$math.mod($foreach.index, $numRowColors)] )
<tr bgcolor="${rowColor}" style="color:#333;">
<td>${badge.description}</td>
<td>${badge.points}</td>
</tr>
#end
</table>
#end

 

So I first set up an ArrayList, $rowColors, with the colors I want to alternate (in order from the top).

 

Then on every loop I get the modulo N of the loop index where N is the number of items in $rowColors.  (The list happens to have 3 items now, but the code dynamically adjusts if you add/remove colors.)

 

  • The first time through the loop, the loop index is 0.  0 modulo 3 is 0, so that means I use index 0 of $rowColors ($rowColors[0]) in turn.
  • Next time, the loop index is 1. 1 modulo 3 is 1. So $rowColors[1].
  • Next loop index is 2. 2 modulo 3 is 2: $rowColors[2].
  • Now the fun begins. Next time through the loop, the loop index is 3. 3 modulo 3 is 0. So we use $rowColors[0] again.
  • And that’s how alternating colors are done!

 

The HTML output is like so:

 

<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
<tr bgcolor="#ff4400" style="color:#333;">
<td>Onboarding</td>
<td>200</td>
</tr>
<tr bgcolor="#ccff00" style="color:#333;">
<td>Influencer</td>
<td>500</td>
</tr>
<tr bgcolor="#0099cc" style="color:#333;">
<td>Helpful</td>
<td>200</td>
</tr>
<tr bgcolor="#ff4400" style="color:#333;">
<td>Evangelist</td>
<td>350</td>
</tr>
</table>

 

Simplifying a bit with AlternatorTool

You’ve seen above that without any “alternator-aware” code you can get exactly the output you want.

 

Velocity does offer a cool tool that abstracts away the modulo stuff. But as you can see in the Alternator source, it uses exactly the same method, just as compiled Java:

 

 

An Alternator might be infinitesimally faster because it’s compiled, but you’d never notice this in reality. The reason to use Alternators is to save lines of code, and every line does count in a language as verbose as VTL. Here’s how to get the same output using an Alternator:

 

#set( $rowColors = $alternator.manual(["#ff4400", "#ccff00", "#0099cc"]) )
#if( !$BadgesEarned_cList.isEmpty() )
<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
#foreach( $badge in $BadgesEarned_cList )
#set( $rowColor = $rowColors.getNext() )
<tr bgcolor="${rowColor}" style="color:#333;">
<td>${badge.description}</td>
<td>${badge.points}</td>
</tr>
#end
</table>
#end

 

16 lines instead of 17: yay! And a little easier to read, maybe.

 

Create an Alternator object by passing a List to $alternator.manual. Velocity then handles the modulo-based loop internally, whenever you call getNext().
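In isolation, the pattern looks like this (a throwaway sketch with made-up values; each getNext() renders the current value, then advances):

#set( $seq = $alternator.manual(["odd", "even"]) )
${seq.getNext()} ## renders "odd"
${seq.getNext()} ## renders "even"
${seq.getNext()} ## renders "odd" again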

 

(If you’re confused about the difference between auto and manual, I don’t blame you, but trust me that manual + getNext() is what you always want, especially because of the more advanced application we’re going to do next.)

 

Alternating between complex objects

Alternating between Strings (single hex colors like "#ff4400") is the simplest task.

 

But let’s say you want to vary the background and foreground (text) colors for optimal contrast:

 


Now, you’ve got a set of 3 “color schemes” and each scheme has 2 characteristics (background and foreground).  You should already be thinking: an array of objects!

 

And that’s exactly what I do here, passing an [] of {}s – an ArrayList of LinkedHashMaps, technically – to AlternatorTool:

 

#set( $rowColorSchemes = $alternator.manual([
{
"bg" : "#ff4400",
"fg" : "#fee"
},
{
"bg" : "#ccff00",
"fg" : "#333"
},
{
"bg" : "#0099cc",
"fg" : "#ccff00"
}
]) )
#if( !$BadgesEarned_cList.isEmpty() )
<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
#foreach( $badge in $BadgesEarned_cList )
#set( $rowColorScheme = $rowColorSchemes.getNext() )
<tr bgcolor="${rowColorScheme.bg}" style="color:${rowColorScheme.fg};">
<td>${badge.description}</td>
<td>${badge.points}</td>
</tr>
#end
</table>
#end

 

Each HashMap has two keys, bg and fg. Notice I only call getNext() once per iteration to advance to the next object in the List.

 

The generated HTML:

 

<table style="border-top:10px solid #aaa;">
<tr bgcolor="#fff">
<td>Activity</td>
<td>Points Earned</td>
</tr>
<tr bgcolor="#ff4400" style="color:#fee;">
<td>Onboarding</td>
<td>200</td>
</tr>
<tr bgcolor="#ccff00" style="color:#333;">
<td>Influencer</td>
<td>500</td>
</tr>
<tr bgcolor="#0099cc" style="color:#ccff00;">
<td>Helpful</td>
<td>200</td>
</tr>
<tr bgcolor="#ff4400" style="color:#fee;">
<td>Evangelist</td>
<td>350</td>
</tr>
</table>

 

 

That’s it for Alternators for today! But I have another post ready to go on how to combine Iterators and Alternators for some advanced fun. Stay tuned.

 

 

 

Notes

[1] Many template systems have macros for N = 2, like isOdd and isEven, built-in. Few offer anything as flexible as AlternatorTool.

 

[2] Indeed, Java operators like % are semi-supported in Velocity as well. But the VTL parser is much stricter than Java’s, leading to hard-to-debug problems. I always use the MathTool methods instead.

Many thanks to the 407 of you who took the time to complete our “Want To Help Us Make The Marketing Nation Community Even Better?” survey earlier this year! We are also incredibly grateful for those who talked to us 1-on-1 over the last couple of months.  

 

The feedback you provided was invaluable. It gave us meaningful insights into what you really want and need from your community, validated (and invalidated!) assumptions we had, and brought us interesting new ideas.

 

What were our key takeaways from your feedback? We learned that 1) more personalized content, 2) expert peer content, and 3) a simplified user interface should be our highest priorities for the next iterations of our Community. We also heard loud and clear that you want better search and that we have work to do around archiving out-of-date content.

 

We also learned some interesting things about you, our amazing Community members:

  1. Most of you have been using Community for 3-5 years
  2. You spend most of your time on Community learning from peers and experts
  3. 55% of you use Community 2-3 times a week or more
  4. 44% of you are most interested in getting answers to specific questions

 

So, what’s next? For starters, we have kicked off a UX design project, are engaging with a federated search vendor, and Jonathan Chen, your Community Manager, is putting together a plan for cleaning up Community content. Look for more updates from Jon as we get closer!

 

If you have additional feedback or comments, please feel free to reach out to me at dulsky@adobe.com or Jon at jonchen@adobe.com. I’m very excited to be partnering with you on the next phase for Community.

ABM Account Scoring

Posted by Conner Hatfield, Jul 9, 2019

***Posted on behalf of Tallie Belitz, Senior Manager of Sales & Marketing Operations at Kollective Technology.***

 

Marketo’s Account Based Marketing module is a powerful tool, but it can be intimidating if you don’t have a strategic plan to drive value from  it.

The key ABM insights lie in the Account Score, but there are a few things you need to set up first to ensure your Account Score is truly including everyone in your database associated with that company. Follow these five easy steps below:

  1. Start with a solid lead scoring model. Begin with the Definitive Guide to Lead Scoring to implement best practices. 
  2. Develop your Ideal Customer Profile (ICP) and create a Target Account List. We kept it simple and started with just 3 key characteristics:
  • Number of Employees
  • Geography
  • Industry
  3. Create a Smart List for your Target Account List. This list may take some time and focus to create, but it can be referenced in nearly all your campaigns, so it’s important that it is accurate. Be sure to include as many variables as possible to capture variations on the company name, but exclude similar names.

4. Build out your Named Accounts in the Account Based Marketing Module. The companies in your Target Account List may already exist in your CRM, but some may not. To add them to the ABM module, look for them first in the Discover CRM Accounts section. If you don’t find them there, search for them in the Discover Marketo Companies section, and add them.

5. Make fuzzy logic crystal clear! Marketo Lead-to-Account matching uses key information on the lead record, such as email domain, inferred company name from IP address and company name (learn more about this process in the Marketo Product Docs). This feature associates most people from that company to that account, but does not always capture everyone. If the company has multiple domain names or the person uses their personal email address they may not be automatically associated with that account. It’s imperative to associate everyone to the appropriate account to get an accurate Account Score. There are two things you can do to make sure this happens:

 

Option 1: 

  • Under the Named Account tab, click on the specific named account and navigate to the Potential People tab. This is where you find weak matches associated with the account. After analyzing the potential people, click the person or people you would like to add to the named account and click Add People. 

Option 2: 

  • Look at your Target Account List to see if anyone is missing. 
  • First create a customized view that includes the Named Account field.
  • Sort by Company name and look for any blanks in the Named Account field.

  • Highlight the names, then under Person Actions, select Marketing, and choose Add to Named Account.


With everyone associated with a company in the Named Account, you will have an accurate and actionable Account Score. From here you can begin monitoring the scores at a regular cadence to look for increases in activity at the account level, not just the isolated lead score for each person.

With Account Scoring set up, you can now unlock the power of ABM!

The post title is a bit of a mouthful, but if you've been bitten by a certain feature gap you'll know what I mean.

 

One of the first things you learn about Wait steps is they don't have a literal Add Choice option.

nelson pointing at marketo

 

This can be frustrating when you want to vary the Wait delay based on  runtime conditions (that is, conditions you can’t know until the person has qualified and entered the flow) importantly including no delay at all.

 

But with a tiny bit of work, you can simulate Wait step choices.

 

It’s a matter of managing a Date/DateTime field, earlier in the same flow, using Marketo’s simple plus/minus support.

 

Here’s such a field:

 

field mgmt

 

And here’s a flow that uses that field to manage a subsequent Wait step:

 

flow steps

 

This approach works because of 3 convenient truths:

 

  • Change Data Value is synchronous within a single flow[1]
  • Date tokens understand a few math operators
  • Wait steps using a Date token will be skipped if the Date token is empty

 

Truth be told, I don't always endorse this tack over multiple Smart Campaigns. Whatever’s making you want drastically variable Wait periods (other than implicitly variable periods like wait-until-anniversary) may mean the lifecycle is going to differ in other ways as well, in which case discrete SCs help you keep your sanity. But it's there if you want it.

 

P.S. Yes, you can also use a Number {{my.token}} for the delay itself, a setup that might be almost too cool to follow! Or a Text {{my.token}} (don’t know why you’d choose this over Number, though) as long as you don’t include the unit (“days”, “hours”) in the value; keep that hard-coded in the Change Data Value box.

 




Notes
[1] “Synchronous” meaning the New Value is guaranteed to be readable in the next flow step. Contrast this with, for example, webhook-based updates, which are asynchronous (background) value changes.

Marketing Nation members, I’m thrilled to announce a new addition to my team, Jonathan Chen. Jon will be taking lead as Community Manager and, as such, will be working with all of you to ensure our Community remains the best community ever!

 

Here’s Jon’s message to you:

 

“Hello Marketing Nation,

 

My name is Jonathan, and I joined Marketo last week as a Community Manager for the Marketing Nation Community. I am so excited and grateful to be a part of such an innovative and quirky team at Marketo and can't wait to engage you all with interesting topics and ideas to further expand your knowledge of Marketo. Before joining this team, I was a product marketer at a prominent enterprise headset company, where I helped expand its innovation business by developing scalable marketing programs and communities to drive awareness and advocacy. I often used Marketo in my previous job to develop monthly consumer newsletters and quickly fell in love with the highly sophisticated, yet surprisingly intuitive platform. Of course, I learned that Marketo has countless solutions for every use case and was blown away by the Community's respectful users, vast database of creative ideas, and snappy responses to even the most complex of questions. The Marketing Nation is truly one of a kind, and it's because of every single one of you. From the daily posters to the occasional lurkers, know that I am here to support you on your Marketo journey. There are many great projects planned for the Community, and I can't wait to show you what's in store in the coming months. Please feel free to reach out to me at jonchen@adobe.com any time you have a question, complaint, or concern - I'm here to help!”

 

Jonathan Chen, Marketo Marketing Nation Community Manager

 

Please join me in welcoming Jon to the most amazing nation of marketers on earth!

 

Janet

I recently received a request from a customer who was trying to update a person's email address via the API. He was trying to use the Create and Update Lead API, and because the lead had a new email address, he was getting a duplicate person record created with the new email address. What he wanted was to update the existing Person record with the new email address instead.

 

Since I have had this question a few times, I thought it would be helpful to write a blog post about it. The steps to follow are below, but... spoiler alert... the answer is to use the Sync Leads Using Post endpoint instead.

 

 In order to update the right Person record, I'll need the Marketo Person ID for that person.  I can get that using the Get Leads By Filter Type endpoint like this: 
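
 

A rough sketch of that call in Python with the requests library (the base URL, access token, and email address are placeholders for your own values):

 

import requests

BASE = "https://<munchkin-id>.mktorest.com"   # your instance's REST API endpoint (placeholder)
TOKEN = "<access-token>"                      # obtained separately from /identity/oauth/token

# Get Leads by Filter Type: look up the person by their current email address
resp = requests.get(
    f"{BASE}/rest/v1/leads.json",
    params={
        "filterType": "email",
        "filterValues": "old.address@example.com",
        "fields": "id,email",
    },
    headers={"Authorization": f"Bearer {TOKEN}"},
)
person_id = resp.json()["result"][0]["id"]    # the Marketo Person ID used in the next call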

 

 

Once I have the id of the Person record, I can call the Sync Leads Using Post endpoint to update that record. As you can see from the screenshot below, I’m passing in an action of “updateOnly” and “id” for the “lookupField” value. Then I pass the fields I want updated in the “input” array; namely, email in this case.
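
 

Continuing the same sketch (same placeholder BASE and TOKEN as above), the Sync Leads call with “updateOnly” and a lookupField of “id” looks roughly like this:

 

# Sync Leads Using Post: update the existing record's email address, matched by id
payload = {
    "action": "updateOnly",
    "lookupField": "id",
    "input": [
        {"id": person_id, "email": "new.address@example.com"},
    ],
}
resp = requests.post(
    f"{BASE}/rest/v1/leads.json",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.json())   # expect result[0]["status"] to be "updated"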

 

To verify that the email address was changed, I can make the  Get Leads By Filter Type call again and pass the new email address. 

 

Finally, there are several ways to get the person id you’ll need for this. You can do a Bulk Lead Export if there are a lot of them, or call Get Leads by Filter Type if it's a one-off and you need to look it up by email address.

 

Many of our customers have complex business requirements that require more than what the baseline Marketo data model can support. That's why we have Custom Objects in Marketo. Custom Objects allow the customer to support more advanced data models in Marketo, but there are some best practices to follow when using them.

 

Measure Twice. Cut Once

 

When you first design your Custom Object, we advise you to take your time and get all of the fields correctly defined before you approve the object. The reason for this is data contiguity: when you approve the custom object, Marketo creates a table in the database for that object definition.

 

If, at a later time, you decide that you need one or more additional columns, and you simply add them and approve the object again, the additional columns will not be stored in the same table that was created for the original object definition. Going forward, you'll have data in two different tables for that object, which will require Marketo to use a join whenever it accesses that object. This can have an adverse effect on the speed of data retrieval from your Marketo instance, impacting script execution, data access via the API, and even the user experience in the Marketo web UI.

 

As an alternative, you could always drop and recreate the Custom Object with the correct columns, but then you would have to migrate the data from the old Custom Object to the new one, which adds a level of complexity that most of our customers don't want to get into. The best option is to get it right the first time; as woodworkers say: "Measure twice, cut once."

 

Recreating a Custom Object

Just in case you find yourself in a position to need to rebuild your Custom Object, here are the steps you'll need to follow to do it correctly: 

  • Stop any third-party integrations that are using the Custom Object via the API.
    • This will prevent any third-party systems from trying to access the Custom Object via its API while you're rebuilding the object.
  • Remove all references to that object in filters, flow steps, smart lists etc. 
    • This will prevent any trigger campaigns from executing based on your work.
  • Write down the exact spelling of the Custom Object.
    • So you can recreate it later.
  • Delete the Custom Object.
    • Depending on how many rows of data are in the Custom Object, you may have to ask support to do this.
  • Wait about 15-30 minutes to allow Marketo to confirm that the Custom Object is fully deleted.
    • This ensures you can recreate the object with the same API name.
    • If you delete and immediately recreate the object with the same API name, the recreation can fail.

 

Don't Use Custom Objects as a Data Landfill

Many of our customers do regular bulk exports of their data for storage in a data warehouse or some other third-party analytics system. If your Marketo instance is full of Custom Object data that is not, and never will be, useful for that analysis, your exports will take longer and have a more significant effect on your API rate limits.

 

To avoid letting your Marketo instance fill up over time with useless data, use your Custom Objects to store only data that is useful now or will be in the future. Things you should not store in a Custom Object include:

  • Transient information  - Activity data should be stored in a Custom Activity instead
  • Order history - Consider purging old data from time to time
  • Any data that will not be used for marketing activities

 

Deletion of Custom Object Data

If, after following the steps above, you still need to purge data from your Custom Objects, you can do this through the API. Use the Delete Custom Objects endpoint, and remember to batch your calls so that each call deletes up to 300 rows at a time. This will save you from burning through your API rate limit.
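
 

For illustration, here’s a minimal Python sketch of that batching. The object API name, its dedupe field, and the values to delete are all made-up placeholders; the 300-rows-per-call batching is the point:

 

import requests

BASE = "https://<munchkin-id>.mktorest.com"   # your instance's REST API endpoint (placeholder)
TOKEN = "<access-token>"                      # placeholder access token
OBJECT_NAME = "orders_c"                      # hypothetical custom object API name
rows_to_delete = [{"orderId": str(n)} for n in range(1000)]   # hypothetical dedupe-field values

# Delete Custom Objects: send at most 300 rows per call
for start in range(0, len(rows_to_delete), 300):
    batch = rows_to_delete[start:start + 300]
    resp = requests.post(
        f"{BASE}/rest/v1/customobjects/{OBJECT_NAME}/delete.json",
        json={"deleteBy": "dedupeFields", "input": batch},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()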

Email marketing is the foundation of a lot of marketing strategies. With the help of Marketo Champion Beth Massura, this edition of Marketo Master Class puts a magnifying glass on the role that tokens, templates, and snippets play in the creation of a marketing email to help you achieve actionable insights and greater efficiency.

 

1. What are the benefits of using tokens, snippets, and templates in emails?

 

There are a few different benefits to using these features. First, tokens, snippets, and templates provide consistency across email communications, which is really important from a brand standpoint. We want the recipients to recognize the emails as coming from a single organization, and having standardized templates and snippets helps with this.

These elements also make it a lot easier to manage global changes. I will never forget when we had to manually update our social media profile links across 400+ HTML email files (in a pre-Marketo system) twice within just a few months. What a pain! Now we can just update a small number of snippets and the updates will appear in all the email assets using them.

 

Templates can help increase confidence that the emails will render well. If the templates have been thoroughly tested across dozens of recipient browser/email client/device combinations, and users are working within the templates, it may be sufficient to test each individual email message across a smaller set of recipient setup variations instead of all of them.

 

Tokens, snippets, and templates also help users create emails more easily. It can be overwhelming to start from scratch every time, and would likely lead to errors. We have the most popular modules and snippets appearing by default in the template, so there is less work for users to set up their email.

 

2. What are different use cases for using snippets?

 

Snippets are good for any block of content that you want to have standardized, but with multiple variations that can be substituted for one another: branding elements, contact information, executive bios, common product descriptions or disclosures, etc. You can set areas in the template that can be replaced only with snippets, or users can replace rich text areas with snippets.

 

We use snippets for brand lockups at the top of the email template as well as for the email footer content such as legally required links and information. We have a couple dozen different departments, programs, and research centers sharing our Marketo instance, and many of them have a variation of the main logo (for the header) as well as their own contact information (for the footer). The central marketing team manages all snippets in the Default workspace, organized into subfolders for each department. These subfolders are then shared to the respective departmental workspaces so they have access to only the header and footer snippets that they should use, and they don’t have access to modify any of the snippets themselves.

 

 

For the header logos, previously our central designers and developers edited the rich text section of an email asset to drop in a logo with the right dimensions and styling and just hoped that the group would remember to clone that particular asset going forward. Unfortunately this meant that if the group cloned the wrong email asset, they’d have to reach out to the designers/developers to replace the logo again. There were also a few users who deleted or replaced the logo in the rich text editor with one that didn’t meet brand guidelines. Offering a snippet “library” of the approved logos/lockups makes it easy for the user to select the one that’s appropriate for the context while maintaining the brand standards. We have set guidelines for the logo sizing, etc., so having the snippets centrally managed helps with that as well.

 

Example header snippet:

 

Additional header snippets are created for logo variations such as these:

 

Our footer snippets contain the contact information for the respective group, as well as the mandatory unsubscribe, preference, and privacy policy links for compliance and user experience purposes. Because much of this information is required by law, it’s imperative that we ensure all these details appear correctly on every single email. Using snippets helps us keep this consistent and avoid variance from the standard.

 

Example footer snippet:

 

 

3. What are some under-utilized tokens? How do you leverage program-level vs. folder-level tokens differently?

 

We use a token for the copyright year at the bottom of the email. Previously we had it as a text token on the top folder level for each workspace; we just had to update the values at the start of the year. But now we use an email script token (also in the top level folder) so the year is automatically updated:

## Grab a TimeZone object so the static TimeZone.getTimeZone() can be reached through it,
## then format today's date as a four-digit year in the CST zone
#set( $timeZoneObject = $date.getCalendar().getTimeZone() )
${date.format("yyyy", $date.getDate(), $date.getLocale(), $timeZoneObject.getTimeZone("CST"))}

 

Another under-utilized one is the program ID. It’s a system token rather than a program token, but we add it in small text at the very bottom of the email footer snippets. We found this helps us locate the associated program if we’re forwarded an email and asked to troubleshoot why a person did/didn’t receive it, or to create a similar email.

 

You would want to use folder-level tokens for items that should appear in every single program, such as copyright year. We also have a folder-level token for a tracking code with a default value that is then updated on each program; it is referenced at the end of every URL in the email to tie into our web metrics system. We want a tracking code for every program, so even though the value has to be updated on each program, having the token there on the program by default is a good reminder to the user.
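
 

For example (the token name and query parameter here are hypothetical placeholders), every link in the email might end with something like:

 

https://www.example.edu/some-page?trk={{my.Tracking Code}}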

 

In the below example, the highlighted “20190610-WEB sample program” inherited tokens from the “BethM Marketing Activities” and “Web Content Programs” folders. It will not inherit tokens created on “20190615-EM sample program”.

 

Program-level tokens can be useful when a value is relevant only to a specific program type, such as “event registration url” for an event program. Because folder-level tokens are inherited automatically by all programs within that folder, it wouldn't make sense to have this type of token in a folder. If you did, it might appear in an unrelated newsletter program through token inheritance. While tokens can be deleted in a program, you can't re-inherit a token once it's been removed.

 

4. How do you use tokens, snippets, and templates together to fit within a larger strategy?

 

Tokens, snippets, and templates can be integral parts of a Center of Excellence! We created an all-in-one module-based email template to accommodate a wide variety of layouts for all kinds of emails: newsletters, event invitations, announcements, etc. The template also includes the two areas for the header and footer snippets as well as the tokens as mentioned previously. This template is then used for the email assets in our cloneable standard email send program and the nested email sends within an event program.



5. What are the most common pitfalls you see when people are trying to use tokens, snippets, and templates?

 

Make sure you involve stakeholders at the beginning of any initiative to develop standardized tokens, snippets, and/or templates - which ideally is part of the Marketo implementation - in order to gather all requirements before anything is executed. You don’t want to waste time developing a template module that no one will ever use, or to miss one that would be used frequently. While a template can be edited, it won’t push those updates to email assets already created from the template.

 

Creating a template involves HTML/CSS coding plus Marketo-specific syntax to define the editable areas. Make sure you are working with a developer who is familiar with this; HTML/CSS alone won’t give users editable areas in an email asset created from the template.

 

All snippets to which a user has access will show up in the Insert Snippet dropdown menu; you can’t select which snippets will be options for area A vs. area B. We use naming conventions to distinguish between header and footer snippets.

 

 

When making changes to a snippet, be sure to update the text version as well. Unlike an email asset, a snippet won’t carry your edits over to the text version automatically.

 

Consider whether a piece of information would be best served within a token or something else, like a variable within a template module or even just typing the content directly into the email asset. It can be confusing to enter some email content in tokens on the program level and then other pieces within the email editor. Tokens might be preferable when the content is listed in multiple places and is likely going to change, so all references can be updated at once.

 

It’s definitely possible to create an email asset that solely references program tokens so you don’t have to go into the email editor at all to customize images, colors, text, and links. This “Mad Libs” method can work well for emails that are straightforward and standard in format, such as form submission confirmations. But the moment you want to add in something special, you’re going to have to enter the email editor anyway. And the tokenized method may not be as intuitive to those users who would be more comfortable entering/selecting the elements in a WYSIWYG context. (We are not currently using the “Mad Libs” method, but one of our departments did in the past.)

 

Example tokens for a fully tokenized email:

 

The associated email asset’s editor view; it isn’t pretty!:

 

The preview of the above email asset:

 

6. Are you planning to try anything new with these features?

 

One next step could be for us to put dynamic content within our snippets. For example we have regional campuses/offices for some of the school’s departments. Instead of having separate snippets for each region, we could have the snippet dynamically populate the regional address based on the recipient’s region segment.

 

 

_________________________

 

You can find more insightful content in the June edition of The Fearless Forum!

Hi Marketing Nation! As you may have noticed, the look and feel of content within Community has changed. These changes were made during the release of a new build last week. Unlike with former builds, we are unable to revert to the old layout.

 

We wanted to communicate the permanency of these changes and work alongside our customers to ensure Community remains the best place for Marketo knowledge, best practices, and discussions.

 

If you have any questions about the new layout, feel free to leave a comment below.

Despite instructing a Community member to “search my posts” the other day, I ran a search myself and there wasn’t a one-stop explanation of what Do Not Track (DNT) means in Marketo (on a deeper technical level than you get on the official doc page). So here goes.

 

As you probably know already, there are 2 DNT options, Ignore and Support:

 

 

We won’t worry about Ignore.

 

But what does it really mean to choose Support? On a technical level, it means one specific thing:

 

If a user’s browser sends the DNT: 1 HTTP request header along with a Munchkin-logged pageview or link click, Marketo will not save the activity to the Activity Log database.

 

So here are some things Do Not Track = Support does not do:

  • it does not stop gathering Clicked Email stats: email clicks are still tracked unless you separately turn off link tracking
  • it does not stop Munchkin JS libraries from loading
  • it does not stop Munchkin from initializing and setting its _mkto_trk cookie
  • it does not stop Munchkin from sending a Visit Web Page (assuming you're using the default configuration which always sends a VWP on startup)
  • it does not stop Munchkin from sending a Clicked Link for <a> links on the page

 

But again, here's the very important thing it does do:

  • it stops the Marketo platform from storing the Visit Web Page and Clicked Link hits sent by Munchkin

 

 

Why not stop Munchkin completely?

It's not that Marketo wouldn't like to be more proactive on the browser side, I'm sure. But the weirdest thing about DNT is that there's no reliable (let alone cross-browser) programmatic way to know whether the user has set the preference! Ergo, you cannot know if the person would've wanted you to turn off Munchkin downloading/initialization/hit logging. You have to dumbly send the hit in all cases, and then the server discards it if it's accompanied by the “please ignore me” header.

 

The privacy appeal of having the DNT setting be unreadable in the browser is clear — it's the equivalent of an HTTP-only cookie that can't be seen from JavaScript — but it certainly creates confusion. For example, someone with DNT enabled who’s also running Ghostery or similar will still see the Munchkin tracking JS show up (and get blocked), which is suboptimal: ideally, it wouldn’t show up at all. You can end up looking like a worse corporate citizen than you actually are. (A link on your Privacy Policy confirming that you honor Do Not Track is useful.)

 

 

The browser's-eye view

The browser sending the DNT: 1 header is a prerequisite, of course. Privacy-oriented browsers do this by default; other browsers do it in Private/Incognito/InPrivate mode only; the rest do it for all pages/tabs/windows when the option is selected. Here's the setting in an older version of Chrome, for one of a zillion examples, which will send DNT: 1 for all pages viewed in this user profile:

 

 

And here’s a screenshot of the HTTP request for the main document, showing the header:
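
 

In outline, that request looks something like this (the host and path are placeholders; the DNT line is the one that matters):

 

GET /some-landing-page.html HTTP/1.1
Host: pages.example.com
DNT: 1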

 

 

And Munchkin’s Visit Web Page XMLHttpRequest, showing the same HTTP request header and its acknowledgment in the response:

 

In this edition of Marketo Master Class, Marketo Champion Chelsea Kiko takes a deep dive into the process behind building a Center of Excellence (CoE). Chelsea covers various considerations for building a CoE in both a fresh Marketo instance and existing instance, among other tips and best practices. Read on to discover how to build and maintain your own CoE, from implementation to long-term execution.

 

1. What are the benefits of setting up a Center of Excellence (CoE) within Marketo?

 

Setting up a CoE in Marketo is great for consistency, accuracy, and scaling your Marketo instance. CoEs typically contain programs that are often repeated and have the same type of operational steps. Users can clone these to create their own programs, which provides easy access to high-quality programs and scalability for your strategies. The other great aspect about a CoE is it can be built by an expert (either internal or external) and used by new users in the instance with the same type of consistency across the board. This also saves time and increases efficiencies for your team.

 

2. What are the essential components of a CoE? What aspects will vary based on use case?

 

 

The first essential component of a CoE program or folder is a naming convention. This is one of the most important pieces in my opinion for CoE programs because when programs are cloned out, the naming conventions are automatically adopted. You should set up naming conventions for all assets, including emails, landing pages, forms, smart campaigns, programs, etc. Having uniform naming conventions for each program is vital to keeping the instance clean and consistent even when various users are working in Marketo. As you can see in the above screenshots, everything is ready to go, even sample email naming conventions. This is all pre-canned so you can clone, update, and have a program done much quicker than if you were creating from scratch.

 

Next, it’s key to build out templates for any operational programs you want to run. This could include programs to: change program status, increase lead score for specific programs, sync to your CRM, send alerts, etc. Once you have them built in your CoE, they are ready for you to edit or optimize for each program right away.

 

Finally, it is helpful to create templates or template styles for specific programs in your CoE. Once you clone the template program and the modules or email layouts are ready, all you have to do is edit tokens or change content/swap images. Even though modular email 2.0 templates are easy to use, it still saves time to have the right modules and layout of your email ready for cloning, especially if it’s an event or nurture program where you have several emails.

 

Be mindful of the fact that when you’re building your CoE you will probably be using different program types. In the screenshot below, there is an email program for a one-time send, a basic nurture program, a more advanced nurture program, and a live event.

 

The aspects of the program templates that will vary based on use cases will be your tokens, operational programs, list arrangements, and reporting. If you have the structure set, changing these aspects is quick and easy. You can easily adapt your programs by editing your CoE program templates, allowing you to scale your instance.

 

 

 

 

**Tip: If you are an agency or consultant, it really helps to have operational programs in the CoE. You can even create these in your sandbox and import the programs into any of your clients’ instances to help them with data normalization, lead scoring (customized per client), deliverability programs, etc.**

 

3. What are some best practices for creating a CoE in a fresh instance of Marketo vs. building a CoE in an existing instance?

 

Creating a CoE within a fresh instance is definitely a different strategy than building one in an existing instance. Both have their pros and cons but let’s start with the fresh instance. When you are new to an instance, sometimes it’s hard to decide what your CoE programs will be. What we normally do for clients is:

1) Host discovery sessions to see which programs they are envisioning

2) Map the programs out beforehand

3) Show the internal stakeholders or clients to gain approval before we build

 

This helps us ensure everyone is on the same page. Also, having a visual of how your CoE operates is a great training and educational tool for stakeholders outside of the system who don’t need to know the details of the Marketo instance but do need to understand how it works.

 

For an instance that is already in use, it is important to measure first, then determine what the CoE will contain. For example, are there any programs that need to be refreshed or changed? If so, incorporate that into your CoE. Check to see what programs are built over and over again and standardize these for your CoE so people can just clone a single template program and update minimally. This makes the CoE useful for each Marketo user and guarantees you’re looking at real data in your instance as you build your CoE out.

 

4. How do you recommend aligning with key stakeholders when planning out the CoE?

 

When aligning key stakeholders for a CoE, you don’t necessarily need to dive into the weeds of how it will operate in Marketo. What I normally do is map out an example program for them to understand and align it to their business needs.

 

For example, the screenshots below show a template map I put together for a healthcare client where the stakeholders own the cancer service line. The first image lays out the process template that would live in the CoE, including a landing page and three different email streams for three different messaging focuses they could include for that line. The second image lays out a customized program that was cloned from that process template. You have to tailor your message to the stakeholder you’re educating. If it’s a marketing manager who won’t be spending much time in Marketo, mapping out the higher-level strategy is usually the best method to get them on board.

 

 

5. What is important to consider when rolling the CoE out to your teams?

 

Make sure you host trainings for your Marketo users. Usually I have the flow maps ready and then host live trainings where I go into the instance and demonstrate how to clone and update certain programs. For example, you don’t need to train an event manager (who owns the Marketo event programs) on how to clone a nurture template, because it’s not relevant to their work. Customizing your training to your users is crucial for the success of your CoE.

 

6. How do you set up processes to maintain that CoE over time?

 

Setting up processes to maintain the CoE is important for the long-term health of your instance. Normally the Marketo expert/admin should bear the responsibility of updating and maintaining the CoE. For example, if your Marketo users keep cloning a nurture template but are adding more operational programs and emails to it, that should signal a CoE change. Your CoE should always contain the most up-to-date programs, ready to be cloned by your team. Sure, customization is always going to be added, but if the same programs or emails or reports are being added time and time again, it’s time to refresh your CoE. Lastly, I recommend taking a quarterly look at your CoE – maybe a new reporting feature in Marketo wasn’t taken into account in your templates, or maybe you’ve altered your webinar strategy – all of those changes need to be audited and monitored in the CoE.

 

**Tip: when building out your CoE, always use the description field to explain what each program does. This ensures success and helps train Marketo users on each of your CoE programs. Example below.**

 
