
When building out various components of a demand generation strategy within Marketo, testing is an area that can often be overlooked. It’s tedious, time-consuming, and not always the most exciting task, but the more complex the logic within a component, the more important testing becomes. Something as simple as a missed flow step or an incorrect trigger may have downstream implications that impact the accuracy of your team’s reporting, sales processes, or even the external user experience. It’s much easier to catch and fix errors prior to going live than to attempt to troubleshoot, fix, and clean up data within live programming. A solid testing plan will help you do this.

 

What Should I Be Testing?

The answer here is – everything you build. This may include (but is not limited to) lead lifecycle programming, lead scoring, lead nurturing, channel tracking programming, data management programming, email compliance (CASL/GDPR) flows, integrations, and user-facing assets such as emails, landing pages and forms.

 

Testing Best Practices

Creating Scenarios

When you're testing a piece of programming, you'll want to make sure you're testing all possible scenarios that could happen to a record within that flow to ensure nothing is falling through the cracks. The best way to do this is to create a spreadsheet that lists out all of these scenarios, the starting attributes a record should have for each scenario, the test steps or updates you'll need to make to a record to simulate a scenario, and the end result that should happen if everything works properly. Include columns where you list the test record you used for each test, pass/fail, and any notes for failures. Remember to include edge cases, and not just perfect path scenarios.
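For illustration, here is a minimal Python sketch of what such a scenario spreadsheet could look like as data. The column names, scenarios, and values are hypothetical placeholders for a lead lifecycle flow; adapt them to whatever you are actually testing.

```python
import csv

# Hypothetical test scenarios for a lead lifecycle flow; columns and values are illustrative only.
scenarios = [
    {
        "Scenario": "New lead reaches MQL threshold",
        "Starting Attributes": "Lead Score = 0; Lifecycle Stage = Known",
        "Test Steps": "Simulate a form fill, then raise Lead Score to 70",
        "Expected Result": "Lifecycle Stage = MQL; alert sent to lead owner",
        "Test Record": "",   # e.g., jd20180715-1@test.com
        "Pass/Fail": "",
        "Notes": "",
    },
    {
        "Scenario": "Edge case: existing customer submits a form",
        "Starting Attributes": "Customer = TRUE",
        "Test Steps": "Submit a gated content form as the test record",
        "Expected Result": "No MQL; record excluded from sales alerts",
        "Test Record": "",
        "Pass/Fail": "",
        "Notes": "",
    },
]

# Write the scenarios out as a CSV you can open as a spreadsheet and mark up during testing.
with open("test_scenarios.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(scenarios[0].keys()))
    writer.writeheader()
    writer.writerows(scenarios)
```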

 

 

Test Record Format

A standard naming convention for your test records is useful for several reasons. First, you'll want to be able to easily differentiate test records from live data if you're doing your testing in a Marketo production instance rather than a sandbox. Second, for documentation purposes it’s good to have a way to quickly associate a test record with a particular scenario, especially if you're testing complex logic that requires multiple rounds of testing, fixes, and re-testing. Our team uses the following standard format for our test records:

 

First Name: [tester first initial][tester last initial][yyyymmdd]-[numerical designation]

Last Name: Test

Email: [tester first initial][tester last initial][yyyymmdd]-[numerical designation]@test.com

Company: Test

 

So, for example, a tester with the initials JD creating their first test record on July 15, 2018 would use a First Name of JD20180715-1, a Last Name of Test, an Email of JD20180715-1@test.com, and a Company of Test.

 

Be sure to take into consideration any required fields as well as starting data attributes that are needed for your scenarios.
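If you generate test records programmatically, for example to build a CSV for import, a small helper can enforce the naming convention above. This is a minimal sketch; the function name and field labels are my own choices for illustration, not part of the convention.

```python
from datetime import date
from typing import Optional

def make_test_record(tester_initials: str, number: int, test_date: Optional[date] = None) -> dict:
    """Build a test record following the [initials][yyyymmdd]-[number] convention."""
    stamp = (test_date or date.today()).strftime("%Y%m%d")
    designation = f"{tester_initials}{stamp}-{number}"
    return {
        "First Name": designation,
        "Last Name": "Test",
        "Email": f"{designation}@test.com",
        "Company": "Test",
    }

# Tester "JD", first record for a test run on July 15, 2018:
print(make_test_record("JD", 1, date(2018, 7, 15)))
# {'First Name': 'JD20180715-1', 'Last Name': 'Test', 'Email': 'JD20180715-1@test.com', 'Company': 'Test'}
```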

 

Testing in Production

Sometimes it will be necessary to test programming in a Marketo production instance rather than a sandbox. When you do this, you'll want to be cognizant of the fact that there may be live data, reporting, or routing in place. There are several measures you can take to mitigate the impact of your testing in a production instance:

  • When you build your Marketo programming in your production instance, add an extra filter to all of your Smart Campaigns so that only records with email addresses ending in @test.com can flow through, ensuring live data does not enter your flows before you're ready.
  • If your programming contains wait steps or time-based actions, reduce them to a shorter duration if needed to expedite testing.
  • If you are testing programming that lives completely in Marketo and is separate from any programming or processes that live in an integrated CRM, it may be useful to keep your test records within Marketo and not sync them to your CRM, if possible. If you have a centralized Smart Campaign that syncs records to a CRM either at creation or when certain attributes become populated, you could add a filter here for “Email Address NOT contains ‘@test.com’”.
  • If you do need to sync test records over to your CRM, consider suppressing @test.com domains from any lead routing or alerts you have built. If this isn't an option, communicate your testing plan to the users who are at the receiving end of those lead assignments or alerts.
  • Consider adding filters to reports or dashboards that your team leverages, to suppress records with @test.com domains. (A generic sketch of this kind of domain-based suppression check follows this list.)
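The domain check behind several of these measures is simple enough to express outside of Marketo as well, for example in a custom script or BI layer that post-processes exported leads. A minimal sketch, assuming records are available as dictionaries with an email field:

```python
TEST_DOMAIN = "@test.com"

def is_test_record(email: str) -> bool:
    """True if the email uses the test domain and should be suppressed from live routing and reporting."""
    return email.strip().lower().endswith(TEST_DOMAIN)

# Example: filter an exported lead list before feeding it into downstream reporting.
leads = [
    {"email": "jd20180715-1@test.com", "status": "MQL"},
    {"email": "prospect@example.com", "status": "MQL"},
]
live_leads = [lead for lead in leads if not is_test_record(lead["email"])]
print(live_leads)  # only prospect@example.com remains
```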

 

Documenting Test Results

Documentation is an important part of the testing process. It helps you keep track of your progress, and down the road, if something breaks, you'll have a documented history of when it was working, which can help with troubleshooting. As you go through your test scenarios, mark off each pass and fail. For the failures, make note of what went wrong. When you complete your scenarios, go back to the failures and determine what fix is needed, then re-test afterward, indicating the passes and failures for the additional rounds within the same spreadsheet. If you're testing programming that was built by someone else, look for trends in what’s failing and summarize them for the builder, to help them identify where an error might have occurred in their programming.

 

Testing Tips for Common Components

Below are some testing tips for common components that you may have built or are planning to incorporate into your Marketo infrastructure.

 

Lead Scoring

When you design and implement lead scoring, it’s a good practice to test your scoring model itself prior to building it in Marketo. This is particularly important if you have score thresholds aligned to certain stages of your Revenue Cycle Modeler. Run some tests to make sure that if someone completes a certain number of activities, or submits your progressive form a certain number of times, they accumulate enough behavioral points from the activities and demographic points from the data entered via form submission to reach the score threshold for the stage you consider them to be in. Check your math. If you document your scoring model in a spreadsheet, you can create Excel formulas to quickly sum up a person’s score when you simulate various activities.
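As a worked example, here is the same kind of threshold check sketched in Python rather than Excel. The activities, point values, and the 65-point threshold are entirely hypothetical placeholders; substitute your own scoring model.

```python
# Hypothetical scoring model; point values and threshold are placeholders, not recommendations.
BEHAVIORAL_POINTS = {"email click": 5, "webinar attended": 20, "form submitted": 10, "web visit": 1}
DEMOGRAPHIC_POINTS = {"job title match": 15, "target industry": 10, "company size fit": 10}
MQL_THRESHOLD = 65  # assumed threshold aligned to an RCM stage

def total_score(activities, attributes):
    """Sum behavioral points for simulated activities and demographic points for known attributes."""
    behavioral = sum(BEHAVIORAL_POINTS.get(a, 0) for a in activities)
    demographic = sum(DEMOGRAPHIC_POINTS.get(a, 0) for a in attributes)
    return behavioral, demographic, behavioral + demographic

# Simulate two form fills, a webinar, two web visits, plus two demographic matches from progressive profiling.
b, d, total = total_score(
    ["form submitted", "form submitted", "webinar attended", "web visit", "web visit"],
    ["job title match", "target industry"],
)
print(b, d, total, total >= MQL_THRESHOLD)  # 42 25 67 True
```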

 

Once you finalize your scoring model and build it in Marketo, you'll need to test all of your triggers and flows. Ensure the correct number of points is being added to your demographic, behavioral, and total lead score fields as defined in your scoring model. If you're building multiple scoring models, ensure the correct score fields are leveraged throughout each build.

 

Revenue Cycle Modeler

Before you build your RCM, it’s a good practice (and not just for testing purposes) to document the criteria a record must possess to move into each stage – scoring thresholds and any field attributes. When you're creating your RCM test scenarios you can use this as a guideline. You should test every trigger and flow step within each Smart Campaign that is leveraged in your RCM logic. Consider any skip or turn-back logic you have incorporated, where a person can move from one stage to a non-subsequent stage. If your RCM listens for contacts to be added to opportunity records in your CRM, then you'll want to simulate this behavior within your CRM (if you're testing an RCM in a Marketo production instance, be aware of implications this will have in your CRM production instance). Testing an RCM is often more time-consuming than building it.

 

 

Channel Tracking Programs

It’s likely that your team is leveraging Marketo Programs, channels, and triggered Smart Campaigns in some capacity to capture content interactions across various engagement channels. A common approach also involves a URL query string strategy so that a single landing page can be promoted across channels. Any time you build a new tracking program, it’s best to visit the corresponding URL with its query string, submit the form, and ensure the correct Smart Campaigns are triggered and the appropriate actions are then taken on the newly created record (depending on your architecture, this may be data value updates, a confirmation email, and/or a Marketo Program status update).
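To make the query string piece concrete, here is a small Python sketch for building and parsing tracked URLs. The base URL and parameter names are hypothetical; substitute whatever your tracking programs actually listen for.

```python
from urllib.parse import urlencode, urlparse, parse_qs

BASE_URL = "https://pages.example.com/ebook-offer.html"  # hypothetical landing page

def tracked_url(source: str, campaign: str) -> str:
    """Append illustrative tracking parameters so one landing page can be promoted across channels."""
    return f"{BASE_URL}?{urlencode({'utm_source': source, 'utm_campaign': campaign})}"

url = tracked_url("linkedin", "2018-07-ebook")
print(url)  # https://pages.example.com/ebook-offer.html?utm_source=linkedin&utm_campaign=2018-07-ebook

# Parsing the values your triggered Smart Campaigns would key off of:
params = parse_qs(urlparse(url).query)
print(params["utm_source"][0], params["utm_campaign"][0])  # linkedin 2018-07-ebook
```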

 

 

One-off Batch Email Sends

Run a test record through each Smart Campaign used within your Marketo Program. Ensure Marketo Program Status updates, wait steps, and re-sends are all happening as intended. If you're using filter logic, run some checks to ensure there are no errors here. Before you schedule your batch campaign, confirm that the number of people who will run through the Smart Campaign is in line with what you'd expect, and ensure your Smart Campaign settings are correct.

 

 

Automated Outbound Nurture

Whether you're leveraging Marketo Engagement Programs to house your nurture logic, or building “from scratch” with traffic cops, watchdogs, and wait steps, you should be sure to test all logic that moves people into, out of, and within your nurture program. Ensure that only the people you want nurtured are able to flow in, and make sure that if a person’s data profile changes then they are moved to the correct new spot within the nurture program, or out of nurture completely. Furthermore, ensure that the correct emails are being sent at the correct points in time, and that interactions with the promoted content in those emails are accurately tracked. You'll also want to test the emails themselves – send tests to designated people on your team to ensure links are correct, and everything renders appropriately.

Just when you thought the topic of GDPR might settle down, it’s still hot news. A little more than a month after the enforcement date, big names are reported for compliance violations, major US publishers block European visitors, and data privacy measures get a little closer to home.

Forced Consent Complaints

It wasn’t much past midnight on GDPR’s official enforcement date when the first complaints were filed. Apparently, tech giants make for easy targets: a slew of complaints was filed against Google and Facebook, claiming forced consent. In other words, both platforms require users to give “all or nothing” consent in order to use their respective software, rather than parsing out data consent areas and allowing users to provide individual consent for each use. Similar complaints have since been filed against Apple, Amazon, and LinkedIn. Are the violations legitimate? All of the complaints are still pending; no resolutions or fines have been assessed.

Blocked Media Sites

Some major US publishers have taken a different route to GDPR compliance by blocking EU visitors entirely. The Los Angeles Times and the Chicago Tribune are two of the bigger media companies blocking EU visitors due to non-compliant ad targeting practices. Other publishers, including USA Today, are displaying non-targeted ads, while Meredith and The Washington Post have started asking visitors to accept new site terms in order to view their sites, including an upsell for an ad-free option. Publishers—particularly The Los Angeles Times—need to get this figured out, as the data privacy landscape is about to get even more complicated.

The Golden State Adopts GDPR-Like Legislation

Barely one month after GDPR went into effect, California Governor Jerry Brown signed the California Consumer Privacy Act, aimed at protecting the data privacy rights of California residents. Much like GDPR, California’s act seeks to give consumers more control over personal data usage, including the right to know how data will be used, what data is being collected and sold, and the right to complete data deletion. The bill, still in its early stages, will likely be amended before its enforcement date of January 1, 2020. And if you think this is just hype or California making noise, keep in mind that California was the initiator of anti-spam email statutes, later replaced by the federal legislation we now know as the CAN-SPAM Act. Privacy legislation is coming to the United States—be prepared!

GDPR—Still on the Radar

In just the first month of enforcement, we’ve seen complaints filed, organizations suspending service to Europeans, and copy-cat legislation emerge. The bottom line in all of this is that best data practices need to be our baseline standard. GDPR’s enforcement date is just the beginning; taking proactive measures now will ensure you’re prepared for new legislation, without interruption to your business operations. Recommended reading:

 

How to Avoid a €20 Mistake with your Data: Tips for ensuring your database is clean, junk records removed, and country data normalized.

 

Requirements for Consent – What You Need to Know: Understand what GDPR requires for consent plus how it compares to CASL requirements.

 

And of course, leave your comments below, and together we’ll support each other through another round of compliance preparations.

 

 

As originally published on the Perkuto blog.

Sometimes I think asking, “Which attribution model do you prefer and why?” would be a great (marketing) conversation starter. From single-touch to complex regression-based analysis, some marketers are passionate about a particular method while others are still contemplating which is the best option. The topic sparks an interesting discussion.

 

Of course, all models are simplified approximations of an infinitely complex reality, and no attribution model is perfect. Attribution models attempt to estimate the influence of your various marketing campaigns on human behavior that is unpredictable, irrational, and fluid in nature. There’s no way of actually knowing that your white paper or webinar was responsible for 33% of the purchasing decision and therefore should receive a third of the credit. But even with the flaws of attribution, applying the appropriate model, understanding the data it’s generating, and acting on the directional insights will help you make better marketing decisions.

 

In this post, we’ll explore the different models and why you might use each one.

 

Getting Started with Attribution

 

Before settling on a particular attribution model, assess your needs and be realistic about what you want to accomplish; this will guide your model decision.

 

  • What questions are you trying to answer? Contribution to revenue, pipeline value, understanding your sales cycle from the first touch to deal closed, which efforts are most impactful, why leads aren’t converting to sales—what do you need answered to sharpen your marketing plan and align reporting with your organization’s objectives?
  • Is your sales cycle simple or complex? Do you have a lot of marketing efforts or only a few?
  • What’s attainable for your organization? Do you have the appropriate tools in place? Are you just starting with attribution or are you more experienced?

 

Once you know what you want to achieve, then you can select a model that’s appropriate for you. (And I should note, unless otherwise stated below, all models discussed are as defined within Bizible’s platform.)

 

First-Touch Model

 

Stemming from the philosophy that a sale cannot happen if a customer doesn’t know you exist, a first-touch model applies 100% of attribution credit to the first tracked marketing interaction, which may occur before the person even enters your marketing database. The model itself is simple, and data analysis is less complicated. In a simple sales cycle with a quick or transactional sale, it’s very easy to see marketing effectiveness and contribution to revenue. The challenge with a first-touch model is data collection, because you need a way to capture and store the anonymous first touch and then associate it with the person when they eventually enter your lead database. You can solve this challenge with a custom tracking script, and Bizible tracks this out of the box.

 

In more complex sales cycles, first-touch attribution acknowledges the brand awareness stage, highlighting which of your early marketing efforts were most successful at attracting new customers to your product or service. If you seek to gain insight into top-of-funnel activity, then a first-touch model can be useful in providing answers. If you want to know marketing influence in later stages of your sales cycle, a first-touch model falls short as it only tells part of the story by overvaluing early-stage efforts and ignoring subsequent campaigns.

 

Lead Creation Model

 

Going beyond brand awareness, a lead creation model attributes 100% of the credit to the point at which a customer is interested enough to provide contact information and essentially becomes a “lead.” For example, if a customer visits your website three times and on the fourth occasion completes a form for more information, the marketing effort that drove the fourth visit would receive 100% of the revenue credit. The philosophy here is that the campaign that converted a prospect to a lead is the most significant. Many organizations start with a lead creation model because it provides an excellent introduction to attribution and the set-up is relatively straightforward.

 

Like first-touch, this single touch, simple model does not provide a good representation of longer and more complex sales cycles; for that, you need a multi-touch model.

 

Evenly-Split/Linear Model

 

A Linear or Evenly-Split model gives equal weight to every touchpoint, with the rationale that every marketing effort is essential to moving a prospect through the sales pipeline. The challenge with this model is that it oversimplifies the marketing process and, when assigning credit, fails to take into account when each interaction occurred.

 

For example, let’s say a person enters your database, consumes a few blog posts and then - a few months later - attends a VIP dinner and soon after is added to a new opportunity. With an evenly-split model, the casual content consumption that did not occur in proximity to any meaningful funnel event would get the same amount of credit as the high-touch dinner that likely made a much bigger impact on the sale. If you relied on this model exclusively, you might easily draw some inaccurate conclusions about relative channel importance.

 

Nevertheless, a Linear model can still provide some insight into which marketing programs are impactful. If you are tracking attribution using Marketo and Revenue Explorer, this is the only multi-touch option available.

 

U-Shaped Model

 

A U-shaped model is a simple multi-touch model that distributes credit between the two early-stage touches to provide a more balanced view of which channels are generating new names in your database. In this model, 50% of the weight is assigned to the first touch and 50% to the lead creation touch. The philosophy behind it is to emphasize lead generation while also sharing credit between the touches required to grow your database. For this reason, I prefer it over either a First-Touch or Lead Creation single-touch model for evaluating lead generation activities.

 

W-Shaped Model

 


 

A W-shaped model is very similar to a U-shaped model, except it acknowledges a third milestone: opportunity creation. Each primary stage of the sales cycle (first touch, lead creation, and opportunity creation) receives 30% of the revenue credit, and the remaining 10% is split among the other touchpoints. A W-shaped model is one of the most popular attribution models, as it gives marketers a well-rounded view of the marketing campaigns leading up to the opportunity creation stage.

 

What’s missing in a W-Shaped model is insight into any activities that occur after the opportunity is created. For example, let’s say you organize a special event for customers and later stage prospects and then several opportunities close soon after. With a W-Shaped model, the significant investment in this event wouldn’t receive any credit.

 

Full-Path Model

 


 

Similar to the W-shaped model, a full-path model also acknowledges major milestones in the sales cycle, now extending all the way through the revenue stage. Each significant stage (first touch, lead creation, opportunity creation, and the closed/revenue stage) receives 22.5% of the credit, with the remaining 10% spread across the touchpoints in between. The Full-Path model is obviously more complete than the W-Shaped model and is arguably more sensibly weighted than an evenly-split/linear model, as the touchpoints that occurred in nearest proximity to important funnel events get a much higher percentage of credit. This can produce reports that better reflect the “actual” impact of these important activities while still giving credit to everything. For businesses with a complex sales cycle that want full visibility, a Full-Path model is a smart choice and remains simple to implement.
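To make the weighting arithmetic behind these position-based models concrete, here is a minimal Python sketch over a simplified, hypothetical journey. The touchpoint names and milestone positions are assumptions for illustration, and this is not Bizible's implementation; it just reproduces the percentage splits described above.

```python
def position_based_credit(touchpoints, milestones, milestone_weight, remainder):
    """Give each milestone index `milestone_weight`; split `remainder` evenly across the other touches."""
    others = [i for i in range(len(touchpoints)) if i not in milestones]
    share = remainder / len(others) if others else 0.0
    return {tp: (milestone_weight if i in milestones else share) for i, tp in enumerate(touchpoints)}

# Hypothetical journey: indexes 0, 2, 4, 6 are first touch, lead creation, opp creation, closed-won.
journey = ["paid search ad", "blog post", "webinar form fill", "case study",
           "VIP dinner", "product demo", "contract review"]

u_shaped  = position_based_credit(journey, {0, 2}, 0.50, 0.0)           # 50/50, nothing else credited
w_shaped  = position_based_credit(journey, {0, 2, 4}, 0.30, 0.10)       # 30/30/30 plus 10% spread
full_path = position_based_credit(journey, {0, 2, 4, 6}, 0.225, 0.10)   # 22.5% x 4 plus 10% spread

linear      = {tp: 1 / len(journey) for tp in journey}                  # evenly-split
first_touch = {tp: (1.0 if i == 0 else 0.0) for i, tp in enumerate(journey)}

print(round(sum(w_shaped.values()), 4), round(sum(full_path.values()), 4))  # 1.0 1.0
```

A Custom model, described next, is essentially this same arithmetic with your own stages and weightings plugged in.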

 

Custom Model

 

A more advanced multi-touch option within Bizible is the Custom model. With this model, you can define custom stages in the sales cycle in addition to those included in the Full-Path model—a common one to add is an “MQL” stage. You can then define your own percentage weightings for each stage based on your unique business model. For example, a custom model might give a product demo stage 10% of the credit, reflecting the perceived significance of that event in the sales cycle.

 

This model offers more flexibility and requires some extra configuration. Its relative freedom also brings a certain level of risk, as the marketer might have inaccurate assumptions about the relative weightings that the different stages should receive and thereby create misleading distortions in the model.

 

Companies may want to run a Full-Path model first, then as knowledge of their unique sales process deepens, transition to a Custom model to achieve a more tailored approach.

 


 

Machine-Learning Model

 

This model uses the same stages as the Custom model, but in this case the machine makes recommendations for weighting credit between the various stages, representing each stage’s relative importance to winning a deal based on three criteria:

 

  • Predictiveness: the correlation between stage progression and whether the deal will close
  • Ease/Difficulty: a high conversion rate implies a stage is less important in the customer journey
  • Uniqueness: if a touchpoint is shared with multiple stages, the credit is shared, too

 

The algorithm is not random—Bizible based it on millions of touchpoints and buyer journeys. You can use the insights from the Machine-Learning model to refine and alter your Custom model, ultimately producing a machine-learning influenced model that incorporates human insights specific to your organization.

 

Tactic-Weighted Model

 

In a Tactic-Weighted model, credit is allocated based on the importance of the specific marketing tactics involved. For example, attending a webinar may get more credit than downloading an e-book, and attending a prospect VIP dinner may get even more.

 

This type of model—or one that blends it with a position-based model defined above—makes a lot of sense to many marketers, who intuitively know that spending four hours at a high-touch event naturally carries more weight than casually perusing web content.

 

This is an advanced model that is not available out of the box in any platform I’m aware of, but is something an analytically-mature organization could engineer within a BI tool.
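For what it’s worth, the core of such a model is not much code. Here is a hedged sketch with entirely made-up tactic weights; a real implementation would calibrate the weights against your own funnel data and likely blend in the position-based logic above.

```python
# Hypothetical tactic weights; these numbers are illustrative, not benchmarks.
TACTIC_WEIGHTS = {"VIP dinner": 5.0, "webinar": 3.0, "ebook download": 1.5, "blog post": 0.5}

def tactic_weighted_credit(touchpoints, default_weight=1.0):
    """Allocate credit in proportion to each touchpoint's tactic weight (assumes distinct labels)."""
    raw = [TACTIC_WEIGHTS.get(tp, default_weight) for tp in touchpoints]
    total = sum(raw)
    return {tp: w / total for tp, w in zip(touchpoints, raw)}

print(tactic_weighted_credit(["blog post", "ebook download", "webinar", "VIP dinner"]))
# {'blog post': 0.05, 'ebook download': 0.15, 'webinar': 0.3, 'VIP dinner': 0.5}
```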

 

That’s a Lot of Models!

 

One of the nice things about Marketo’s acquisition of Bizible is that marketers now have more model options to choose from, from single-touch to multi-touch, simple to complex. To some, the options may seem overwhelming. My advice: take an inventory of your needs and start with what’s attainable. Remember, you can always transition to a new model as your knowledge and understanding grow. No model is perfect, but attribution will help you gain insight into your customer journey and the relative influence of your marketing efforts.

 

In my next post, I’ll address how to leverage your attribution data to fine-tune your marketing strategy.

 

Want a deeper dive?

 

I'll be presenting a webinar, Bizible Essentials for Marketo Users, on July 10 at 1:00 ET. We’ll explore the differences between Revenue Explorer and Bizible, the solutions Bizible offers, and the impact on your daily operations. Reserve your seat here: http://bit.ly/2KRmXBF

 

And to go completely meta, here's a Bizible report showing the registrations by channel for the Bizible webinar so far. This offer-by-channel report is really easy to produce in Bizible, and I'll describe how at the webinar.

 

[Image: Bizible report showing webinar registrations by channel, viewed in Salesforce]