
Whether or not you came to my talk on the Analytics that Matter at Summit 2018, the real key to understanding how First Touch and Multi-Touch Attribution are calculated in Marketo lies in this document. Consider it your post-Summit 2018 homework. Yes . . . I was that kinda TA. What happens when you reach success after the opp is created? What happens if a person is acquired after the opp is created but placed on the opp before it closes? All those questions will be answered by doing these word problems.

This attribution logic is used in Program Analyzer, Performance Insights, and Advanced Report Builder (aka Revenue Explorer/RCE).

 

You don't need a Marketo login.

Just take this document: The Key to Finally Understanding how FT and MT Attribution are Calculated

  1. Print it out.
  2. Book a room for yourself and anyone who wants to do math word problems together for 45 minutes.  (Think college study group) Heck, make it a party and order pizza, or take one of your marketing team meetings and do it together. 
  3. Grab a pencil.
  4. Do the exercises.

 

This will be the most important 45 minutes you will ever spend if you really want to understand how attribution is calculated in Marketo.  You don't even have to read the first 2-3 pages.  The important thing you need to know lies in these rules below and the 20 exercises in the workbook attached.

[Screenshot: First Touch and Multi-Touch attribution rules]

Note:  FT Rule #5 and MT Rule #4 are only applicable if you choose the Explicit setting. 

[Screenshot: the Explicit attribution setting]

Let me know how it goes.  Would love to hear what you guys think and if it was helpful. 

It is that time of year. My favorite time of year, and that's Marketing Nation Summit. I've been thinking about all y'all in the Marketing Nation all year, figuring out what to bring to you to help us be bolder and more fearless marketers. And for me . . . it's all about reporting. All day, every day. Just like last year, I'll be giving a talk on the Analytics that Matter on Tuesday, May 1st at 10:15am, and I will be posting my slides and recording here, but there is only so much I can cram into 45 minutes. There are fundamental concepts about reporting and attribution that must be conveyed, absorbed, and understood. So whether you are going to Summit or not, these important concepts drive success or failure in your reporting. Until now, there hasn't been a single place where all of this information is provided.

 

Marketo Revenue Attribution Explained

 

This is the Marketing Reporting and Attribution bible for Marketo.  If you really want to understand how to get reporting right in Marketo, take the 20 min or so and marinate on these core concepts.  This is what I live, eat, and breathe every day to make reports sing.

 

See y'all at Summit!!

 

PS There will be a pop quiz at my talk.

PPS Guess it's not a pop quiz if you know ahead of time.

This year was my 5th Summit, and I'm always envious of people who manage to pick the best sessions to attend. The good news is that everything is recorded. The hard part is finding the time after we come back to dedicate to listening to the recordings. For our last Marketo User Group meeting here in Silicon Valley, I listened to the top 8 sessions (as determined by attendance and rating) and summarized them for our local group. Attached is the slide deck for your enjoyment. Here's a brief summary of each of these sessions: who would benefit, key takeaways, etc. Feel free to use the slide deck to present internally or to help justify next year's trip to Summit.

 

Session 1:

7 Human Behavior Hacks that Increase Engagement and Response

Nancy Harhut

Audience:  Marketing Practitioners

This session really gets into the human behavioral science side of marketing. It gave great ideas on things to try to optimize conversion rates, A/B tests, and responses to emails and landing pages. If you want to geek out about what works for your audience and the science behind how people respond to digital assets, then this is a great talk to listen to.

 

Session 2: 

Analytics that Matter: The Right Reports For Every Step of the Buyer's Journey

Jessica Kao

Audience:  Marketing Execs, Marketing Practitioners

We have so much data at our fingertips that too often we get stuck in analysis paralysis. With so many reports we can generate, which ones do you even start with? This session covers the fundamental principles that should guide which reports to look at for each stage of the funnel and how to prioritize them. Here's a hint: reports should lead to an action. For those who want to get a handle on attribution, how Marketo calculates it, what it all means, and what questions are answered by First Touch and Multi-Touch, this is the talk for you. Think that reporting is out of reach because your instance isn't ready? The session also covers some quick wins you can act on immediately to understand what is happening in your instance today.

 

 

Session 3:

Look Sharp! Data Visualization for Marketing Ninjas

Martin Kihn

Audience:  Anyone who needs to make a dashboard of reports

At some point, we all have to make a dashboard of reports, whether it's in SFDC, your CRM, Marketo, PPT, etc. But we never really stop to think that there might be a method other than slapping together a whole bunch of charts and graphs. Usually it's the last thing on our minds. This was the first time I had heard a talk about choosing different graphs or charts for displaying different types of data. It makes complete sense that data visualization is, in a way, storytelling, and some thought should be invested into how we tell the story and not just what we are showing.

 

Session 4:

Shake the Funk! The Data Behind Deliverability & How to Stay Clean

Jacob Hansen, Matt Rushing

Audience:  Marketing Ops, Keeper of the Marketo database

 

If you send out emails and care about email deliverability (which you should), then this talk is worth your while. It explains what contributes to your sender reputation, how to monitor it, and what you should do about it, and it covers some basic technical concepts at a level even the non-techies can follow.

 

Session 5:

How to Build a Killer Content Marketing Strategy

Amanda Todorovich

Audience:  Content Marketers, anyone producing content that feels like they are staring up at an impossible mountain

The speaker was the Content Marketing Institute's 2016 Content Marketer of the Year. It's a great success story about Cleveland Clinic's journey. Prepare to be inspired.

 

Session 6:

Creating and Managing a Lead Lifecycle in Marketo That Will Make Your Sales Team Happy

Steve Susina

Audience:  Owner of the Lead Lifecycle, Marketing Operations, Marketing Practitioner

This is the perfect how-to recipe for building your lead lifecycle. Just as important as actually building it in Marketo is involving the stakeholders like sales: what meetings to have, what issues need to be discussed and agreed upon to ensure success, and how to avoid the common pitfalls. If you want to build out a lead lifecycle in your organization, then this is on your must-watch list.

 

Session 7:

Be the Exception! How Brilliant Marketers Get Bigger Results by Doing it Their Way

Jay Acunzo

Audience:  Anyone looking to be inspired

Talks like this one are why I love Summit. I always walk away inspired. It's a high-energy talk about how not to be ordinary, and it keeps you engaged the whole time. So if you are stuck at work staring out the window on a sunny summer day, give this a listen. It might perk you right up.

 

Session 8:

Forrester Reveals the 7 Key Steps to Customer-Obsessed B2B Marketing

Lori Wizdo

Audience:  Marketing Executives, Marketing Practitioners

At first, I thought this was going to be about Customer Marketing, i.e. marketing to people who are already customers. In actuality, it's about marketing to both prospects and customers, but putting the customer persona (the person you are trying to sell to) first. Perhaps I was the only one who made that mistake, and now I have admitted it to all of you. I hope you don't think any less of me. Anyway, this is one of those talks that makes you think, question how you are doing things now, and consider how to fundamentally change the way you are marketing. You have to wield enough power in your organization to actually make these cultural or mindset changes. Or, if you are a marketing practitioner, it's great food for thought for later on down the road when you are running the show.

 

Bonus Session:

How to Maximize your Value – Results from the 2017 Marketo Compensation Survey

Jason Seeba and Inga Romanoff

Every year they put together the Marketo Compensation Survey and review the results at Summit. Once this is out, I will post it here for everyone. This is a must-have if you are using Marketo, which presumably you are if you have made it this far down the post. People have used this to negotiate their salary for new jobs or to help justify yearly raises. We talked about this at our last Marketo User Group. Here in Silicon Valley, we've seen an uptick in folks renegotiating their salary every year rather than just being subjected to whatever meager raise comes along. One of the reasons is that if you've got Marketo skills, you can pretty easily find another job that's willing to pay more, so keeping Marketo talent around becomes challenging, especially where we are.

 

Were there any Summit talks that you found inspiring or helpful? 

 

The slide deck also includes the new changes for MCE, new product announcements, and snapshots of the compensation survey.

You already know that Marketo isn't going to send the same email to the same person via the Customer Engagement Program. BUT. . . those aren't the only people that you want to exclude when you are running a nurture program.  You might be promoting a piece of content across 5 different channels and using multiple different emails.

What if someone attends a webinar on "How to Snag Cool Marketo Swag at Summit" and you want to offer the recorded webinar in your nurture stream? How do you make sure that the person who attended that webinar doesn't get that offer AGAIN? Here are 4 steps to make sure that doesn't happen.

 

Here's what we are going to build:

[Screenshot: the finished nurture exclusion setup]

 

Step 1:  Set up a Content Program for each piece of content that you are also promoting via nurtures. 

 

In this program "Content - Webinar Cool Swag", anyone who attended the live webinar, or watched the recorded webinar already needs to be added as a member in order to exclude them from receiving a nurture email with this webinar as the offer.   This would work the same way if it was a white paper.  Anyone from any channel that has downloaded a specific white paper would live in this program.  This program can be operational or not depending on your reporting needs.  But the important thing is that everyone you want to exclude for this specific nurture email resides in this program. 

 

Step 2:  Set up a Nurture Library

 

This step isn't actually required to make the above happen, but it's more of a best practice and helps you keep things organized. In this operational program, you keep all your nurture emails so that if you choose to use the same email (by email ID) in multiple streams or multiple programs, you guarantee that folks definitely won't get the same email twice.

 

Step 3:  Set up your Smart Campaign to send the Nurture email from the Content Program

 

Hold up. What!?! Yup, that's right. Create a Smart Campaign (i.e. "Nurture Send") in the Content Program, the same program where the members you want to exclude from the email reside. This is the part where Marketo logic just got flipped on its head. Just stay with me. The Smart List must be Member of Engagement Program = true for your nurture program. Add another filter, Member of Program = false, for Content - Webinar Cool Swag. In the flow step, use Send Email and choose your nurture email from the Nurture Library. You do not have to turn anything on. You do not have to schedule anything.

 

[Screenshot: the Nurture Send smart campaign's smart list]

Go to the nurture stream where you want this email to go out, click Add, select Program, select the campaign "Nurture Send", and voila, that's it.

[Screenshot: adding the Nurture Send campaign to the nurture stream]

What will happen is that anyone who is part of the Content - Webinar Cool Swag program will not receive Nurture Email 1 offering this very cool webinar, but will get the next email in the stream, so all is good.

 

You may be tempted to drag other filters into the smart campaign smart list. Resist the urge. They will not work. If you use other filters, such as member of list not in XYZ (and these folks are not members of the Content - Webinar Cool Swag program), the people you want to exclude will be excluded from the nurture email, BUT they will not get another email. They will be stuck in email nurture purgatory. You have to turn the cadence off and then back on for them to leave purgatory. There have been lots of articles written on this.

 

Step 4: Test the Nurture Program

 

[Screenshot: the engagement program's test cadence]

 

I mean really test it, not just with the test cadence (shown above), which just sends out an email like a sample send. Here's a way that I came up with to quickly test whether the right people are getting the right email and being excluded from the right nurtures, and you don't have to wait for Marketo to send out a cast. The shortest interval the program will cast at is 24 hours, and I don't know about you, but I don't have that kind of time to just sit and wait around.

 

Say for example you have 4 nurture emails. 

Create a test list of leads that are new to the database.  This ensures that there aren’t any gremlins that are going to mess up your testing.

 

[Screenshot: the five test leads]

 

I have 5 test leads.  The first will proceed as normal through the flows of all 4 emails.

For each of the others, I will add the lead to one content program so it will be excluded from that email.

So Jess1 will be in the content program for Nurture Email 1 so it will not get Email 1

Jess 2 will be added as a member of the content program for Nurture Email 2 so it will not get Email 2, and so forth. The chart delineates which email will be sent to whom and in what order.

 

Upload these 5 test leads.

Add them to the appropriate content program.

Add all 5 leads to the engagement program and nurture stream.

 

When you are ready to test, set the first cast to cast immediately. (Make sure you are not sending to actual real people.) Wait for the emails. Once you've received them, you can go back in and set a new time for the first cast (i.e. within the next 15 minutes), let her roll, and voila, you can test your nurtures pretty quickly.

 

And there you have it. 

We’ve all used Marketo or other automation tools to A/B test emails and landing pages. We do it because we want to optimize engagement through constant iterations, and we can use the results to give our content its best shot at provoking responses from our prospects.

 

But have you ever had the nagging feeling that your high school statistics teacher wouldn’t approve of your testing technique? You remember terms like sample size, variables, and p-value that were important parts of your hypothesis testing, but they all seem to be missing in Marketo’s tool today.

 

It turns out those principles we learned are still integral to executing a successful A/B test and preventing incorrect conclusions. Luckily you don’t need a stats degree to implement these principles and enhance the tests that your organization performs. So let’s dive into how to design and interpret a more meaningful Marketo A/B test.

 

I.  Designing your A/B Test

 

Your test design is the most important factor in determining whether you will get insightful information from your results. Over and over, we see the same common experimental design fallacies in tests run by marketers. Let’s take a look at what they are and how to overcome them.

 

Sample Size is Too Small

 

How large does my sample size really need to be? We get this question a lot and wish there was a definitive answer. But we would be lying to you if we said there was, because it depends on how big a difference you want to be able to detect.

 

Say you want to do a simple subject line A/B test and you send to 1000 recipients.

 

Subject Line A: [Webinar] How to make the most of your A/B tests

Subject Line B: [Webinar] Register Now: How to make the most of your A/B tests

 

Half get Subject A and half get Subject B. If 6% open A and 7.4% open B, can you draw the conclusion that having the CTA "Register Now" performed better? Is the difference between A and B significant enough to declare that B is "better"? We can't really answer that until we look at the p-value (how to get the p-value is covered later). For now, know that smaller p-values are better, and in this case the p-value = 0.376, which is not good. You might think, "Subject Line B still got a higher number of opens, so why don't we just go with that?" What the results are also saying is that the chance of getting the opposite result if you ran the test again is pretty high.

 

If we run the test with 10,000 recipients total, with the same percentages opening A and B respectively, the p-value is significantly smaller at 0.0051, which is excellent. (Scientific publication guidelines accept <0.05, and this is just marketing.) With the results from the second scenario, you can confidently conclude that adding a CTA makes a difference. The combination of your target size and the difference between your two test groups determines what conclusions you can draw from your results.
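If you want to sanity-check numbers like these yourself, here is a minimal Python sketch of a pooled two-proportion z-test, the standard math behind most online A/B significance calculators. The raw counts are just my translation of the open rates above (6% and 7.4%) into numbers of opens; they reproduce both p-values.

from statistics import NormalDist

def two_tail_p_value(successes_a, total_a, successes_b, total_b):
    # Pooled two-proportion z-test, two-tail.
    rate_a = successes_a / total_a
    rate_b = successes_b / total_b
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
    z = abs(rate_a - rate_b) / se
    return 2 * (1 - NormalDist().cdf(z))

# 1,000 recipients: 500 per group, 6% vs 7.4% opens
print(round(two_tail_p_value(30, 500, 37, 500), 3))      # ~0.376, inconclusive
# 10,000 recipients: 5,000 per group, same open rates
print(round(two_tail_p_value(300, 5000, 370, 5000), 4))  # ~0.0051, significant

If a quick check like this comes back inconclusive, the fix is usually a bigger sample (or a bigger expected difference), not a different calculator.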

 

Changing too many variables at once

 

As marketers we get excited about testing different variables. Sometimes we go overboard and test too many variables at once which leads to the failure to conclude anything. Let’s demonstrate with a landing page test.

 

Landing Page A: Blue button with CTA = Submit

Landing Page B: Green button with CTA = Download Now

 

In this case we have a question: Which button performs better? If Landing Page A has a significantly higher conversion rate than Landing Page B, what is my actionable intelligence moving forward? Unfortunately, we do not know if it is the color or the words on the button or both that was the contributing factor. (If you want to geek out this is called a confounded experiment.)

 

The proper way to carry this out is to break the testing out into two rounds.

 

Test #1

Landing Page A: Blue button with CTA = Submit

Landing Page B: Green button with CTA = Submit

 

Result: Landing Page A performed significantly better.

 

Test #2

Landing Page A: Blue button with CTA = Download Now

Landing Page B: Green Button with CTA = Download Now

 

Result: Landing Page A performed significantly better.

 

Conclusion: The LP with the blue button should be implemented; it performed better with both CTAs. (Whether "Submit" or "Download Now" is the better CTA would be a separate test, holding the button color constant.)

 

If you vary multiple factors at once in the two test groups, you will not be able to conclude which of the variables that you changed contributed to the performance of one group over the other. Setting a series of tests to vary one variable at a time allows you to truly understand the contribution of each.



Testing without a clear question or hypothesis

 

Have you ever carried out an A/B test and then asked yourself “What do I do with the result? How can I apply this to future campaigns?” This confusion often occurs because you designed your test without a clear hypothesis.

 

Here’s an example of a subject line test with 6 groups.

 

A: Learn from CMOs: Engagement Strategies

B: How to effectively market to your prospects

C: Top strategies for engaging your prospects

D: Top strategies for reaching your prospects

E: Web Personalization: Reach and engage your prospects

F: Drive greater engagement this holiday season

 

If subject line C was declared the winner with the greatest number of clicks (albeit by a slim margin), what have we learned to apply for the next time? Also, with this many variables you will need a very large sample size to declare this result to be significant.

 

A better strategy would be to break this out into a series of tests where we test a single variable at a time, each with a clearly defined question or hypothesis.

 

Question #1: Does having CMO in the subject line drive more opens?

Test#1:

Subj A: Learn from CMOs: Engagement Strategies for your Marketing

Subj B: Learn Engagement Strategies for your Marketing

 

Question #2: Does the word "reaching" or "engaging" drive more opens?

Test #2 (Assuming CMO won test #1):

Subj A: Learn from CMOs: Top strategies for reaching your prospects

Subj B: Learn from CMOs: Top strategies for engaging your prospects

 

Question #3: Does mentioning "holiday season" result in a greater open rate?

Test #3 (Assuming reaching won test #2):

Subj A: Learn from CMOs: Top strategies for reaching your prospects

Subj B: Learn from CMOs: Top strategies for reaching your prospects this holiday season

 

Remember that it’s called an A/B test, not an A/B/C/D/E/F test. Break down your question into specific parts that can be tested in a series of A/B tests, rather than trying to get an immediate answer by testing all at once. The next time you are deciding what individual elements of a subject line will maximize engagement, you can look back at the results of these tests.

 

Using the Email Program A/B test results to declare a “Winner”

 

In Marketo, it is really easy to set up an A/B test using the Email Program and see the results. Let's go back to our simple subject line test for registering for a webinar.

 

Subject Line A: [Webinar] How to make the most of your A/B tests

Subject Line B: [Webinar] Register Now: How to make the most of your A/B tests

 

Say you have 50,000 leads in your target list and you choose to test on 20% of your list and send the remainder the winner. That means 5,000 will get subject line A and 5,000 will get subject line B. The subject line that is declared the winner will be sent to the remaining 40,000. That sounds pretty straightforward. But (and you knew there was a but...) how is a winner determined, and which criterion should you choose?

 

Marketo lets you set the winning criteria and automatically send the winner a minimum of 4 hours later. You can choose from the following:

 

Opens

Clicks

Clicks to Open %

Engagement Score

Custom Conversion

 

In this case, if we choose opens, that means the difference in the subject line comes down to whether someone opened the email or not. Is this the behavior that matters most? In some cases it might be, but for a webinar we probably want to look at clicks instead. For example, we once saw that the email with the larger open rate also had fewer registrations and a 10 times higher unsubscribe rate. This led us to conclude that our message was not resonating with the target audience.

 

Setting the winning criteria to Clicks to Open % could also be problematic. If email A had 1000 opens and 40 clicks (4%) but email B had 200 opens and 20 clicks (10%), email B would be declared the winner even though the absolute number of people who clicked is lower.

 

What about setting the winning criteria to clicks? If Email A had 1000 clicks and Email B had 100 clicks, Email A would be declared the winner. But if the desired behavior is registering for the webinar and Email A had 10 people register for the webinar vs 25 for Email B, was email A really the “winner”?

 

So… which one should you pick?

 

Unfortunately, you won't know until you look at the data after the results come in. There is no way to predict it. We can think of a potential situation where any of the choices above would work or not work; it will just depend on what the data says. So if you are going to declare a winner in a Marketo A/B test, we prefer to do it manually.

 

"When I test, I typically test on 100% of my target list. If I have an A/B test with 2 groups, I set the slider bar to 100%. That way, 50% get A and 50% get B. I do this for a number of reasons, the biggest being that you won't know whether you had a large enough sample size until after the test. If you run 10 different tests on 1000 people and the difference is small, your results will all be inconclusive. I would rather run 1 test on 10,000 targets and get a really solid conclusion."

 

When you are designing a test, ask yourself, “What am I going to do with this information? What am I going to change?” Don’t test for the sake of testing. Whatever you decide to test, ensure that the question you are asking is going to be actionable. Now that you know how to design robust A/B tests, how do you interpret those results?



II.  Testing and Interpretation of Results

 

Setting up the test correctly is half the story; making sure that we are drawing the correct conclusions is the other half, and just as important.

Unfortunately, we cannot "declare a winner" by simply picking the test group with the most opens or clicks. When we run a test, we are saying that this small population of 1000 people is a representation of the whole universe, because it is not possible to test everyone in the whole world. We are extrapolating that how this sample population behaves will predict how the rest of the world would behave. But . . . we know that if we ran the test on 10 different sets of 1000 people, we would get slightly different results each time, so there is a chance, albeit small, that we picked a sample population that is such an outlier from the rest of the world that our results could lead us astray. This variation is what we account for by calculating a p-value.

 

Let’s go back to our subject line test.

 

If you sent a total of 1000 emails and 30 people opened email A and 31 people opened email B, could you say email B leads to more opens? The answer is no (based on the calculation of the p-value). Just because the opens of email B are greater than the opens of email A doesn't mean that if you hypothetically ran the test again you would get the same result. In this case it's about as good as flipping a coin; you could get either result.

 

The real question in A/B testing is: "Is the difference between A and B SIGNIFICANT enough for you to CONFIDENTLY draw the conclusion that B beats A when you run the test again and again?" You want to be able to say, based on the results of the test, that you believe B will most likely yield more than A if you were to run the same test in the future, and that therefore you should move forward with B. That's the goal.

 

To determine whether the difference is significant or not we look at the p-value of our test.  We are not going to go into how this value is calculated, but we will examine:

  1. How to use a very simple tool to obtain the p-value
  2. How to interpret the p-value
  3. What it means in plain English

 

You can use this website to input the results of your A/B test and generate a p-value. (This calculator was suggested by Phillip Wild in A/B Testing and Statistical Significance. Great suggestion.)

Let’s take a look at another example. 

You run an email A/B test split into two groups with different button colors for the call to action, green and blue. Your question is which button color is associated with more clicks.

Green: 93 clicks on 4,000 emails delivered

Blue: 68 clicks on 4,000 emails delivered

 

You take the number of clicks for each group and plug them into the calculator (http://www.measuringu.com/ab-calc.php) as the successes for each group. You enter 4000 into the total for each group.

 

The resulting two-tail p-value = .047. 

It is generally accepted that a p-value of <=0.05 is a significant result. The smaller the p-value, the better, and the more confident you can be in your results. We can conclude that there is a significantly higher number of clicks using green vs. blue. I am confident that if I were to run this experiment again and again, I would obtain the same result. Therefore, I would make the recommendation to change the CTA button color to green.
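If you'd rather not rely on the website, the same check takes a few lines of Python. This is a sketch of the same pooled two-proportion z-test as earlier, plugging in the green/blue counts above; it lands on roughly the 0.047 reported.

from statistics import NormalDist

green_clicks, blue_clicks, delivered = 93, 68, 4000
pooled = (green_clicks + blue_clicks) / (2 * delivered)            # pooled click rate
se = (pooled * (1 - pooled) * (2 / delivered)) ** 0.5              # standard error of the difference
z = abs(green_clicks - blue_clicks) / delivered / se
p_value = 2 * (1 - NormalDist().cdf(z))                            # two-tail
print(round(p_value, 3))                                           # ~0.047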


What does this p-value number mean in plain English?

A p-value of 0.047 means that there is a 4.7% chance that you could have obtained these results by random chance alone, i.e. that if you were to run this experiment again you might not see the same result.

 

What is so special about a p-value cut off of 0.05?

It is in fact an arbitrary cutoff, but it is the gold standard used by the scientific and medical communities in the most highly respected peer-reviewed publications.

If your p-value is slightly more than .05, say .052, don’t automatically write off the result as inconclusive. If you have the ability, test the same hypothesis again with a different or larger sample size. 

 

Note:  When using this tool, plug in your number of successes (opens, clicks etc.) and total (number of delivered emails) for each group. Note that when using click to open ratio, you will be using number of clicks as the success and number of opens as the total, NOT the number of emails sent.

 

This calculator gives us the p-value of the test, and we want to look at the two-tail value specifically. The two-tail p-value is the probability of seeing a difference this large between the two groups when there is actually no true difference. If the p-value is smaller than .05, we can be roughly 95% or more confident that there is a real difference between the two statistics (open rate, clicks) and act on that in our decisions for future marketing communications. If the p-value is above .05, then the results of the test are inconclusive. Using this value and interpretation allows us to stay consistent from test to test.

 

A key here is to not consider the test a failure if the results are inconclusive (p-value greater than .05). Knowing that changes to certain email content or timing likely won't have an effect on your audience is just as useful for future communication strategies. If you still feel strongly that the first experiment wasn't enough to capture the difference in your groups' responses, then replicate the experiment to add to the strength of your results.

 

Organizing your results for future use

 

"As a lab scientist, I was taught to keep meticulous records of every experiment that I did. My professor once said to me: if you got hit by a bus or abducted by aliens, I need to be able to reproduce and interpret what you did. As a marketer you probably don't need to be that detailed, but nonetheless it's nice to have a record of what you have done so you can refer back to it and, more importantly, share it with your colleagues. For testing marketing campaigns, I have kept a Google doc, an Excel sheet, or a collection of paper napkins (true story)."

 

Keep a record of what the test was, the results, and the conclusions. And don’t be afraid to share your results in a presentation once a quarter. You immediately increase the value of your hard work by sharing your findings with your organization.

 

Here’s an example of a test result entry:

 

Aug 4, 2015

Test day of the week

 

Target Audience: All leads with job title = Manager, Director, VP

10,000 Leads

 

Email A - Send on Wednesday 10 AM

# Sent = 5,000

# Opens = 624

# Clicks = 65

# of Unsubscribes = 68

 

Email B - Send on Sunday 10 AM

# Sent = 5,000

# Opens = 580

# Clicks = 94

# of Unsubscribes = 74

 

P-value (Opens) = 0.176

P-value (Clicks) = .020

P-value (# Unsubscribes) = .612

 

Conclusion: Emails sent on Sunday resulted in more clicks, but there was not a difference in opens or unsubscribes.
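A nice side effect of logging raw counts like this is that anyone can re-verify the p-values later. Here is a small sketch, using the same pooled two-proportion z-test as above, that recomputes the three p-values from this entry (5,000 sends per group):

from statistics import NormalDist

def two_tail_p_value(a, b, n=5000):
    # Pooled two-proportion z-test with n recipients per group.
    pooled = (a + b) / (2 * n)
    se = (pooled * (1 - pooled) * (2 / n)) ** 0.5
    z = abs(a - b) / n / se
    return 2 * (1 - NormalDist().cdf(z))

print(round(two_tail_p_value(624, 580), 3))  # opens         ~0.176
print(round(two_tail_p_value(65, 94), 3))    # clicks        ~0.020
print(round(two_tail_p_value(68, 74), 3))    # unsubscribes  ~0.612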

 

If you clearly document and organize your test results, you’ll soon have a customer engagement reference guide that’s unique to your organization.  And if you’ve designed your experiments as advised above, you’ll know that the conclusions drawn are based on sound statistical analyses of your data. Put those “fire and forget” Marketo A/B tests to rest and you’ll make your way towards optimal customer engagement.


What is your experience with Marketo’s A/B testing? Have you found any results that are interesting or unexpected?  Feel free to share your experiences with testing.

 

I'd like to thank Nate Hall for co-authoring and editing this blog post. 

Are your Marketo First Touch and Multi-Touch reports lying to you? The answer depends on what you did, or didn't do, weeks or months ago when you set up and ran your marketing programs. Getting Marketo First Touch and Multi-Touch attribution right depends on getting the right values in these Marketo native fields:

 

  • Acquisition Program
  • Acquisition Date
  • Success (status within the program)
  • Success Date

 

I see many Marketo users discover that sins of the past – setting up and running programs incorrectly – come back to bite them when it’s reporting time. The truth is, even the most diligent Marketo user will now and then miss setting up one or more of these fields correctly to get the right value. So, it’s important that you know how to restate the data when you need to get your reports dialed. 

 

First Touch attribution helps you answer the question of which programs brought new names into the database that directly impacted pipeline and revenue. Multi-Touch attribution addresses which programs influenced and played a role in generating pipeline and revenue.

 

Let's look at how these fields impact attribution and how to restate the data. 

 

FT Attribution - Acquisition Program and Acquisition Date

FT credit is given based on the acquisition program. As a marketer, I want to get all the credit I rightfully deserve, so in order to get FT attribution, all records should have an acquisition program. (Note: for people that are created by sales, set their acquisition program to a specific sales-generated program and make that program operational.) This will make it easier to identify any gaps.

 

Setting the acquisition program isn't enough.  The date of acquisition matters and will affect whether you get FT pipeline credit. Therefore, in some cases you will also need to restate the acquisition date. 

 

Use Case #1

The person was given an acquisition program upon entry into the database.  However, it was not the correct acquisition program. 

 

Fix: 

Change the acquisition program.  The acquisition date does not have to be changed because the date is not linked to the specific program.  

 

Use Case #2

The person never had an acquisition program and entered your database sometime in the past. 

 

Fix:

Set the acquisition program and restate the acquisition date. If you set the person's acquisition program today, the acquisition date will also be set to today. For an accurate picture of program influence on FT pipeline for historical data, you will also have to restate the acquisition date as best you can; with a little bit of detective work, and depending on your record keeping in the past, you will be able to. If you are in a smart list and using a single flow action, change the acquisition date first, then change the acquisition program. The reason is that, presumably, one of your smart list criteria is acquisition program is empty, and once you assign people to an acquisition program, they no longer match that filter. I know this sounds obvious, but there have been many times where I have said "oh S@!t," and now I have to go and find them to change the date.

 

 

MT Attribution - Success and Success Date 

After you have assigned people the correct acquisition program, you should go back and check to make sure that they are in the correct progression status. Whether the progression status is a success step will impact whether the program gets MT credit for pipeline and revenue. The same goes for the success date: when that person reached success in relation to the opportunity created date will impact whether that program gets MT pipeline credit.
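As a rough mental model (my simplification for the pipeline case, not the full rule set from the attribution workbook), you can think of the MT pipeline check for a given program membership as a date comparison:

from datetime import date

def gets_mt_pipeline_credit(success_date, opp_created_date):
    # Simplified illustration: the program's success has to land before the
    # opportunity is created for that program to share in MT pipeline credit.
    return success_date is not None and success_date <= opp_created_date

print(gets_mt_pipeline_credit(date(2012, 12, 31), date(2013, 6, 1)))  # True
print(gets_mt_pipeline_credit(date(2013, 8, 1), date(2013, 6, 1)))    # False

This is why backfilling a plausible success date matters: a blank or too-late success date quietly drops the program out of MT pipeline credit.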

 

If placing the person in the acquisition program automatically puts them at a success step, then the success date is also set for the day that success step was reached (probably today). If you are backfilling historical data, you will need to also change the success date.  Unlike the acquisition date, changing the success date can only happen in a smart campaign since it has to be associated with a specific program.  You can not change the success date using a single flow action in a smart list.  Here is a quick chart showing what fields can be changed via which method.

[Screenshot: chart of which fields can be changed via a single flow action vs. a smart campaign]

 

 

 

Use Case #1

You need to change someone's progression status to a success status, you need to change their success date, and they are not in the program already. You will encounter this scenario if you are backfilling programs (e.g. tradeshows or events) that happened in the past.

 

Fix: 

You will need to set up two smart campaigns. The first is a batch campaign that sets the acquisition program, acquisition date, and program status. If this is a success step, then you will need a second smart campaign that is requested to set the success date. Because setting the success date can only happen after a person is a member of the program, use a Request Campaign flow step to call it. You cannot do this in a single smart campaign with multiple flow steps; I tried, even with wait steps, and it didn't work.

 

Use Case #2

You know a group of people were acquired or reached success by filling out a specific form, but someone forgot to put them in a program for the past year. What acquisition date and/or success date should you use for this group of people, and how do you accomplish this in the most efficient way possible?

 

Fix:

First you need to decide how granular you want the data to be, and that will change depending on how far back the specific activity happened. Meaning, do I care that Joe Smith filled out a form in 2012, or in April 2012, or on April 7, 2012? Most likely, if an opportunity was created from Joe and it happened in 2013, MT credit will be assigned as long as the success date falls sometime in 2012, before the opportunity was created. Plus, if it's not 100% accurate, I'm OK with that since it is so far in the past. So for any filled-out-form activity that happened in 2012, I am OK with assigning the success date to be Dec 31, 2012. At least I can compare year to year.

 

As you get to more recent activity, you might want a more granular view. So for activity in the past 12-18 months, I might want to state successes in the month they happened. For any activity deemed a success for a particular program, I will set the success date to the last day of that month. For example, say I want to restate people who filled out a form to download content X. For people who performed this activity between 1/1/2016 and 1/31/2016, I will set the success date to 1/31/2016. In a single flow step, choose the change success flow action and use Add Choice. You can either go by created date, if you know for certain that the date will correlate with the success activity, or create a smart list for each of the specific actions happening in each time frame (i.e. Filled Out Form Jan, Filled Out Form Feb, etc.) and use Add Choice in the flow step: if member of smart list Filled Out Form Jan, set the success date to 1/31/2016.

[Screenshots: flow step using Add Choice to set the success date]

 

 

Summary:

When restating data, make sure you have accounted for what goes into each of these four fields:

 

Acquisition Program

Acquisition Date

Success (status within the program)

Success Date

 

. . . and your FT and MT attribution reports will make you look like a hero. 

 

Since you have spent the time restating this data, you probably never want to go through that again, so how do you set up programs to ensure that this data is captured the right way in the first place? Well, stay tuned for Part II. In the meantime, if you have any questions, just shoot me a line.