So you are saying that each smart list is basically a query result that gets refreshed every time its trigger fires, and that until all query results have been delivered (e.g. First Name changes, so every smart list and process using that as a trigger is re-queried and the results returned), all other processes are on hold?
I'm sort of saying that, but I don't want you to get the idea that triggers block on all prospective or pending updates. Some things you might think of as an update from a user-facing perspective (like from a form post or list upload) aren't committed to the database for some time. Only after they're fully committed can there then be a question of whether a filter will see them or not.
Yes, caching heuristics and intermediate tiers in many systems may store and reuse an entire result set for a certain period; "until the underlying table(s) are known to have been updated" is the lossless approach.
I take it Marketo operates like this most of the time with its smart lists right?
Let's just say that for your particular case a caching layer will never get in the way.
So Marketo segments DO generate a field on the records that stores membership data... I don't understand this paragraph too well.
Correct. This flattened field is accessible in Velocity, for example.
But you only have 20 Segmentations in your whole instance. That's very different from the infinite number of Smart Lists that can be created -- and modified -- throughout your database.
Sanford, thank you very much for your lessons
I feel like these discussions are the ones that generate a deeper understanding of the tool.
I appreciate it.
Ok, so my takeaway from your answer is that it's unreliable. Same as Ronan, you are saying that if the data is already there when the trigger fires, it's reliable.
Otherwise, you can't be sure it's going to work 100% of the time.
You haven't said where you're getting the values from.
Again, if I post a form with a Last Name and the lead is created via that form, yes you *can* nest a Smart List that filters on Last Name. It will always work and isn't unreliable.
Here is another suggestion...
Never use "Person is created" as a trigger in your instance's smart campaigns. Always use "Added to List: Person Created" instead.
Create the list "Person Created" via a smart campaign with the trigger "Person is created" (only this once, throughout your whole instance).
Perhaps this would add enough delay to guarantee that the record has been created with at least all of the values it first came with, so by the time your campaign checks for "Added to List," all of the information should already be there.
My concern here would be whether the instance could become sluggish, or whether there is an actual limit to the number of people that can be in a list.
What do you think?
As above: depends on the source of the data value. If you do Change Data Value and then Add to List, then the trigger on Added to List can be filtered on the new data value with 100% reliability.
If you do Call Webhook and then Add to List, then try to trigger on Added to List and filter on the data value, you have a race condition and your setup is not reliable.
If you Call Webhook and trigger on Data Value Changes, that's 100% reliable.
I never use Wait steps to anticipate data value changes. Not only is Wait clunky and itself unreliable, there's always another way.
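To make the ordering rules above concrete, here is a minimal sketch in Python. This is not Marketo's actual internals; the function names and the `record`/`pending` structures are invented for illustration. The point is that a synchronous flow step (Change Data Value) commits before the next step runs, while a webhook response lands asynchronously and may be committed only after the Added to List trigger has already evaluated its filter.

```python
record = {}      # the lead's committed field values
pending = []     # webhook responses not yet written back

def change_data_value(field, value):
    record[field] = value           # synchronous: committed before the next step

def call_webhook(field, value):
    pending.append((field, value))  # asynchronous: response written back later

def added_to_list_filter(field):
    return field in record          # the trigger filter sees only committed data

# Reliable: Change Data Value, then Add to List -> the filter sees the value.
change_data_value("score", 42)
reliable = added_to_list_filter("score")    # True

# Race: Call Webhook, then Add to List -> the trigger can fire before the
# webhook response is committed, so the filter misses the value.
record.clear()
call_webhook("score", 42)
racy = added_to_list_filter("score")        # False: value not yet committed
for field, value in pending:
    record[field] = value                   # the webhook write lands too late
```

Triggering on Data Value Changes instead is reliable in this model for the same reason: that trigger fires *because of* the commit, so it can never observe the pre-write state.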
Thanks for your answer!
So you suggest I break that into two campaigns that happen in sequence.
But doing that implies doubling the number of campaigns... perhaps not the best idea in terms of maintainability.
The other way to look at it is to remove the filters from the smart list, add a two-minute wait at the start of the flow, and use choices in your flow steps: if member of smart list, do nothing; default, send email.
Just keep in mind that, whatever the duration of the wait, in theory there are always situations where some of your leads will miss the filter: situations in which the database update is not quick enough to guarantee all the data has been posted to the underlying database. This means that if you are working on a heavily loaded database, you might want to extend the delay.
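The "wait, then choose" pattern above can be sketched as follows. Again this is only a conceptual simulation, not a Marketo API; `run_flow` and `matches_smart_list` are invented names standing in for the flow and the smart list you would otherwise have filtered on.

```python
import time

def run_flow(lead, matches_smart_list, wait_seconds=120):
    """No filter on the trigger: wait for pending updates to commit,
    then decide per lead with a flow-step choice."""
    time.sleep(wait_seconds)        # the two-minute wait at the start of the flow
    if matches_smart_list(lead):    # choice: if member of smart list, do nothing
        return "do nothing"
    return "send email"             # default choice
```

For example, `run_flow(lead, matches_smart_list=lambda l: True, wait_seconds=0)` returns `"do nothing"`, while a lead the list does not match falls through to `"send email"`.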
But that isn't true of this example. There's no race condition here and no need for an arbitrary wait.
Thanks for your suggestion. Yes, working within the flow steps seems like a 100% reliable method. I will definitely bear that in mind in general.
I am inclined to agree with Sanford though, wait steps are clunky and if you add enough in different places, your whole system will become vulnerable and hard to maintain.
Potentially the wait steps will interfere with each other and you will have a hard time figuring out why this or that is not working as expected.