Yes, this is frustrating; it happens in many other operational programs and is usually referred to as an "order of operations" issue.
You have to build in strategic wait steps to solve this.
If you can share some details about what the values are and what you are trying to do, I am sure we can help.
Hi Darrell -
As I said in my initial post, I have a workaround and wasn't looking for help. Thanks anyway for the offer!
Denise, I 100% agree this is surprising behavior.
If we were designing an MA rules engine from scratch, surely we would make such a situation impossible. That might mean that the particular rules you've set up would also be impossible, but at least you couldn't create a non-deterministic ruleset (a ruleset that has a race condition under the hood so it means different things at different points in time).
To explain how something like this happens on a technical level...
Many high-volume systems are designed to be eventually consistent for performance reasons. This means that at any point in time, different "tiers" of the system (you can consider even the browser to be a tier in a web app) can see different versions of data, and some tiers can be slightly out of date (the definition of "slightly" is up to the designer!). The whole ecosystem is constantly converging on a single version of the data, but there's no guarantee that it will be there at any given moment. At 3:00 exactly, everything might be in sync, while 10 milliseconds later there could be 3 different versions, all in use by different tiers, and that isn't considered a broken system.
For a simple, vendor-specific example, the ubiquitous MySQL database has a feature -- loved by many, hated by just as many -- called INSERT DELAYED, a non-standard extension to the standard INSERT. When you use INSERT DELAYED, the server tells the client app that a new row has been saved and returns control instantly, which is an incredible apparent performance boost. But the actual insert is performed asynchronously -- scheduled for some time in the future, during idle time -- and may not be completed when the client app quickly re-queries the server. Or, if there are other problems behind the scenes, it may never be completed at all. The client can work with the data as if it's in the db (think of this as the trigger in your example) but can't reliably query the db for it (think of the filter).

So people who hate DELAYED say it violates the semantics of a database insert by implying something is done when it hasn't even begun. The approvers say, "So what? It usually finishes within 10 seconds under our current load, and we tell people not to expect up-to-date data immediately." You can see this difference of opinion is not easily resolved.
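To make the INSERT DELAYED behavior concrete, here is a toy Python sketch (my own simulation, not MySQL itself): the "server" acknowledges the insert immediately, but a background worker applies it to the table later, so an immediate re-query can miss the row.

```python
import queue
import threading
import time

# Toy model of INSERT DELAYED semantics: acknowledgment is instant,
# but the row is applied to the table asynchronously.
class DelayedInsertTable:
    def __init__(self, apply_delay=0.05):
        self.rows = []                    # rows actually applied
        self.pending = queue.Queue()      # acknowledged but not yet applied
        self.apply_delay = apply_delay
        threading.Thread(target=self._worker, daemon=True).start()

    def insert_delayed(self, row):
        self.pending.put(row)             # schedule for later...
        return "OK"                       # ...but tell the client it's done

    def select_all(self):
        return list(self.rows)            # queries only see applied rows

    def _worker(self):
        while True:
            row = self.pending.get()
            time.sleep(self.apply_delay)  # simulated "idle time" scheduling
            self.rows.append(row)

table = DelayedInsertTable()
print(table.insert_delayed({"id": 1}))    # "OK" right away (the "trigger" view)
print(table.select_all())                 # very likely [] -- the "filter" view
time.sleep(0.3)
print(table.select_all())                 # eventually converges to the row
```

The gap between the acknowledged state and the queryable state is exactly the trigger-vs-filter mismatch described above.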
The alternative is to require that every change be committed (fully saved to disk) at every tier before any tier is able to consider the update complete. Yet with large systems, this is usually impossible. Even when it is technically possible, it can kill performance. For a very pertinent example, if the data in a Marketo form post needed to take effect throughout all of Marketo (including email sends in progress) before the end user regained control of their browser, that would be a terrible UX.
None of this is to excuse Marketo framing Data Value Changes that are eventually consistent as if they're synchronous or instantaneously consistent. I'm just explaining how the underlying technical decisions come about, and if they're properly documented and UI options built around them, they can be fine.
I don't really agree with Darrell that you can build "strategic wait steps," because there is no wait that is guaranteed to be long enough. In the INSERT DELAYED example above, you can scale a back-end database to abide by an SLA like "inserts will be completed within 10 seconds of acknowledgment 99.999% of the time," but that leaves plenty of room for error in a busy system. Once updates are done asynchronously, you can't guess when/if they'll be completed successfully. (You might place a time limit on them finishing -- that is to say, either succeeding or failing -- with a server timeout, but not on guaranteed success.)
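The point about timeouts versus guarantees can be sketched in a few lines of Python (a generic polling helper I'm inventing for illustration, not anything Marketo exposes): you can bound how long you wait, but you can only turn "wait forever" into "succeed or give up by a known deadline," never into guaranteed success.

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll until predicate() is True or the deadline passes.

    Bounds the wait, but cannot guarantee the asynchronous update
    ever completes -- at the deadline we simply report failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline

# Hypothetical usage: poll a store for a row an async insert
# may or may not have applied yet.
store = {"id_1": "synced"}  # pretend the update landed in time
assert wait_for(lambda: store.get("id_1") == "synced", timeout=1.0)
```

A "strategic wait step" is effectively this loop with a hard-coded timeout, and the `return predicate()` fall-through is the case no amount of waiting can eliminate.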
What I would do if re-building the Marketo rules engine is ensure that the field value that caused the Trigger to fire is reused in the Filter, instead of re-querying the db. Basically, create a special case where uncommitted data is used to reduce user surprise.
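In pseudocode terms, that special case might look like this (my own sketch of a hypothetical engine, not Marketo's actual implementation): the Filter reads the value carried by the triggering event itself, and only falls back to the database for fields the event didn't touch.

```python
# Sketch of "reuse the trigger's value in the filter": the event
# carries the uncommitted change; the db record may be a stale replica.
def evaluate_filter(event, db_record, filter_field, expected):
    if filter_field in event["changed_fields"]:
        # Reuse the not-yet-committed value that fired the Trigger,
        # so Trigger and Filter always agree.
        value = event["changed_fields"][filter_field]
    else:
        value = db_record.get(filter_field)  # possibly stale read
    return value == expected

event = {"changed_fields": {"Status": "MQL"}}
db_record = {"Status": "Open"}  # replica hasn't converged yet
# The filter sees the trigger's value, not the stale db copy:
evaluate_filter(event, db_record, "Status", "MQL")  # True
```

With a plain re-query, the same filter would read "Open" from the stale record and fail, which is exactly the surprising behavior in this thread.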
Hi Sandy - Sorry it's taken me so long to answer. Thank you very much for your reply and the detailed explanation! If I could think of a succinct way to make this into an "Idea" I would. I find this very frustrating as it seems to make filters inherently unreliable. - Denise
My understanding is that the "new value" constraint in the Data Value Changes trigger is delivering consistent data.
If so, wouldn't the simplest solution be for Marketo to generate a Change Data Value activity on record creation as well?
This is a very common issue when coupling a "Person is Created" trigger with the DVC trigger (since DVC only works for people who already exist in the db): you must also include a DVC filter using the same constraints as the trigger. It's another reason many of us want to be upgraded to Munchkin v2, so that these replays take place once the lead is known and we don't need to rely on this common setup using "Person is Created" triggers.
Yes, it could! I just voted for the idea. Thank you!