I have a rather complex Velocity script that handles several different pieces of information; I've broken it into smaller, individual tokens, each of which calculates a value and returns the output in a slightly different way. Otherwise, the script inside each token is identical.
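Each broken-out token looks roughly like this (a minimal sketch only; the AnnualRevenue field, the 0.1 multiplier, and the formatting are placeholders, not my real logic):

    ## one broken-out token: calculate a value, then print it
    ## (hypothetical field and multiplier, for illustration only)
    #set( $revenue = $convert.toNumber($lead.AnnualRevenue) )
    #if( $revenue )
    ## format the derived value for display in the email
    ${number.format("#,##0", $math.mul($revenue, 0.1))}
    #else
    n/a
    #end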
Now, predictably, the more of these tokens I put into a given email, the longer the email takes to send (which I'm fine with), but past a certain number of tokens, all of the token values fail: they simply stop processing and spit out the raw variable references (e.g. a literal ${myValue}) rather than the calculated values. When I group everything back into one mega-token, it's fine again and all variables display their final values correctly. If I use just a handful of the broken-out tokens, they also display their final values correctly. It's only when I try to use all the individual tokens at once that I hit this issue.
All of this is to say: is there a maximum processing time allowed for email scripts, has anyone encountered this before, and is it possible to get the timeout value extended in certain instances?
Is that number of tokens... 40?
Not even that many! I'm talking maybe a dozen. Interestingly enough, though, as the day progresses for this client, test emails that render those tokens take longer and longer to deliver (to the point where some time out), and on their actual send, a couple of emails out of a large batch outright failed to render correctly.
It's kind of concerning at this point if load has this much sway.
Sounds more like a memory leak than load per se (just guessing based on symptoms). I can run a tight loop of 1,000,000 iterations and still get tokens rendered in under 1s.
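Something like this, for reference (a sketch of that kind of stress test; note that some Velocity environments cap loop counts via the directive.foreach.maxloops setting):

    ## tight loop doing trivial arithmetic work
    #set( $total = 0 )
    #foreach( $i in [1..1000000] )
    #set( $total = $total + $i )
    #end
    Total: ${total}

If that renders in well under a second but your dozen tokens don't, raw processing time probably isn't the bottleneck.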
I'd like to work on this offline with you -- hit me up.