Chaining log entries across multiple executions #370
-
Hi Jonathan, I wanted to reach out regarding a funky thing I encountered that maybe you know about. The idea is the following: a Platform Event goes through 3 steps by executing one thing, publishing itself again, doing the second thing, publishing itself again - you get the gist. What I would really like is for all three executions of the platform event to be condensed into one Log. Hope there's an easy out for that one!
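For concreteness, the self-republishing pattern described above might look roughly like this. This is only an illustrative sketch - the event name Process_Step__e and the field Step__c are hypothetical, not part of the actual org:

```apex
// Hypothetical sketch of a self-chaining platform event.
// Process_Step__e and Step__c are made-up names for illustration.
trigger ProcessStepTrigger on Process_Step__e (after insert) {
    for (Process_Step__e event : Trigger.new) {
        Logger.info('Handling step ' + event.Step__c);
        if (event.Step__c < 3) {
            // Re-publish the event so the next step runs in a new transaction -
            // which is why each step ends up in its own Log__c record
            EventBus.publish(new Process_Step__e(Step__c = event.Step__c + 1));
        }
    }
    Logger.saveLog();
}
```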
Replies: 3 comments 2 replies
-
Hi @dschibster - the parent transaction UUID works a little differently from how you're envisioning it; there is not a way to do exactly what you want. A Log__c record is meant to always reflect a single transaction in Salesforce, so when you have multiple transactions in a batch context (or queueable context), it's intentional behavior that multiple Log__c records are still generated - if the data was instead condensed into 1 Log__c record, there would be missing/inaccurate/misleading data on the condensed Log__c record, since a Log__c has data about a specific transaction. Instead of condensing all LogEntry__c records into 1 single Log__c, Nebula Logger will still create separate Log__c records per transaction, but you can relate them to each other by setting a parent log transaction ID, as in this batchable example:
public with sharing class Account_Batch_Logger_Example implements Database.Batchable<SObject>, Database.Stateful {
private String originalTransactionId;
public Database.QueryLocator start(Database.BatchableContext batchableContext) {
// Each batchable method runs in a separate transaction,
// so store the first transaction ID to later relate the other transactions
this.originalTransactionId = Logger.getTransactionId();
Logger.info('Starting Account_Batch_Logger_Example');
Logger.saveLog();
// Just as an example, query 100 accounts
return Database.getQueryLocator([SELECT Id, Name, OwnerId, Owner.Name, Type FROM Account LIMIT 100]);
}
public void execute(Database.BatchableContext batchableContext, List<Account> scope) {
// One-time call (per transaction) to set the parent log
Logger.fine('this.originalTransactionId==' + this.originalTransactionId);
Logger.setParentLogTransactionId(this.originalTransactionId);
for (Account account : scope) {
// TODO add your batch job's logic
// Then log the result
Logger.info('Processed an account record', account);
}
Logger.debug('Saving account records', scope);
update scope;
Logger.saveLog();
}
public void finish(Database.BatchableContext batchableContext) {
// The finish method runs in yet-another transaction, so set the parent log again
Logger.fine('this.originalTransactionId==' + this.originalTransactionId);
Logger.setParentLogTransactionId(this.originalTransactionId);
Logger.info('Finishing running Account_Batch_Logger_Example');
Logger.saveLog();
}
}

...this will generate 3 separate Log__c records (one for the start transaction, one for the execute transaction, and one for the finish transaction), all related to the original transaction's log. By using Database.Stateful, the originalTransactionId instance variable is preserved across those transactions, so each one can call Logger.setParentLogTransactionId() with the same value.
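Since the reply also mentions queueable contexts, here is a hedged sketch of the same parent-log technique applied to a chained queueable. The class and variable names are illustrative; only the Logger calls (getTransactionId, setParentLogTransactionId, saveLog) come from the example above:

```apex
// Illustrative sketch: relating chained queueable transactions to one parent log.
// Each enqueued job runs in its own transaction, so the original transaction ID
// is carried along through the constructor.
public with sharing class Chained_Queueable_Logger_Example implements Queueable {
    private String originalTransactionId;

    public Chained_Queueable_Logger_Example(String originalTransactionId) {
        this.originalTransactionId = originalTransactionId;
    }

    public void execute(QueueableContext context) {
        // Relate this transaction's log to the log of the original transaction
        Logger.setParentLogTransactionId(this.originalTransactionId);
        Logger.info('Running a chained queueable job');
        // TODO add the job's logic, then optionally chain another job:
        // System.enqueueJob(new Chained_Queueable_Logger_Example(this.originalTransactionId));
        Logger.saveLog();
    }
}
```

The first job in the chain would be started from the originating transaction, e.g. `System.enqueueJob(new Chained_Queueable_Logger_Example(Logger.getTransactionId()));`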
Hope this helps, but please let me know if you have any follow-up questions!
-
I'm definitely seeing it now. Thanks for the clarification - I just didn't look thoroughly enough, but this already helps a lot. What I was thinking about, instead of one parent log and several child logs, was more of a "Log Group", so to say. I see the architectural vision behind having each transaction condensed into one log; however, in my case the logs don't really have a hierarchy - they're more of a sequence. Envision this: going to the parent log requires manual searching in my case. The potential benefit I see is that it could help get things in order outside of a batch job as well. I do get that this is quite a niche requirement, so I'm not asking you to implement this specific thing, but now I know where to look when I want to see one process in general. You could theoretically "treat" the Aura controller log as the one that's the parent for my Platform Events, just not the way I originally envisioned it. :)
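The "treat the Aura controller log as the parent" idea could be sketched like this. This is a hypothetical illustration: it assumes a custom text field Parent_Transaction_Id__c on the event (and the event name Process_Step__e), neither of which is confirmed by the thread - only the Logger calls are from Nebula Logger's API as shown above:

```apex
// Hypothetical: carry the originating (e.g. Aura controller) transaction ID
// on the platform event itself, so every chained execution can relate its log
// back to the originating transaction's log.
//
// In the publishing transaction (e.g. the Aura controller):
// EventBus.publish(new Process_Step__e(Parent_Transaction_Id__c = Logger.getTransactionId()));

trigger ProcessStepChainTrigger on Process_Step__e (after insert) {
    // All events in this transaction share one log, so set the parent once
    Logger.setParentLogTransactionId(Trigger.new[0].Parent_Transaction_Id__c);
    for (Process_Step__e event : Trigger.new) {
        Logger.info('Processing a chained platform event');
    }
    Logger.saveLog();
}
```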
-
Hi @jongpie. Sorry for not following up earlier. Scenario-grouped logging should help me here! I'm probably going to simply timestamp my scenario names to keep them unique, and that way I have a pseudo-group that I can look at whenever needed. I did need to update my version of Nebula Logger first, but the newest update did the trick there!
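The timestamped-scenario idea might look like the following sketch. It assumes a Nebula Logger version recent enough to include Logger.setScenario(); the scenario name format is just an example:

```apex
// Illustrative sketch: tag all logs in this chained process with a unique,
// timestamped scenario name so they form a queryable pseudo-group.
String scenarioName = 'Platform Event Chain - ' + System.now().format('yyyy-MM-dd HH:mm:ss');
Logger.setScenario(scenarioName);
Logger.info('First step of the chained process');
Logger.saveLog();
```

Each later execution in the chain would need to be handed the same scenario name (e.g. on the event payload) to land in the same pseudo-group.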