record error and execution start/end time in AD result; handle except… #59
Would we have a double lock release, since your code in the try block can also release the lock? How about adding an isLockReleasedOrExpired check before the release?
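A minimal sketch of the suggested guard, using stand-in types rather than the real LockService/LockModel SPI (all names and methods here are assumptions, not the plugin's actual code):

```java
final class LockReleaseGuard {

    // Stand-in types for the job-scheduler lock service and lock model;
    // names and methods are illustrative only.
    interface JobLock {
        boolean isReleased();
        boolean isExpired();
    }

    interface JobLockService {
        void release(JobLock lock);
    }

    // Release the lock only if it is still held and unexpired, so an
    // earlier release inside the try block cannot cause a double release.
    static void releaseIfStillHeld(JobLockService lockService, JobLock lock) {
        if (lock == null || lock.isReleased() || lock.isExpired()) {
            return;
        }
        lockService.release(lock);
    }
}
```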
Rechecked and found no double lock release. LockService already handles the exception. The lock expire time equals the detection interval, so the lock will not expire during the job run.
I was confused at the beginning when looking at this line, wondering when retryTimes is 0. Then I realized that every new AD job run sets retryTimes to 0, so retryTimes is more of a signal to increment the count by 1 than the real retry count of the detector. Is my understanding correct?
I suggest the following to simplify (sketched below):
First, detectorEndRunExceptionCount for a detector id is removed from the map whenever we have a successful run or an exception that is not EndRunException.
Second, every time an EndRunException is caught, increment the count, inserting the mapping if the detector id is not present. Then check whether the count has reached the threshold, and stop the detector if it has.
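A rough sketch of that bookkeeping; the map name follows the discussion, but the class, threshold, and method names are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative per-detector counter for EndRunException occurrences.
final class EndRunExceptionTracker {

    private static final int MAX_END_RUN_EXCEPTIONS = 3; // assumed threshold

    private final Map<String, Integer> detectorEndRunExceptionCount = new ConcurrentHashMap<>();

    // Called after a successful run, or after an exception that is not EndRunException.
    void clear(String detectorId) {
        detectorEndRunExceptionCount.remove(detectorId);
    }

    // Called whenever an EndRunException is caught. Inserts the mapping if absent,
    // increments the count, and returns true once the threshold is reached so the
    // caller can stop the detector job.
    boolean incrementAndCheck(String detectorId) {
        int count = detectorEndRunExceptionCount.merge(detectorId, 1, Integer::sum);
        return count >= MAX_END_RUN_EXCEPTIONS;
    }
}
```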
Have removed it from the map for a successful run. Will remove it for the non-EndRunException case; this comment seems to duplicate another one.
We can't do this as we have backoff retry now. If we did, then within the same AD job run, multiple backoff retries would each increase detectorEndRunExceptionCount and might terminate the current job immediately once the count reaches the limit.
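To make the conflict concrete, a rough sketch reusing the hypothetical EndRunExceptionTracker above (the detector interface and numbers are likewise made up): with per-exception counting, several backoff attempts inside one job run can reach the stop threshold on their own.

```java
// Illustrative only: every caught EndRunException increments the counter, so a
// single job run with enough backoff retries reaches the stop threshold by itself.
final class BackoffCountingExample {

    static class EndRunException extends RuntimeException {}

    interface Detector {
        void runOnce(); // may throw EndRunException with endNow = false
    }

    // Returns false when the detector job would be stopped.
    static boolean singleJobRun(Detector detector, EndRunExceptionTracker tracker,
                                String detectorId, int maxBackoffRetries) {
        for (int attempt = 0; attempt <= maxBackoffRetries; attempt++) {
            try {
                detector.runOnce();
                tracker.clear(detectorId);   // successful run resets the count
                return true;
            } catch (EndRunException e) {
                // Counting per caught exception: each backoff attempt adds 1,
                // so one run with 3 failed attempts hits a threshold of 3.
                if (tracker.incrementAndCheck(detectorId)) {
                    return false;            // detector job would be stopped here
                }
            }
        }
        return false;                        // run failed, but job keeps running
    }
}
```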
After much thought, I am still inclined to remove the backoff retry, because we don't have a clear use case where a quick retry is needed and the backoff retry brings complications. These are the three cases where EndRunException is thrown with endNow being false:
Two of the causes are related to cold start. Let's discuss them one by one:
Some complications of retrying quickly:
Thanks for your analysis. Here the backoff retry is to resolve transient exceptions.
Cold start training data is not a transient exception. We need to build finer-granularity exceptions later to distinguish non-retryable from retryable exceptions. If we can't know which exceptions are transient and retryable in AnomalyResultTransportAction, I'm ok to remove the backoff retry now to avoid the performance issue. But that's a tradeoff: without retrying, a transient exception will cause the current job run to fail, and if there is an anomaly, the user will miss it and will not get an alerting notification. Sometimes missing an anomaly notification is not acceptable. For example, if the current detection interval is 1 hour and there should be an anomaly in the current interval, a transient exception may fail the current AD job, so no anomaly is found and the user never knows about it. Then we start the next AD job; maybe there is no anomaly in the next hour, and the user never knows something went wrong. In one word, this is a tradeoff between protecting our performance, user experience, and what we can do currently. So, can you help confirm whether we can know which exceptions are retryable in AnomalyResultTransportAction? If we can't, I will remove this backoff retry.
As we discussed offline, we can define some exceptions, like failing to get the RCF/threshold model result, as retryable exceptions. Such exceptions are transient and may be resolved by backoff retry. Will add a TODO now and we can change it later.
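A minimal sketch of that direction, assuming a hypothetical exception type with a retryable flag (the plugin's actual exception hierarchy may differ): the backoff path in AnomalyResultTransportAction would retry only when isRetryable() returns true, while cold-start failures such as "not enough training data" would be constructed as non-retryable.

```java
// Hypothetical exception carrying a retryable flag; names are illustrative only.
class AdJobException extends RuntimeException {

    private final boolean retryable;

    AdJobException(String message, boolean retryable) {
        super(message);
        this.retryable = retryable;
    }

    // A transient failure, e.g. failing to get the RCF/threshold model result,
    // would be built with retryable = true and picked up by the backoff retry.
    boolean isRetryable() {
        return retryable;
    }
}
```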