OWLS-89106 - Potential fix for pod startup issue in GBU CNE environment after node drain/repave operation #2398
Conversation
```java
@Override
public NextAction onSuccess(Packet packet, CallResponse callResponse) {
  MakeRightDomainOperation makeRightDomainOperation =
```
I am not sure it is always correct, in all cases, to reuse the existing MakeRightDomainOperation object. For example, if the original object has an EventData, we may not know whether the event was already generated in the previous attempt.
Thanks, Dongbo. That's a good point. In the latest integration test run, I'm seeing some new failures in ItKubernetesEvent and ItPodsRestart related to event generation. It looks like these failures are intermittent and timing-dependent. I'm not sure if I can create a new MakeRightDomainOperation object in this step. I'll continue looking into this tomorrow.
@doxiao, I'm not sure, but I wouldn't be surprised if there are some bugs in our flow. The intended design is that the operator can die at any point in the code, and a new operator can start and recover. Therefore, it's a bug if any of our code paths depend on in-memory state. Since updating the Domain status and generating Events is not transactional, there may be some gaps that are impossible to close, but we should, for instance, be able to start a roll, have the operator die and be restarted, and then have the new operator complete the roll and properly generate the rolling-completed event. I see these timeouts as analogous: if a wait times out and the make-right loop goes back to the "top", it ought to complete similarly.
Ryan, I agree that the operator's processing flow should not depend on in-memory state, and we could review the flow to make sure that is the case. But the issue here is a little different, I think: we are reusing a MakeRightDomainOperation object for a new execution, which I don't believe the operator did prior to this PR. EventData is just one piece of the data in the MakeRightDomainOperation object. We are probably better off creating a new MakeRightDomainOperation; the new code here already overrides some of the data in the existing object before calling execute().
I'm not quite sure what you are proposing. What Anil and I originally discussed was, on timeout, to cancel the current fiber and start a new fiber for a new make-right operation. He found that he could instead take the current Fiber back to the top step using this pattern, which sounds reasonable. Is there just some other state in the current make-right operation and/or Fiber that he needs to clear?
My suggestion is to create a new MakeRightDomainOperation, passing in the new DomainPresenceInfo, instead of reusing the current make-right domain operation. This suggestion does not change the logic or spirit of the current changes. I discussed it with Anil offline, and found out that apparently he had a problem creating a new make-right domain operation instance within PodWatcher's static context. We could add a static method somewhere for this; for example, we could refactor the Main class a little and add a static factory method there for PodWatcher to use to create a new make-right domain operation instance. What I like about using a new instance is that the code here would have full control of what it needs to do.

Alternatively, we could have a clear() method on MakeRightDomainOperation to start clean, if we know for sure there is no other fiber running that uses this make-right domain operation instance. Then we would have to keep the clear() method up to date whenever we add new variables in the future. I personally prefer the first option.
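As a rough illustration of the static-factory option discussed here, a minimal sketch (the class and field names below are stand-ins, not the operator's actual API; a fresh instance guarantees no stale state such as a leftover EventData):

```java
// Hypothetical sketch: a static factory that a watcher could call from a
// static context so every retry gets a brand-new operation object.
// All types here are simplified stand-ins for the operator's real classes.
class MakeRightFactory {

  /** Minimal stand-in for the operator's per-domain presence data. */
  static class DomainPresenceInfo {
    private final String domainUid;

    DomainPresenceInfo(String domainUid) {
      this.domainUid = domainUid;
    }

    String getDomainUid() {
      return domainUid;
    }
  }

  /** Minimal stand-in for a make-right operation: fresh state per instance. */
  static class MakeRightDomainOperation {
    private final DomainPresenceInfo liveInfo;
    private String eventData; // always null in a brand-new instance

    MakeRightDomainOperation(DomainPresenceInfo liveInfo) {
      this.liveInfo = liveInfo;
    }

    DomainPresenceInfo getLiveInfo() {
      return liveInfo;
    }

    String getEventData() {
      return eventData;
    }
  }

  /** The static factory: each caller gets a new operation with no stale state. */
  static MakeRightDomainOperation createMakeRightOperation(DomainPresenceInfo info) {
    return new MakeRightDomainOperation(info);
  }
}
```

The design advantage, as noted above, is that the calling code fully controls the new instance and nothing needs to be remembered when fields are added later.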
I have implemented the clear() method on MakeRightDomainOperation to reset its state before executing the new make-right operation in MakeRightDomainStep. Thanks.
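A minimal sketch of the clear() approach chosen here, assuming illustrative field names (the operator's actual fields differ): reset every piece of mutable state so a reused operation behaves like a new one.

```java
// Hypothetical sketch of a reusable make-right operation with a clear()
// method. The tradeoff mentioned in the review applies: clear() must be
// kept in sync whenever a new mutable field is added.
class ReusableMakeRight {
  private String eventData;       // illustrative stand-in for EventData
  private boolean explicitRecheck;
  private boolean deleting;

  ReusableMakeRight withEventData(String eventData) {
    this.eventData = eventData;
    return this;
  }

  ReusableMakeRight withExplicitRecheck() {
    this.explicitRecheck = true;
    return this;
  }

  /** Reset all mutable state before re-executing this operation. */
  void clear() {
    this.eventData = null;
    this.explicitRecheck = false;
    this.deleting = false;
  }

  String getEventData() {
    return eventData;
  }

  boolean isExplicitRecheck() {
    return explicitRecheck;
  }
}
```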
```java
      getWatchBackstopRecheckDelaySeconds(), TimeUnit.SECONDS);
} else {
  // Watch backstop recheck count is more than configured recheck count, proceed to make-right step.
  return doNext(new CallBuilder().readDomainAsync(info.getDomainUid(),
```
Here you reread the domain, but don't we also need to reread the pod and/or other services?
I was following the approach in handleModifiedDomain, where we call make-right with just the domain object received in the watch event:

```java
private void handleModifiedDomain(Domain domain) {
  LOGGER.fine(MessageKeys.WATCH_DOMAIN, domain.getDomainUid());
  createMakeRightOperation(new DomainPresenceInfo(domain))
      .interrupt()
      .withEventData(EventItem.DOMAIN_CHANGED, null)
      .execute();
}
```
```java
MakeRightDomainOperation makeRightDomainOperation =
    (MakeRightDomainOperation) packet.get(MAKE_RIGHT_DOMAIN_OPERATION);
makeRightDomainOperation.setLiveInfo(new DomainPresenceInfo((Domain) callResponse.getResult()));
callback.fiber.terminate(null, packet);
```
We probably need to call removeCallback here.
I made the change to remove the callback here.
```diff
     .ifPresent(i -> i.setServerPodFromEvent(getPodLabel(pod, LabelConstants.SERVERNAME_LABEL), pod));
   }
-  if (isReady(callResponse.getResult())) {
+  if (isReady(callResponse.getResult()) || callback.didResume.get()) {
```
I don't understand why we need to check callback.didResume here. How would the operator get here again if the callback has been resumed already?
The callback.didResume check avoids a race condition. We periodically list the introspector job to check whether it is ready, and we also dispatch the callback when the watch event is received. In this scenario, the watch event notifications are flowing: after the introspector-job-completed watch notification is received, the callback is removed and the fiber is resumed. After this, the introspector job gets deleted, but the child fiber that is periodically listing the job never sees it as ready, since it has been deleted. Hence the child fiber never finishes and times out. The above check terminates the child fiber after the introspector job has been deleted.
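The resume-once-plus-check pattern described here can be sketched in isolation (names are illustrative; the operator's actual Callback class is richer):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the race fix: a watch callback and a periodic
// recheck both observe the same resource. Once the callback has resumed the
// fiber, the recheck must stop waiting, even if the resource (e.g. a deleted
// introspector job) will never again appear "ready".
class ResumeOnceCallback {
  private final AtomicBoolean didResume = new AtomicBoolean(false);

  /** Called from the watch path; resumes the fiber at most once. */
  boolean resume() {
    return didResume.compareAndSet(false, true);
  }

  boolean didResumeFiber() {
    return didResume.get();
  }

  /** Periodic recheck: finish if the resource is ready OR already resumed. */
  boolean shouldFinish(boolean resourceIsReady) {
    return resourceIsReady || didResumeFiber();
  }
}
```

The AtomicBoolean guarantees the watch path and the recheck path agree on whether the fiber was resumed, without additional locking.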
If I understand it correctly, the check for didResume (didResumeFiber now) is only needed for the job case. If so, we need to remove the check in the pod code path.
didResumeFiber will return true only when the fiber has been resumed, and the resource should be ready at that point. I think there's no harm in keeping the check, as it should avoid any potential race condition in the pod case as well. The integration tests are passing with the current set of changes: https://build.weblogick8s.org:8443/job/weblogic-kubernetes-operator-kind-new/5310/
```java
  }
}

if (isReady(callResponse.getResult()) || callback.didResumeFiber()) {
```
Do we need to reset the recheckCount here?
I moved the recheckCount variable from DomainPresenceInfo to the Callback object, since we create a separate callback instance for each resource. The WaitForReady step registers a new Callback instance in the resumeWhenReady method, and the previous callback is removed once the fiber is resumed. With this approach, there's no need to reset the recheckCount, since the previous callback instance will be garbage collected once the resource is ready.
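The per-callback counter idea can be sketched as follows (the class name and the recheck limit are illustrative assumptions, not the operator's actual values):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: each WaitForReady registration gets its own Callback
// with its own recheck counter, so nothing has to be reset when the old
// callback is discarded and garbage collected.
class RecheckCallback {
  // Assumed tuning value for illustration; the operator reads its limit
  // from configuration.
  private static final int MAX_RECHECKS = 60;

  private final AtomicInteger recheckCount = new AtomicInteger(0);

  /** Returns true while another backstop recheck should be scheduled. */
  boolean shouldRecheck() {
    return recheckCount.incrementAndGet() <= MAX_RECHECKS;
  }

  int getRecheckCount() {
    return recheckCount.get();
  }
}
```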
Looks good to me.
We need to add unit test cases to cover the timeout code path.
Added a new unit test.
```java
(callback, info, next) -> createMakeDomainRightStep(callback, info, next);

protected static Step createMakeDomainRightStep(WaitForReadyStep.Callback callback,
                                                DomainPresenceInfo info, Step next) {
```
The "next" parameter is not used.
LGTM
OWLS-89106 - In a GBU CNE environment with a large Kubernetes cluster, some pods are not started after a node drain operation. The event notifications for pod delete/start are not generated, and the fiber gets stuck in a suspended state as a result. This change interrupts the fiber in the WaitForPodReady step after a timeout interval (2 min) and executes a make-right domain operation.

The WaitForJobReady step logic is not changed, because the introspector job for the JRF domain is not fully idempotent. We also don't store the state of job pods in DomainPresenceInfo, so it doesn't need to be refreshed with an explicit relisting in case of a missed watch event.

This PR also contains a change to ItParameterizedDomain.testMultiClustersRollingRestart, as that test performs extra validation using domain events. With this change, the DomainProcessingCompleted event may not always be generated in time during the test validation. We cover domain event testing in ItKubernetesEvents, and this extra validation is not necessary.

The integration test run is clean with the current set of changes: https://build.weblogick8s.org:8443/job/weblogic-kubernetes-operator-kind-new/5310/
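The overall shape of the fix (wait for readiness with a backstop timeout, then fall back to a fresh make-right pass) can be sketched in isolation; everything below is a simplified stand-in for the operator's fiber-based flow, not its actual API:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of the backstop pattern: poll a readiness predicate,
// and if no readiness is observed within the timeout (e.g. a missed watch
// event), stop waiting and signal that make-right should re-run instead of
// staying suspended forever.
class WaitWithBackstop {

  /**
   * Polls readySupplier until it reports ready or timeoutMillis elapse.
   * Returns "READY" on success, "MAKE_RIGHT" when the backstop fires.
   */
  static String waitForReady(BooleanSupplier readySupplier,
                             long timeoutMillis, long pollMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      if (readySupplier.getAsBoolean()) {
        return "READY";
      }
      try {
        Thread.sleep(pollMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
      }
    }
    // Backstop fired: re-read the domain and re-run a make-right operation.
    return "MAKE_RIGHT";
  }
}
```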