bug? flake? TestWorkspaceController/add_a_shard_after_a_workspace_is_unschedulable,_expect_it_to_be_scheduled #2603
Comments
This appears to be a timing issue: the shard is deleted from etcd but is still present in the informer cache, so the workspace controller sees the shard as valid and marks the workspace as scheduled. @sttts Does that sound like a valid explanation of why this edge case might happen?
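For illustration, here is a minimal Go sketch of that kind of race using a plain client-go informer; ConfigMaps stand in for the kcp Shard objects, and the names and kubeconfig path are hypothetical, not taken from the kcp code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Start a shared informer; ConfigMaps stand in for the Shard objects here.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	lister := factory.Core().V1().ConfigMaps().Lister()
	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)

	// Delete the object via the API server, i.e. it is gone from etcd...
	_ = client.CoreV1().ConfigMaps("default").Delete(context.TODO(), "shard-1", metav1.DeleteOptions{})

	// ...but an immediate cache lookup can still succeed, because the watch
	// event carrying the deletion may not have been processed yet. A controller
	// keying its scheduling decision off this lookup would act on stale data.
	if cm, err := lister.ConfigMaps("default").Get("shard-1"); err == nil {
		fmt.Printf("stale cache hit: %q still appears to exist\n", cm.Name)
	} else if apierrors.IsNotFound(err) {
		fmt.Println("cache already observed the deletion")
	}
}
```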
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten
Rotten issues close after 30d of inactivity. /close
@kcp-ci-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/kcp-dev_kcp/2602/pull-ci-kcp-dev-kcp-main-e2e-shared/1613279921029779456
After all the shards are deleted, we create a workspace, which should be unschedulable, but somehow it gets scheduled and then initialized?
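For reference, the first half of that scenario roughly corresponds to an assertion of the following shape. This is a hypothetical sketch, not the real e2e test: getWorkspacePhase stands in for whatever client call the actual test uses to read the workspace's scheduling status.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// getWorkspacePhase is a hypothetical helper standing in for the client call
// the real test would use to read the workspace's scheduling status.
func getWorkspacePhase(ctx context.Context, name string) (string, error) {
	// A live read of the Workspace object would go here.
	return "Unschedulable", nil
}

func main() {
	ctx := context.Background()

	// With every shard deleted, the workspace is expected to stay unschedulable
	// until a shard is added back. The flake is that it gets scheduled (and then
	// initialized) before that happens.
	err := wait.PollUntilContextTimeout(ctx, time.Second, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			phase, err := getWorkspacePhase(ctx, "test-workspace")
			if err != nil {
				return false, err
			}
			if phase == "Scheduled" {
				return false, fmt.Errorf("workspace got scheduled although no shards exist")
			}
			return phase == "Unschedulable", nil
		})
	if err != nil {
		fmt.Println("unschedulable check failed:", err)
	}
}
```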