Fixed the problem where pump would get stuck when local PDs are down #4377
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Codecov Report
@@ Coverage Diff @@
## master #4377 +/- ##
==========================================
+ Coverage 62.64% 66.22% +3.58%
==========================================
Files 184 188 +4
Lines 19575 21969 +2394
==========================================
+ Hits 12263 14550 +2287
- Misses 6166 6186 +20
- Partials 1146 1233 +87
@@ -401,8 +401,6 @@ var _ = ginkgo.Describe("[Across Kubernetes]", func() {
tc1 := GetTCForAcrossKubernetes(ns1, tcName1, version, clusterDomain, nil)
tc2 := GetTCForAcrossKubernetes(ns2, tcName2, version, clusterDomain, tc1)
tc3 := GetTCForAcrossKubernetes(ns3, tcName3, version, clusterDomain, tc1)
// FIXME(jsut1900): remove this after #4361 get fixed.
Why skip TiKV in L526?
We have already failed TiKV before failing PD, though it should make no difference to restart a failed TiKV pod.
It's different: the first part checks that Pods can restart successfully after all TiKV instances go down, and the second part checks that Pods can restart successfully after all PD instances go down.
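A rough Ginkgo-style sketch of this two-part check, assuming hypothetical helpers (failComponentPods, waitForPodsReady) rather than the operator's actual e2e utilities:

```go
// Hypothetical sketch only: the helpers below are illustrative stubs, not
// the real tidb-operator e2e test utilities.
package e2e

import "github.com/onsi/ginkgo"

// failComponentPods deletes every pod of the given component (stub).
func failComponentPods(component string) { /* delete all pods of the component */ }

// waitForPodsReady blocks until the component's pods are Running again (stub).
func waitForPodsReady(component string) { /* poll pod status */ }

var _ = ginkgo.Describe("[Failover]", func() {
	ginkgo.It("restarts Pods after all TiKV and then all PD are down", func() {
		// Part 1: take down every TiKV pod, then verify they restart.
		failComponentPods("tikv")
		waitForPodsReady("tikv")

		// Part 2: take down every PD pod, then verify they restart.
		failComponentPods("pd")
		waitForPodsReady("pd")
	})
})
```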
addressed in #4382
/merge
This pull request has been accepted and is ready to merge. Commit hash: 13bde1b
@just1900: Your PR was out of date, I have automatically updated it for you. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
/test pull-e2e-kind-br
/test pull-e2e-kind-across-kubernetes
/test pull-e2e-kind
/test pull-e2e-kind-basic
/test pull-e2e-kind-across-kubernetes
/test pull-e2e-kind-br
/run-all-tests
/test pull-e2e-kind-basic
/test pull-e2e-kind
/test pull-e2e-kind-br
/test pull-e2e-kind-across-kubernetes
What problem does this PR solve?
Closes #4361
What is changed and how does it work?
For the two problems mentioned in #4361:
1. Add all peer members to the endpoints when initializing the etcd client.
2. Instead of adding a timeout to every context in the pump client, make clientv3.New() return an error when the underlying endpoints are not available (see etcd-io/etcd#9877: clientv3.New() won't return an error when no endpoint is available), so that subsequent client calls no longer get stuck indefinitely. A hedged sketch of both changes is shown below.
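A minimal sketch of what these two changes could look like, assuming the pump client builds its etcd client from the PD members' client URLs; the function name newEtcdClient and the variable peerURLs are illustrative, and the blocking-dial workaround follows the approach discussed in etcd-io/etcd#9877 rather than quoting this PR's actual code.

```go
package pumpclient

import (
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
	"google.golang.org/grpc"
)

// newEtcdClient is an illustrative sketch, not the code merged in this PR.
// peerURLs holds the client URLs of every PD peer member, so that one
// unreachable PD does not leave the client without a usable endpoint.
func newEtcdClient(peerURLs []string) (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		// Change 1: register all peer members as endpoints instead of only
		// the local PD.
		Endpoints: peerURLs,
		// Change 2: block the initial dial with a timeout so clientv3.New
		// returns an error when no endpoint is reachable (etcd-io/etcd#9877),
		// instead of letting later client calls hang indefinitely.
		DialTimeout: 5 * time.Second,
		DialOptions: []grpc.DialOption{grpc.WithBlock()},
	})
}
```

With a blocking dial, callers fail fast at construction time and can retry, rather than blocking on the first request against an unreachable cluster.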
Code changes
Tests
Side effects
Related changes
Release Notes
Please refer to Release Notes Language Style Guide before writing the release note.