Add testAndSet Command #2
Conversation
func TestAndSetHttpHandler(w http.ResponseWriter, req *http.Request) {
	key := req.URL.Path[len("/v1/testAndSet/"):]

	debug("[recv] POST http://%v/v1/testAndSet/%s", server.Name(), key)
Hrm, this bit of code seems to be copied for every handler. We may want to use something like Gorilla's logging handler:
http://www.gorillatoolkit.org/pkg/handlers
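For reference, a minimal sketch of what that could look like, assuming gorilla/handlers; the handler name and port here are illustrative, not the actual etcd code:

package main

import (
	"log"
	"net/http"
	"os"

	"github.com/gorilla/handlers"
)

// testAndSetHandler stands in for the real handler; the point is that
// it no longer needs its own per-handler debug-logging line.
func testAndSetHandler(w http.ResponseWriter, req *http.Request) {
	// ... handler body ...
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/testAndSet/", testAndSetHandler)
	// LoggingHandler writes an Apache-style access-log entry for every
	// request, replacing the repeated debug() calls in each handler.
	log.Fatal(http.ListenAndServe(":4001", handlers.LoggingHandler(os.Stdout, mux)))
}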
I was using mux before, but found a problem it couldn't deal with.
I will clean up the code after a while.
We need both HTTP and HTTPS support; I am going to abstract that layer out.
What was the problem with mux? We are using it elsewhere, so it would be good to know.
Since our keys can contain '/', we need to handle paths like http://127.0.0.1:4001/keys/foo/foo/foo under the /keys route.
I checked mux at the time and thought doing it myself would be easier; a sketch of that approach follows below.
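A minimal sketch of that hand-rolled routing, with illustrative names: strip the route prefix yourself so the rest of the path, slashes and all, becomes the key.

package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// keysHandler extracts everything after the route prefix as the key,
// slashes included, e.g. "/keys/foo/foo/foo" -> "foo/foo/foo".
func keysHandler(w http.ResponseWriter, req *http.Request) {
	key := strings.TrimPrefix(req.URL.Path, "/keys/")
	fmt.Fprintf(w, "key=%s\n", key)
}

func main() {
	// net/http's ServeMux treats a pattern ending in "/" as a prefix
	// match, so arbitrarily nested paths reach the handler intact.
	http.HandleFunc("/keys/", keysHandler)
	log.Fatal(http.ListenAndServe(":4001", nil))
}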
LGTM, go for it Xiang.
fix registry.go: use the correct node name; self is already in the list
Clean up CONTRIBUTING.md and other bits of template-project
new etcd package
Current etcdserver just returns the error message "Internal Server Error" in the case of an internal error. This message isn't friendly and not very useful for diagnosis. In addition, etcdctl just reports "client: etcd cluster is unavailable or misconfigured" in such a case. This commit improves the error message: the body of an error response is now generated based on an error code of etcdserver, and the client constructs a friendlier error message from the response. Below is an example:

Before:
$ etcdctl member add infra6 http://127.0.0.1:32338
client: etcd cluster is unavailable or misconfigured

After:
$ etcdctl member add infra6 http://127.0.0.1:32338
error #0: client: etcd member http://127.0.0.1:12379: etcdserver: re-configuration failed due to not enough started members
error #1: client: etcd member http://127.0.0.1:22379: etcdserver: re-configuration failed due to not enough started members
error #2: client: etcd member http://127.0.0.1:32379: etcdserver: re-configuration failed due to not enough started members
…t-test integration: test leasing cached get with concurrent puts
This PR adds a check in compaction to ensure that keep is set to nil only if this is the latest compaction in the queue, and merges the keep from the previous ongoing compaction into the newly scheduled compaction, which allows HashKV to compute the hash from the previous keep.

Scenario #1: Suppose that HashKV() starts after compact(A) and compact(B) have been scheduled, after the completion of compact(A) but before the completion of compact(B).

Before this PR: Completion of compact(A) sets keep to nil to indicate that there are no ongoing compactions. This assumption causes HashKV to hash all mvcc keys, including keys that haven't yet been deleted by compact(B) up to a rev. Hence, calling HashKV again after compact(B) returns a different hash than during compact(B).

After this PR: Completion of compact(A) sets keep to nil only if the current compactRev == compactRev(compact(A)). Since calling compact(B) changes the current compactRev, current compactRev != compactRev(compact(A)). HashKV will therefore use the keep set during compact(B) instead of nil, which yields the correct hash during compact(B).

Scenario #2: Suppose that HashKV() starts after compact(A) and compact(B) have been scheduled, before the completion of both compact(A) and compact(B).

Before this PR: The start of compact(B) sets keep to compact(B)'s and overrides the keep from compact(A). Since compact(A) hasn't finished yet, HashKV needs to hash revisions from compact(A)'s keep, but it has been overridden. Hence HashKV misses revisions from compact(A)'s keep, which results in a hash difference.

After this PR: The start of compact(B) merges the keep from compact(B) and compact(A). Even though compact(A) hasn't finished, HashKV can still retrieve revisions from compact(A)'s keep. Hence HashKV doesn't miss any revisions and computes the hash correctly.
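To make the two rules concrete, here is a hedged sketch; the names are illustrative and do not match the actual mvcc code:

package main

import "fmt"

type revision struct{ main, sub int64 }

type store struct {
	compactRev int64                 // rev of the most recently scheduled compaction
	keep       map[revision]struct{} // revisions HashKV must still see
}

// scheduleCompact merges the previous keep into the new one, so HashKV
// never loses revisions from a compaction that hasn't finished yet.
func (s *store) scheduleCompact(rev int64, keep map[revision]struct{}) {
	for r := range s.keep {
		keep[r] = struct{}{}
	}
	s.keep = keep
	s.compactRev = rev
}

// finishCompact clears keep only if no newer compaction was scheduled,
// mirroring the "current compactRev == compactRev(compact(A))" check.
func (s *store) finishCompact(rev int64) {
	if s.compactRev == rev {
		s.keep = nil
	}
}

func main() {
	s := &store{}
	s.scheduleCompact(5, map[revision]struct{}{{5, 0}: {}}) // compact(A)
	s.scheduleCompact(7, map[revision]struct{}{{7, 0}: {}}) // compact(B) merges A's keep
	s.finishCompact(5)                                      // A finishes: keep stays, B is newer
	fmt.Println(len(s.keep))                                // 2
	s.finishCompact(7)                                      // B finishes: keep cleared
	fmt.Println(s.keep == nil)                              // true
}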
*: fix compilation after API change
$ govulncheck ./...
govulncheck is an experimental tool. Share feedback at https://go.dev/s/govulncheck-feedback.
Scanning for dependencies with known vulnerabilities...
Found 1 known vulnerability.

Vulnerability #1: GO-2022-1144
An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests. HTTP/2 server connections contain a cache of HTTP header keys sent by the client. While the total number of entries in this cache is capped, an attacker sending very large keys can cause the server to allocate approximately 64 MiB per open connection.
Call stacks in your code:
tools/etcd-dump-metrics/main.go:159:31: go.etcd.io/etcd/v3/tools/etcd-dump-metrics.main$4 calls go.etcd.io/etcd/server/v3/embed.StartEtcd, which eventually calls golang.org/x/net/http2.ConfigureServer$1
Found in: golang.org/x/net/http2@v0.2.0
Fixed in: golang.org/x/net/http2@v0.4.0
More info: https://pkg.go.dev/vuln/GO-2022-1144

Vulnerability #2: GO-2022-1144
An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests. HTTP/2 server connections contain a cache of HTTP header keys sent by the client. While the total number of entries in this cache is capped, an attacker sending very large keys can cause the server to allocate approximately 64 MiB per open connection.
Call stacks in your code:
contrib/lock/storage/storage.go:106:28: go.etcd.io/etcd/v3/contrib/lock/storage.main calls net/http.ListenAndServe
contrib/raftexample/httpapi.go:113:31: go.etcd.io/etcd/v3/contrib/raftexample.serveHTTPKVAPI$1 calls net/http.Server.ListenAndServe
tools/etcd-dump-metrics/main.go:159:31: go.etcd.io/etcd/v3/tools/etcd-dump-metrics.main$4 calls go.etcd.io/etcd/server/v3/embed.StartEtcd, which eventually calls net/http.Serve
tools/etcd-dump-metrics/main.go:159:31: go.etcd.io/etcd/v3/tools/etcd-dump-metrics.main$4 calls go.etcd.io/etcd/server/v3/embed.StartEtcd, which eventually calls net/http.Server.Serve
Found in: net/http@go1.19.3
Fixed in: net/http@go1.19.4
More info: https://pkg.go.dev/vuln/GO-2022-1144

Signed-off-by: Benjamin Wang <wachao@vmware.com>
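The remediation, sketched below assuming a standard Go module layout, is to bump the dependency to the fixed release and re-run the scan; the net/http finding additionally requires building with a patched toolchain (go1.19.4 or later):

$ go get golang.org/x/net@v0.4.0
$ go mod tidy
$ govulncheck ./...   # should now report no known vulnerabilities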
tests/common: migrate auth tests #2
The command tests whether the given prevValue equals the current value of the key. If they are equal, the key's value is changed to the new value.
Since the server processes all incoming requests sequentially and atomically, this command can be used to implement locking.
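To illustrate why this enables locking, here is a hedged sketch of the compare-and-swap semantics over a toy in-memory store; in etcd the single command queue provides the same check-then-write atomicity that the mutex provides here, and all names are illustrative:

package main

import (
	"fmt"
	"sync"
)

// Store is a toy in-memory key-value map.
type Store struct {
	mu   sync.Mutex
	data map[string]string
}

// TestAndSet writes value only if the key currently holds prevValue,
// and reports whether the swap happened.
func (s *Store) TestAndSet(key, prevValue, value string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.data[key] != prevValue {
		return false
	}
	s.data[key] = value
	return true
}

func main() {
	s := &Store{data: map[string]string{"lock": "free"}}
	// Two contenders race for the lock; only the first swap succeeds.
	fmt.Println(s.TestAndSet("lock", "free", "owner-a")) // true: acquired
	fmt.Println(s.TestAndSet("lock", "free", "owner-b")) // false: already held
}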