snapshot issues #2370
I have experimented with large (~1 GB) incremental snapshots on AWS and GCE. etcd v3 can host ~1 GB of data while making progress stably. I also tested sending snapshots from the leader to a follower, and it works well in most cases. The only exception is using a standard persistent disk in GCE for both data-dir and wal-dir: the read requests that load the snapshot and send it out make WAL write requests wait too long, which disrupts the heartbeat mechanism between the leader and its followers, so the cluster may switch leaders frequently and become unstable. Either switching to a local SSD, or using one disk for wal-dir and a local SSD for data-dir, bypasses the problem. Our TODO is to find a way to ensure WAL write requests are treated as high priority.
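One way to approximate "WAL writes get priority" without kernel-level I/O scheduling is to throttle the snapshot reads themselves. The sketch below is not etcd's actual code; it is a minimal illustration of the idea using `golang.org/x/time/rate`, and every name in it (`snapshotutil`, `sendSnapshot`, `snapshotReadLimit`, the 16 MB/s limit) is hypothetical:

```go
package snapshotutil

import (
	"context"
	"io"
	"os"

	"golang.org/x/time/rate"
)

// snapshotReadLimit caps snapshot read bandwidth at an assumed
// 16 MB/s with a 1 MB burst; real values would need tuning against
// the disk's capacity and the WAL's latency budget.
var snapshotReadLimit = rate.NewLimiter(rate.Limit(16<<20), 1<<20)

// sendSnapshot streams the snapshot file to a follower connection,
// waiting on the limiter before each chunk so WAL fsyncs sharing the
// same disk see bounded queueing delay instead of starvation.
func sendSnapshot(ctx context.Context, path string, conn io.Writer) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	buf := make([]byte, 1<<20) // 1 MB chunks, matching the burst size
	for {
		n, err := f.Read(buf)
		if n > 0 {
			// Block until the limiter grants another n bytes of disk read.
			if werr := snapshotReadLimit.WaitN(ctx, n); werr != nil {
				return werr
			}
			if _, werr := conn.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```

The trade-off is a slower snapshot transfer in exchange for predictable WAL latency, which is usually the right direction since a slow transfer only delays one follower while a starved WAL destabilizes the whole cluster.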
Closing this because all issues are cleared.
- [ ] saving snapshot should not block the raftNode loop (#3714; see the sketch after this list)
- [ ] best effort snapshot pre-fetch when starting/restarting (#3715)
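For the first item, a minimal sketch of the non-blocking idea behind #3714, assuming a hand-off channel between the raft loop and a background saver goroutine. The types and function names (`snapshotJob`, `startSnapshotSaver`, `enqueueSnapshot`) are illustrative, not etcd's actual implementation:

```go
package snapshotutil

import (
	"log"
	"os"
)

// snapshotJob carries one snapshot to persist (illustrative type).
type snapshotJob struct {
	path string
	data []byte
}

// startSnapshotSaver drains jobs in the background so the raft loop
// never waits on a slow disk write.
func startSnapshotSaver(jobs <-chan snapshotJob) {
	go func() {
		for job := range jobs {
			// Write to a temp file, then rename, so a crash mid-write
			// never leaves a torn snapshot behind.
			tmp := job.path + ".tmp"
			if err := os.WriteFile(tmp, job.data, 0o600); err != nil {
				log.Printf("snapshot save failed: %v", err)
				continue
			}
			if err := os.Rename(tmp, job.path); err != nil {
				log.Printf("snapshot rename failed: %v", err)
			}
		}
	}()
}

// enqueueSnapshot is what the raft loop would call instead of writing
// synchronously: it never blocks, reporting false if the saver is busy
// so the caller can retry on the next snapshot trigger.
func enqueueSnapshot(jobs chan<- snapshotJob, job snapshotJob) bool {
	select {
	case jobs <- job:
		return true
	default:
		return false
	}
}
```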