
snapshot issues #2370

Closed
5 tasks done
xiang90 opened this issue Feb 24, 2015 · 2 comments
Comments

@xiang90
Contributor

xiang90 commented Feb 24, 2015

@yichengq
Contributor

I have tested a large (~1 GB) incremental snapshot on AWS and GCE. etcd v3 could host ~1 GB of data while making progress stably.

Moreover, I tested sending a snapshot from the leader to a follower, and it works well in most cases.

The only exception is using a standard persistent disk in GCE as both data-dir and wal-dir. Read requests to load the snapshot and send it out make WAL write requests wait too long, which affects the heartbeat mechanism between the leader and the other followers, so the cluster may switch leaders frequently and become unstable. Either switching to a local SSD, or using one disk for wal-dir and a local SSD for data-dir, bypasses the problem. Our TODO is to find a way to ensure WAL write requests are treated as high priority.
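The separate-disk workaround described above can be sketched as etcd startup flags; the mount points below are hypothetical examples, not paths from the report:

```shell
# Sketch of the workaround: keep the WAL on its own disk so snapshot reads
# against the data disk cannot stall WAL fsyncs (and, in turn, the leader's
# heartbeats). /mnt/... paths are placeholder mount points.
etcd --name infra0 \
     --data-dir /mnt/local-ssd/etcd-data \
     --wal-dir /mnt/dedicated-disk/etcd-wal
```

With `--wal-dir` unset, etcd keeps the WAL under the data directory, which is exactly the shared-disk configuration that triggered the leader elections above.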

@yichengq
Contributor

Closing this because all issues have been resolved.
