Incremental snapshot support #8888
Comments
@xiang90, Maintainers: Any opinions or updates on this issue?
etcd/raft snapshots internally to reclaim disk space from WAL files. We do not support incremental snapshots triggered externally, but that is doable and won't be super hard. Do you have any interest in working on it?
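For context, a minimal sketch (not from this thread; it assumes the etcd 3.5 Go client import path and a local endpoint at localhost:2379) of what external snapshotting looks like today: the maintenance API streams the entire backend database, so every backup carries all data and there is no incremental variant to call.

```go
// Sketch: take a full snapshot via the clientv3 maintenance API.
// Assumes an etcd endpoint at localhost:2379 and etcd 3.5 import paths.
package main

import (
	"context"
	"io"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Snapshot streams the whole backend (bbolt) database, not a delta.
	rc, err := cli.Snapshot(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	f, err := os.Create("backup.db")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := io.Copy(f, rc); err != nil {
		log.Fatal(err)
	}
}
```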
Yeah, sure. I'll take it forward.
I think this is important for a sane backup/restore strategy. +1
@xiang90, could you please point me to documentation on snapshots, or explain in detail how the snapshot functionality is used in etcd? Some questions: 1) Is there one snapshot at the raft log (WAL) level and another at the bbolt db level? 2) After adding 900k keys to a 3-node cluster, I restarted all three nodes. Two of the nodes were busy exchanging a snapshot after the restart, as shown in the log below. Is one node trying to send the entire db to the other, or is it trying to send a snapshot of the log from the last committed index 900168? Thank you.
Part of the non-voting member design describes non-disruptive snapshot transmission (#9161). Once that's in place, we could just enable it for etcdctl snapshot save?
etcdctl snapshot save --forever could potentially start a non-voting member and keep receiving data and making snapshots.
find no
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I was wondering, is there any incremental snapshot support for etcd? I don't see any command in etcdctl for it as of now. For use cases that require regular snapshots of etcd for backup purposes, carrying the same data in multiple full snapshots is really costly from a storage point of view. It would be nice to have this feature. Do you already have a plan for it?

As far as my investigation leads, etcdctl uses the snapshot/restore and watch-based mechanisms to sync clusters. Why can't we use a similar approach for incremental snapshots? Here I propose: instead of applying the PUT and DELETE operations from watch events on a destination client, we can dump them in JSON or some defined format into a file, say inc_snapshot_fromRev_toRev. At restore time, the user can then supply a backup directory containing the full snapshot and these incremental snapshots. Our restore command, with an additional flag indicating that it has to restore from an incremental snapshot list, will restore the etcd cluster: it first restores the cluster from the base full snapshot, and then applies the operations from the incremental snapshots sequentially, probably using embedded etcd.