High-watermark feature #696
Comments
@jframe let's look to prioritize this in the next few sprints.
Hi. Just following up on the status of this feature.
Prioritizing in our next release. @jframe FYSA
@beetrootkid We are approaching a solution and would like some feedback please.

For safety reasons - specifically to protect against two web3signer instances that share the same database being misconfigured - we prefer to use the database for storing the high-watermark. To protect against clock discrepancies in CL clients that are still running in the old cluster, we must leave the high-watermark in place until all instances using the high-watermark are decommissioned. Therefore the current preference is to use an 'offline' command similar to the current watermark-repair (see the sketch below, under "Example command suggestions"). This would only need to be executed once on the old cluster's database.

This is captured in the following ACs.

Acceptance Criteria
Notes
Example command suggestions
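A minimal sketch of what such an offline invocation could look like, assuming a hypothetical flag along the lines of --set-high-watermark is added to the existing watermark-repair subcommand (the flag name, epoch/slot values, and database settings are illustrative only, not a final CLI):

```bash
# Illustrative only: run once against the old cluster's slashing-protection database.
# --set-high-watermark is a hypothetical name for the proposed option; the existing
# watermark-repair subcommand already accepts --epoch and --slot for the low watermark.
web3signer eth2 \
  --slashing-protection-db-url="jdbc:postgresql://localhost/web3signer" \
  --slashing-protection-db-username=postgres \
  --slashing-protection-db-password=postgres \
  watermark-repair \
  --epoch=160000 \
  --slot=5120000 \
  --set-high-watermark=true
```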
Your proposed approach makes sense for our use case. Worth considering adding a way to check what the current high watermark is, for confirmation purposes (could be a new GET endpoint).
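As a rough sketch of that suggestion, the check could be exposed as a simple GET endpoint returning the configured high watermark (the path and response shape below are hypothetical, not an existing web3signer API):

```bash
# Hypothetical confirmation endpoint; path and JSON fields are illustrative only.
curl -s http://localhost:9000/api/v1/eth2/highWatermark
# Example response: {"epoch":"160000","slot":"5120000"}
```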
Tasks
You guys have recently implemented the watermark-repair subcommand, which specifies that below a certain epoch web3signer will refuse to sign. Wanted to ask: would it be simple to implement a similar feature, but where web3signer would not sign beyond a certain epoch?
Context: we are running web3signer + vault + slashing db in an Azure cluster (let's call it the old cluster), and now we'd like to move it to a new cluster in AWS. In theory, we could achieve this safely and with near-zero downtime if the old cluster refused to sign beyond a certain epoch while the new cluster refused to sign below it; post epoch 160000, we would naturally tear down the old cluster and keep only the new cluster.

The end goal for us is to move keys from one vault to another. That means we'd have to load the same set of validator public keys in 2 different vaults and 1) not be slashed, 2) have minimal downtime, 3) keep the process simple, without data migrations. We would expect to perform this scenario only when strictly necessary. If there is a better way to achieve the same result, let us know. Thanks!
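To make the scenario concrete, and assuming a high-watermark option like the one sketched earlier in this thread existed, the two clusters could be configured roughly like this (all flag names and values are illustrative):

```bash
# Old cluster's database: stop signing beyond epoch 160000
# (--set-high-watermark is the hypothetical flag from the sketch above).
web3signer eth2 <slashing-protection-db-options> watermark-repair \
  --epoch=160000 --slot=5120000 --set-high-watermark=true

# New cluster's database: refuse to sign below epoch 160000
# (the existing watermark-repair low-watermark behaviour).
web3signer eth2 <slashing-protection-db-options> watermark-repair \
  --epoch=160000 --slot=5120000

# Once epoch 160000 has passed, tear down the old cluster and keep only the new one.
```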