Deploy a config change to the `sequencer-1` conductor to remove the `OP_CONDUCTOR_PAUSED: true` flag and the `OP_CONDUCTOR_RAFT_BOOTSTRAP` flag.
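For example, if the conductor's environment is managed in a compose-style file (the layout and the remaining variable shown are illustrative), the bootstrap-only settings would simply be dropped:

```yaml
# Hypothetical environment block for sequencer-1's conductor.
# Remove the bootstrap-only settings once the cluster is initialized:
#   OP_CONDUCTOR_PAUSED: "true"
#   OP_CONDUCTOR_RAFT_BOOTSTRAP: "true"
OP_CONDUCTOR_RAFT_SERVER_ID: "sequencer-1"   # illustrative remaining config
```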
</Steps>
#### Blue/Green Deployment

To ensure there is no downtime when setting up conductor, you need a deployment script that can update sequencers without interrupting the network.

An example of this workflow might look like:

1. Query the current state of the network and determine which sequencer is currently active (referred to as the "original" sequencer below). From the other available sequencers, choose a candidate sequencer.
2. Deploy the change to the candidate sequencer, then wait for it to sync up to the original sequencer's unsafe head. You may also want to check peer counts and other important health metrics.
3. Stop the original sequencer using `admin_stopSequencer`, which returns the last inserted unsafe block hash. Wait for the candidate sequencer to sync to this returned hash in case there is a delta.
4. Start the candidate sequencer at the original's last inserted unsafe block hash.
   1. Here you can also run additional checks for unsafe head progression and decide to roll back the change (stop the candidate sequencer, start the original, roll back the candidate's deployment, etc.).
5. Deploy the change to the original sequencer and wait for it to sync to the chain head. Execute health checks.
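The swap in steps 3–4 can be sketched with `curl` and `jq` against the op-node admin API. The endpoints below are placeholders for your own deployment; `admin_stopSequencer`, `admin_startSequencer`, and `optimism_syncStatus` are standard op-node RPC methods:

```sh
#!/bin/sh
# Illustrative endpoints; adjust to your deployment.
ORIGINAL=http://sequencer-1:8545   # currently active sequencer
CANDIDATE=http://sequencer-2:8545  # already-upgraded candidate

# rpc <endpoint> <method> [params-json]: minimal JSON-RPC helper.
rpc() {
  curl -s -X POST -H "Content-Type: application/json" \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"$2\",\"params\":${3:-[]},\"id\":1}" "$1"
}

# Stop the original sequencer; the result is its last inserted unsafe block hash.
HASH=$(rpc "$ORIGINAL" admin_stopSequencer | jq -r .result)

# Wait until the candidate's unsafe head has reached that hash (the "delta" case).
until rpc "$CANDIDATE" optimism_syncStatus | jq -e --arg h "$HASH" \
    '.result.unsafe_l2.hash == $h' >/dev/null; do
  sleep 1
done

# Start sequencing on the candidate from the returned hash.
rpc "$CANDIDATE" admin_startSequencer "[\"$HASH\"]"
```

Rollback is the same sequence in reverse: stop the candidate, then start the original at the candidate's last inserted unsafe block hash.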
+
241
+
#### Post-Conductor Launch Deployments
After conductor is live, a similar canary-style workflow is used to ensure minimal downtime in case there is an issue with a deployment:
1. Choose a candidate sequencer from the raft-cluster followers.
2. Deploy to the candidate sequencer. Run health checks on the candidate.
3. Transfer leadership to the candidate sequencer using `conductor_transferLeaderToServer`. Run health checks on the candidate.
4. Test whether the candidate is still the leader using `conductor_leader` after some grace period (for example, 30 seconds).
   1. If not, there is likely an issue with the deployment. Roll back.
5. Upgrade the remaining sequencers and run health checks.
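Steps 3–4 might look like the following against the candidate's conductor RPC. The endpoint, raft server ID, and consensus address are placeholders; `conductor_transferLeaderToServer` and `conductor_leader` are the conductor RPCs named above (the parameter shape for the transfer call is an assumption to verify against your op-conductor version):

```sh
#!/bin/sh
CONDUCTOR=http://sequencer-2-conductor:8547   # candidate's conductor RPC (placeholder)

# rpc <method> [params-json]: minimal JSON-RPC helper.
rpc() {
  curl -s -X POST -H "Content-Type: application/json" \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"$1\",\"params\":${2:-[]},\"id\":1}" \
    "$CONDUCTOR"
}

# Transfer leadership to the candidate (raft server ID and address are placeholders).
rpc conductor_transferLeaderToServer '["sequencer-2", "sequencer-2-conductor:50050"]'

# After a grace period, verify the candidate still holds leadership.
sleep 30
if [ "$(rpc conductor_leader | jq -r .result)" != "true" ]; then
  echo "candidate lost leadership; likely a bad deployment, roll back" >&2
fi
```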
### Configuration Options
Conductor is configured via its [flags / environment variables](https://github.com/ethereum-optimism/optimism/blob/develop/op-conductor/flags/flags.go)
AddServerAsVoter adds a server as a voter to the cluster.

<Tabs.Tab>
```sh
curl -X POST -H "Content-Type: application/json" --data \