Multi node support #9
Open
flaeppe wants to merge 17 commits into 5monkeys:master from flaeppe:multi-node
Conversation
- Declare one manager node
- Declare one worker (only) node
- Create a new second manager node
- Handle an optional host variable 'docker_swarm_labels' (dict)
- Require a node to be explicitly declared in both the manager and worker groups to act as both worker and manager
  - Allows us to create manager-only nodes
  - By default, a host that appears only in the manager group is not a worker node
- Guard against declaring the 'docker_swarm_managers' group with an even count of hosts
kjagiello reviewed Nov 26, 2019
- 'docker_swarm_workers' and 'docker_swarm_managers' can now appear in any order in a hosts file
I have now tested the playbook with a multi node setup (1 worker node, 1 manager-only node) successfully.
- Idempotence step has no debug output anyway
- The default 'travis_wait' increase to 20 min seems to be just on the edge
Nitpick: Would be nice to make tags either use dash
Backwards compatibility
The changes here are not backwards compatible in the (odd?) case of previously having had this playbook run on multiple hosts.
We'd now expect a hosts file to refer to the nodes involved in a single swarm.
Backwards compatibility can be achieved, when the above does not hold, by declaring any hosts deployed with the previous version under both docker_swarm_managers and docker_swarm_workers, e.g.:
Previous hosts file:
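As a rough sketch only; the group name and host names below are illustrative, and the previous playbook's actual inventory layout may have looked different:

```ini
# hypothetical pre-multi-node inventory; group and host names are examples only
[docker_swarm]
node-1
node-2
node-3
```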
New hosts file:
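And a corresponding new hosts file, with the same (illustrative) hosts declared under both groups so they keep acting as both managers and workers; note that the manager group keeps an odd host count to satisfy the assertion mentioned below:

```ini
# hypothetical new inventory; the same hosts appear in both groups
[docker_swarm_managers]
node-1
node-2
node-3

[docker_swarm_workers]
node-1
node-2
node-3
```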
As far as I see, it should only be necessary to change the hosts file.
Additional info
- The swarm.yml task set asserts that there's an odd number of managers declared in the docker_swarm_managers group (see the sketch after this list).
- Node labels can be declared via docker_swarm_labels (per host/group/etc.) if wanted.
- Three new (default) variables have been added: docker_swarm_interface, docker_swarm_addr and docker_swarm_port.
- Would it be overkill to add a guard that validates that the worker group is non-empty if all managers are declared as "manager only" nodes?
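A rough sketch of what the odd-count assertion could look like; this is not the exact task from swarm.yml, just an illustration of the idea:

```yaml
# hypothetical guard; the real task in swarm.yml may be written differently
- name: Assert an odd number of swarm managers
  run_once: true
  assert:
    that:
      - groups['docker_swarm_managers'] | length is odd
    fail_msg: "Declare an odd number of hosts in the docker_swarm_managers group"
```

And a sketch of how the new variables and the optional labels might be set; the file names and all values are examples only, and the playbook ships its own defaults for the three variables:

```yaml
# group_vars/all.yml (example values only)
docker_swarm_interface: eth0
docker_swarm_port: 2377

# host_vars/node-1.yml (example values only)
docker_swarm_addr: 10.0.0.11
docker_swarm_labels:
  env: production
  storage: ssd
```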
Some drawbacks
Demoting a manager node
As far as I could dig, I found no reasonable task/steps to let the playbook demote a manager (i.e. removing a host from the manager group while keeping it in the worker group). Handling that here could be quite an edge case though, or perhaps at most a nice-to-have.
As of now, one could rearrange the hosts file so that the host only appears in the worker group and then manually run docker node demote NODE on the manager node to sync the swarm with the hosts file layout.

Hosts groups declaration order
As some steps are run with the run_once and delegate_to flags, we expect the docker_swarm_managers group to be declared before the docker_swarm_workers group. Otherwise I suspect that a register variable ends up undefined and things crash. At least that's what I could find online; I haven't actually tested with a real hosts file just yet. We should really test whether this assumption is correct; it would be nice if it isn't.

EDIT: This should no longer be an issue. See the changes in a9164fd for how it was resolved. Worth noting is also that the playbook with those fixes has been tested with a multi node setup (1 worker node, 1 manager-only node) where the worker group appeared before the manager group in the hosts file.
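For reference, a minimal sketch of the run_once/delegate_to pattern the concern was about; the task names, commands and the choice of delegating to the first manager are illustrative assumptions, not taken verbatim from swarm.yml or from a9164fd:

```yaml
# hypothetical tasks showing why group ordering can matter:
# the registered result is produced on a delegated manager host
- name: Read the worker join token from a manager
  command: docker swarm join-token -q worker
  run_once: true
  delegate_to: "{{ groups['docker_swarm_managers'] | first }}"
  register: swarm_worker_token
  changed_when: false

- name: Join worker nodes to the swarm
  command: >
    docker swarm join
    --token {{ swarm_worker_token.stdout }}
    {{ hostvars[groups['docker_swarm_managers'] | first].docker_swarm_addr }}:{{ docker_swarm_port }}
  when: inventory_hostname in groups['docker_swarm_workers']
```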