
Multi node support #9

Open: wants to merge 17 commits into master
Conversation

@flaeppe commented Nov 26, 2019

Backwards compatibility

The changes here are not backwards compatible in the (probably unusual) case where this playbook has previously been run against multiple hosts.
A hosts file is now expected to refer to all nodes involved in a single swarm.

When that is not the case (i.e. the playbook previously targeted a single host), backwards compatibility can be achieved by declaring any hosts deployed with the previous version under both docker_swarm_managers and docker_swarm_workers, e.g.:

Previous hosts file:

[managers]
host1

New hosts file:

[managers]
host1

[docker_swarm_managers]
host1

[docker_swarm_workers]
host1

As far as I can see, it should only be necessary to change the hosts file.

Additional info

  • The swarm.yml task set asserts that an odd number of managers is declared in the docker_swarm_managers group (see the sketch after this list).

  • Node labels can be declared via the docker_swarm_labels variable (per host/group/etc.) if wanted (also sketched below).

  • Three new (default) variables have been added: docker_swarm_interface, docker_swarm_addr and docker_swarm_port.

  • Would it be overkill to add a guard that validates that the worker group is non-empty when all managers are declared as "manager only" nodes?
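
For illustration, here is a rough sketch of what the manager-count guard and the label handling could look like. The default values shown are assumptions, and the tasks are not the literal contents of tasks/swarm.yml:

# Assumed defaults (defaults/main.yml); the actual default values in the PR may differ.
docker_swarm_interface: "eth0"
docker_swarm_addr: "{{ hostvars[inventory_hostname]['ansible_' + docker_swarm_interface]['ipv4']['address'] }}"
docker_swarm_port: 2377

# Illustrative tasks only, not the literal contents of tasks/swarm.yml.
- name: Assert an odd number of swarm managers
  assert:
    that:
      - (groups['docker_swarm_managers'] | length) is odd
    fail_msg: "Declare an odd number of hosts in 'docker_swarm_managers'"
  run_once: true

- name: Apply optional node labels from 'docker_swarm_labels'
  docker_node:
    hostname: "{{ inventory_hostname }}"
    labels: "{{ docker_swarm_labels }}"
    labels_state: replace
  when: docker_swarm_labels is defined
  delegate_to: "{{ groups['docker_swarm_managers'] | first }}"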

Some drawbacks

Demoting a manager node

As far as I could dig, I found no reasonable task/steps that would let the playbook demote a manager (i.e. remove a host from the manager group while keeping it in the worker group). Handling that here could be quite an edge case, or perhaps at most a nice-to-have.

As of now, one could rearrange the hosts file so that the host only appears in the worker group and then manually run docker node demote NODE on a manager node to sync the swarm with the hosts file layout.
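
If the playbook were ever to handle demotion itself, a purely hypothetical task (not part of this PR) might look roughly like the sketch below, demoting every node that is declared only in the worker group:

# Hypothetical sketch; demotion is NOT handled by the playbook as of this PR.
# Assumes inventory hostnames match the swarm node hostnames.
- name: Demote nodes that are only declared as workers
  docker_node:
    hostname: "{{ item }}"
    role: worker
  loop: "{{ groups['docker_swarm_workers'] | difference(groups['docker_swarm_managers']) }}"
  run_once: true
  delegate_to: "{{ groups['docker_swarm_managers'] | first }}"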

hosts groups declaration order

As some steps are run with the run_once and delegate_to flags, we expect the docker_swarm_managers group to be declared before the docker_swarm_workers group; otherwise I suspect a registered variable ends up undefined and things crash.

At least that's what I could find online; I haven't actually tested this with a real hosts file just yet.

We should really test whether this assumption is correct; it would be nice if it isn't.

EDIT: This should no longer be an issue; see the changes in a9164fd for how it was resolved. It is also worth noting that, with those fixes, the playbook has been tested in a multi node setup (1 worker node, 1 manager-only node) where the worker group appeared before the manager group in the hosts file.
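
For reference, this is the kind of run_once/delegate_to pattern involved. The sketch below only illustrates the general idea (reusing the variable names mentioned above); it is neither the actual tasks nor the exact fix from a9164fd:

# Rough illustration of the run_once/delegate_to pattern; not the actual tasks.
- name: Fetch the worker join token from the first manager
  command: docker swarm join-token -q worker
  register: docker_swarm_worker_token
  changed_when: false
  run_once: true
  delegate_to: "{{ groups['docker_swarm_managers'] | first }}"

# With run_once the registered variable is propagated to every host in the play,
# so the result does not depend on which group is declared first in the inventory.
# Idempotency checks (skipping nodes that already joined) are omitted for brevity.
- name: Join worker-only nodes to the swarm
  command: >
    docker swarm join
    --token {{ docker_swarm_worker_token.stdout }}
    {{ hostvars[groups['docker_swarm_managers'] | first]['docker_swarm_addr'] }}:{{ docker_swarm_port }}
  when:
    - inventory_hostname in groups['docker_swarm_workers']
    - inventory_hostname not in groups['docker_swarm_managers']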

- Declare one manager node
- Declare one worker (only) node
- Create a new second manager node
- Handle an optional host variable 'docker_swarm_labels' (dict)
- Requirement to explicitly declare a node in both manager and worker
  groups to set a node as both worker and manager
- Allows us to create manager only nodes
- Per default a host only in the manager group is not a worker node
- Guards against declaring 'docker_swarm_managers' group as any even count of hosts
- 'docker_swarm_workers' and 'docker_swarm_managers' can now appear in
  any order in a hosts file
@flaeppe (Author) commented Nov 28, 2019

I have now successfully tested the playbook with a multi node setup (1 worker node, 1 manager-only node).

(The hosts group declaration order is no longer an issue either; this was also tested.)

@beshrkayali (Contributor) commented
Nitpick: it would be nice to have tags consistently use either a dash - or an underscore _ as the word separator. I think all existing ones use -, so maybe we should make the swarm task tags use that as well? :)
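
For example (purely illustrative), a dash-separated tag on a swarm task would look like:

# Illustrative only; shows the dash-separated tag style.
- name: Some swarm-related task
  command: docker info
  changed_when: false
  tags:
    - docker-swarm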
