Is anyone thinking about machine scaling on Swarm? #676
Swarm looks like an amazing tool for deploying apps into the cloud, but I have a question about how it would integrate with cloud providers' autoscaling.
Is there any integration between Swarm and Machine for Docker-based scaling? Or something along those lines for each kind of driver?

Comments
I'm looking at this myself. Luckily, Docker already exposes CPU usage, so once that is integrated into the Swarm API, it should be possible to see how loaded each type of container is and spawn more containers (and more machines).
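A minimal sketch of what reading that CPU data could look like, assuming the `/containers/{id}/stats` endpoint of the Docker Remote API (which a Swarm manager also serves); the manager address and container ID are placeholders, and the percentage formula mirrors the one `docker stats` uses:

```go
// autoscale-probe.go: sketch only, not part of Swarm itself. Reads one
// container's CPU usage through the Docker Remote API stats stream.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// statsFrame keeps only the fields needed for the CPU calculation.
type statsFrame struct {
	CPUStats struct {
		CPUUsage struct {
			TotalUsage  uint64   `json:"total_usage"`
			PercpuUsage []uint64 `json:"percpu_usage"`
		} `json:"cpu_usage"`
		SystemCPUUsage uint64 `json:"system_cpu_usage"`
	} `json:"cpu_stats"`
	PreCPUStats struct {
		CPUUsage struct {
			TotalUsage uint64 `json:"total_usage"`
		} `json:"cpu_usage"`
		SystemCPUUsage uint64 `json:"system_cpu_usage"`
	} `json:"precpu_stats"`
}

func main() {
	// Placeholder Swarm manager address; container ID comes from the command line.
	endpoint := "http://swarm-manager:2375"
	container := os.Args[1]

	resp, err := http.Get(endpoint + "/containers/" + container + "/stats")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	dec := json.NewDecoder(resp.Body)
	var frame statsFrame
	// The first frame has no "previous" sample, so read two frames and use the second.
	for i := 0; i < 2; i++ {
		if err := dec.Decode(&frame); err != nil {
			panic(err)
		}
	}

	cpuDelta := float64(frame.CPUStats.CPUUsage.TotalUsage - frame.PreCPUStats.CPUUsage.TotalUsage)
	sysDelta := float64(frame.CPUStats.SystemCPUUsage - frame.PreCPUStats.SystemCPUUsage)
	percent := 0.0
	if sysDelta > 0 {
		percent = cpuDelta / sysDelta * float64(len(frame.CPUStats.CPUUsage.PercpuUsage)) * 100.0
	}
	fmt.Printf("container %s: %.2f%% CPU\n", container, percent)
}
```

Pointed at the Swarm manager, this reports a container's share of its node's CPU; aggregating those figures per node or per container type gives the load signal described above.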
As @cultureulterior stated above, there is a way to scale beyond the initial Swarm agents by monitoring CPU and memory usage on each node. When you reach a given threshold (let's say 80% of resources in use throughout the cluster), you create another Swarm agent to accommodate the demand and future incoming containers. In the same way, you can tell: "Most of the time my cluster is empty and I'm wasting resources, so kill a few nodes because I don't need them."
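A rough control-loop sketch of that threshold idea, not an existing Swarm or Machine feature: `clusterUsagePercent` is a hypothetical hook for whatever metric source you choose (for example, aggregating the stats from the previous sketch), and the driver, discovery token, and node names passed to `docker-machine` are placeholders:

```go
// autoscaler-loop.go: sketch of a scale-up/scale-down loop around Docker Machine.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

const (
	highWater = 80.0 // scale up above 80% cluster usage
	lowWater  = 20.0 // scale down below 20% cluster usage
	interval  = 30 * time.Second
)

// clusterUsagePercent is a hypothetical hook returning overall cluster CPU usage.
func clusterUsagePercent() float64 {
	// ... aggregate per-node or per-container stats here ...
	return 0.0
}

// scaleUp adds a new Swarm agent with Docker Machine.
func scaleUp(name string) error {
	cmd := exec.Command("docker-machine", "create",
		"--driver", "virtualbox", // placeholder driver
		"--swarm", "--swarm-discovery", "token://<cluster-token>", // placeholder discovery
		name)
	return cmd.Run()
}

// scaleDown removes an idle Swarm agent.
func scaleDown(name string) error {
	return exec.Command("docker-machine", "rm", "-f", name).Run()
}

func main() {
	nodes := 0
	for range time.Tick(interval) {
		usage := clusterUsagePercent()
		switch {
		case usage > highWater:
			nodes++
			name := fmt.Sprintf("swarm-agent-%d", nodes)
			fmt.Println("scaling up:", name)
			if err := scaleUp(name); err != nil {
				fmt.Println("create failed:", err)
			}
		case usage < lowWater && nodes > 0:
			name := fmt.Sprintf("swarm-agent-%d", nodes)
			fmt.Println("scaling down:", name)
			if err := scaleDown(name); err != nil {
				fmt.Println("rm failed:", err)
			}
			nodes--
		}
	}
}
```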
@abronan @cultureulterior thanks for your fast answers. So that means it is possible to implement an auto-scaling feature running on top of Swarm and Machine. Should this be changed to a FR? Or should it be requested on Compose?
I've just done 100-node scaling using … Anyway, with Global scheduling #601, …
This is probably outside of the scope of Swarm (at least for now). Swarm and Machine provide the APIs so that anyone can build a feature such as auto-scaling on top of them.