
Virtual Container IDs #600

Closed

aluzzardi opened this issue Apr 8, 2015 · 7 comments

@aluzzardi
Contributor

Re-scheduling containers (see #599) means their ID will change. This is unacceptable since the ID is the only link between the user and a container.

Swarm should create a Virtual Swarm ID for every container that maps to an actual Physical ID from the engine.

Only the Virtual ID is exposed to the user, and Swarm maintains a mapping between the two. A rescheduled container will have a different Physical ID but the same Virtual ID.

Virtual IDs could be stored as labels, see #288.

@smothiki
Contributor

smothiki commented Apr 9, 2015

@aluzzardi what is the use case for containers getting re-scheduled, other than node fail-over? In any case, Swarm is already trying to enforce a unique naming scheme for containers across the cluster. Could you say more about how a Virtual ID would be useful?

@aluzzardi
Contributor Author

This is work in progress, so feedback is very much welcome.

Yes, the only re-scheduling reason (for the foreseeable future) is node fail-over.

Some reasons why we would need this:

  • Centralized Container IDs: Right now, each engine creates its own IDs, which could conflict with IDs on other engines. Creating them in a central place guarantees uniqueness.
  • Consistent IDs: When moving a container (for instance, in case of node fail-over), its ID will change. Presenting a Virtual ID to the user would prevent that.
  • ID instantly available: Right now, the scheduler (through docker create) must be synchronous, since we need to return an ID through the API as soon as the call completes. With Virtual IDs, Swarm could generate one immediately, hand the container to the scheduler, and return to the user right away (sketched below). Whenever the container actually gets scheduled, the Virtual ID would be mapped to an Engine ID.

In order to work easily with Virtual IDs, we would use labels to tag containers with their Virtual ID.
Running docker ps directly on an engine could display the Virtual ID as a label.
Running docker ps through Swarm could display the Engine ID as a label.

@aluzzardi aluzzardi added this to the 0.3.0 milestone Apr 10, 2015
@tnachen
Contributor

tnachen commented Apr 13, 2015

So will all the docker commands try to find the ID among both the Docker names/IDs and the virtual IDs stored as labels? e.g. docker stop virtualId1

@vieux
Contributor

vieux commented Apr 13, 2015

@tnachen yes, that's the plan. I also think that docker run should return the virtual ID, so you would use the virtual ID all the time and almost never the actual container ID.

@tnachen
Contributor

tnachen commented Apr 13, 2015

Cool, the user experience definitely becomes a bit interesting. I think we have to tell the user this is a virtual ID, since when docker ps is called users will get confused about which one to look at.
And I'm curious what we will return from docker inspect (virtualID); it seems we can return all the container info, since the virtual ID inherently maps to a real container. There are probably some things we don't want the user to do, like linking volumes from a virtual ID and so forth, but it's fine for users to replace the Docker ID with the virtual ID (docker run, etc.). We will probably need some documentation to explain all of this.

@thefallentree
Contributor

Docker uses a UUID for a reason, but what's the point of using a generated ID in a Swarm instance? The user should just be able to use a good name to define a "job", which has a number of "tasks" (which are actually containers) running in the Swarm cluster.

@vieux
Contributor

vieux commented May 27, 2015

Merged in #745

@vieux vieux closed this as completed May 27, 2015