Container tagging / naming #1
Comments
Docker is a manager, so it should play the index. A pid-file approach would suck: it means containers have to know their own id and be able to write it somewhere, and any third-party tool would have to work with docker to figure out the id of every container somehow, which is already a problem. That problem would go away if you knew how to reference a container before you made it.
I was skeptical of tags, but I now think containers should work similarly to images. I don't know about the name "repository", but having a base name and an optional tag would be nice to help look up ids. In theory, you should never have to work with ids directly. Imagine a PaaS where you named containers (using dotCloud terms) <user>/<app>/<server>.<instance>, and then tags would be used for deployments: v1, v2, v3... they'd all have their own ids and all be unique containers.
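To make that proposed scheme concrete, a hypothetical sketch (the container names Docker eventually shipped only allow `[a-zA-Z0-9_.-]`, so dots would have to stand in for the slashes; `myapp-image` is an invented image name):

```bash
# Hypothetical <user>/<app>/<server>.<instance> convention, flattened with dots:
docker run -d --name jdoe.myapp.www.0 myapp-image       # instance 0 of the www service
docker run -d --name jdoe.myapp.www.1 myapp-image       # instance 1, its own unique container
docker run -d --name jdoe.myapp.www.0.v2 myapp-image:v2 # a "v2" deployment: new name, new id
```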
+1. Names would let me autoconfigure based on convention: for example, memcacheXX containers would get written to a memcached.yml, or mysql-master and mysql-replica would get paired together without each knowing about the other. The devs using my containers could focus on the logical names instead of implementation details like IDs, as in the sketch below.
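A minimal sketch of that convention-based autoconfiguration, using today's CLI (`docker ps --format` postdates this discussion, and the `memcache` prefix and `memcached.yml` layout are just this comment's hypothetical convention):

```bash
#!/bin/sh
# Find every running container whose name starts with "memcache" and
# write their addresses into a memcached.yml for other services to read.
{
  echo "servers:"
  for name in $(docker ps --format '{{.Names}}' | grep '^memcache'); do
    ip=$(docker inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
    echo "  - ${ip}:11211"
  done
} > memcached.yml
```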
Some thoughts about naming, based on our experience at dotCloud... EC2 uses tags, and doesn't have a way to name instances (an instance is only ever identified by its ID; the "name" you see in the console is just a tag, with no uniqueness guarantee).
For those reasons, we decided that the naming scheme on the dotCloud platform would be different; i.e. for a given scaled service, each service instance would get an assigned number (0,1,2...) and uniqueness would be enforced. This, however, has other shortcomings.
I assume that we can't / don't want to ensure global uniqueness of container names (that would require some distributed naming system, and suddenly a herd of zookeepers, doozers, and other weird beasts are hammering at the gates to get in!); however, some way to easily find a container "when the sh*t hits the fan" would be really great. Picture the following scenario: you need to stop (or enter) a specific instance of a given service, but your global container db (maintained by you, outside of docker) is down. Let's hope that you have some way to locate the docker host running the instance you're looking for. Now, how do you locate the specific container easily (= not with an obscure 4-line shell pipeline that takes you 5 minutes to grok correctly through SSH), quickly (= not by running a command on each of the hundreds or thousands of containers running on the machine), and reliably (= not yielding 4 false positives of down or unrelated containers before pointing to the right one)? *Even if docker's structures have been corrupted or messed with?* Any solution to that last problem gets my immediate buy-in :-)
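For what it's worth, the name filters that eventually shipped make this lookup a one-liner against the local daemon's own state, with no external database (modern syntax, shown here only as a sketch of what "easily, quickly, reliably" could look like; `myapp-www-0` is an invented name):

```bash
# List any container (running or not) whose name matches the instance you want:
docker ps --all --filter "name=myapp-www-0"
# Or grab just the id for scripting:
docker ps -aq --filter "name=myapp-www-0"
```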
Hi Jérôme, I'm not sure what you're asking or suggesting. Names will be a convenience for single-machine use. They should not be expected to provide global uniqueness.
Sometimes global uniqueness is just scoping: put the host machine's name in front and you're unique again. IMHO, we shouldn't try to address global uniqueness in docker itself; it should be something layered on top. In your scenario, I would solve it with a better service discovery system.
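A sketch of that scoping idea, assuming a simple hostname-prefix convention (the convention itself is hypothetical; docker only ever sees the short local name):

```bash
# The local name stays short and only has to be unique on this host...
docker run -d --name web-0 nginx
# ...while the "global" name is just host + local name, resolved by an
# external service-discovery layer rather than by docker itself.
global_name="$(hostname)/web-0"   # e.g. "node-17/web-0"
echo "$global_name"
```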
Well, I was merely explaining my use case (which is the use case we have at dotCloud). Then there is a very wide spectrum of possible implementations.
Container tagging feels like a core concept to me, and seems like a building block for the "container groups" feature that @shykes mentioned.
Naming containers worries me. Even if the feature were implemented, I think that if I found myself tempted to use it, I'd suspect a bad smell and try to redesign to avoid it. Docker is powerful because it removes magic numbers from my life. It makes it possible for me to run, for example, two instances of nginx on one host machine without making a mess. Now what if I start working at a company that has a script that relies on one of their containers being named "nginx"? Or worse, "main"? Now try to set these up in Jenkins or something else that runs self-contained, concurrent tests. Whoops. Maybe it's possible to build scripts around naming that generate names with unique suffixes to avoid that kind of problem (see the sketch below), but I'm not sure it's reasonable to expect people to do so consistently, and even if they do, is that any better than just dealing with random IDs as they stand? If there were support for commands that run on sets of containers based on globbing of names, I could see a potential gain; otherwise, I see nothing.
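The unique-suffix workaround mentioned above is easy enough to script (a sketch only; whether it actually beats raw IDs is exactly the question being raised):

```bash
# Generate a collision-resistant name per run, so concurrent CI jobs
# don't fight over a fixed name like "nginx" or "main".
suffix=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
docker run -d --name "nginx-${suffix}" nginx
```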
@heavenlyhash so it sounds like you'd prefer tags.
Tentatively scheduling for 0.8 |
Just a heads up, this is confirmed for release in 0.6.5 tomorrow :) |
This has been implemented in Docker 0.6.5. |
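For reference, the shipped feature works roughly like this (modern double-dash syntax; 0.6.5 itself used the single-dash `-name` form):

```bash
docker run -d --name web nginx         # choose the name at creation time
docker stop web                        # every command now accepts the name...
docker logs web
docker inspect --format '{{.Id}}' web  # ...and names map back to the underlying id
```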
👍 |
In previous conversations I think we agreed that the ability to tag containers would be nice, but there was no compelling reason to add it to the core. Especially with globally unique IDs, it's super easy for users to store all the metadata they need themselves: users, applications, services, versions, source repository, source tag, whatever.
However there is one thing that is only possible in the core: atomic operations on a set of containers matching certain tags. In the future that might be a necessary feature, for a number of reasons:
a) Performance (to avoid running 200 duplicate commands for 200 containers)
b) Reliability (e.g. fewer moving parts when coordinating many dockers)
c) Ease of development
I don't have a set opinion, but wanted to write this down for later discussion.
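As it later turned out, labels plus filters cover much of this batch-operation use case from the client side, even though the daemon never made the set operation atomic (a sketch with invented label values; labels arrived well after this issue):

```bash
# Tag containers with arbitrary metadata at creation time...
docker run -d --label service=web --label deploy=v1 nginx
# ...then operate on the whole matching set in one pipeline, instead of
# running 200 hand-assembled commands for 200 containers:
docker ps -q --filter "label=service=web" --filter "label=deploy=v1" | xargs -r docker stop
```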