
User contributable container features #8

Closed
5 of 10 tasks
chrmarti opened this issue Feb 1, 2022 · 25 comments
Comments

@chrmarti
Contributor

chrmarti commented Feb 1, 2022

Collecting the remaining tasks here.

Required for "self-hosting" built-in features:

  • Cache feature downloads in the 'persistent folder'. (Discussed a lock-less approach to that with @joshspicer.)
  • Use baked-in versions initially, but also check for updates. This requires a versioning scheme to avoid old versions of the implementation updating to feature versions that use newly introduced functionality.

Required for finalizing the devcontainer-features.json:

  • Revisit include/exclude: These can currently only target the definition id as stored on a Docker image label. (The original definition id is not stored in the devcontainer.json.) Should we investigate reading /etc/os-release to determine compatibility? (See the sketch after this list.)
  • Move settings/extensions properties to VS Code-property. Similar to #1.
  • Remove old properties like buildArg.
  • Decide whether a single env file is good enough or whether to split it up per feature. (Affects layer caching.)
  • devcontainer features: Dependency Management #16
    - How do we handle an inevitable dependency graph?
    - e.g. 1: In the kitchensink image, we'll perhaps want to install Node 14 and Node 16. How do we specify "installing a feature twice", and which one ends up on the $PATH?
    - e.g. 2: Installing two features where the installation of A depends on the installation of B (JupyterLab and Python).
  • Naming volumes (see here)
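
For the /etc/os-release point above, a minimal sketch of what a compatibility check inside a feature's install.sh could look like (the distro list and messages are illustrative, not a proposed spec):

# Read the distro id/version instead of relying on include/exclude definition ids.
. /etc/os-release
case "${ID}" in
    debian|ubuntu)
        echo "Detected supported distribution: ${ID} ${VERSION_ID}"
        ;;
    *)
        echo "Unsupported distribution: ${ID} (${PRETTY_NAME})" >&2
        exit 1
        ;;
esac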

User ease of adoption:

Related: #7

/cc @2percentsilk @bamurtaugh

@Chuxel
Member

Chuxel commented Feb 3, 2022

These are good. Recent discussions bring a couple more to mind:

  • Entrypoints - Ideally we'd be able to bake these into the image so that "docker run" gets them, and the original entrypoint is still fired when overrideCommand != true (which is related to wanting this to be able to run as PID 1); see the sketch after this list. Whatever design we land on, we'd want to describe the behavior of the property here for reference.
  • Finalize and document what should happen when a feature is run twice (e.g., once pre-built and once at runtime, or simply because you reused an image and didn't know the feature had already been run).
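
On the entrypoint point, a rough sketch of one possible shape (paths and script names are hypothetical): a wrapper entrypoint baked into the image runs the feature's init step and then execs the original command, so plain "docker run" still picks it up and the original command ends up as PID 1.

#!/usr/bin/env bash
# Hypothetical wrapper entrypoint a feature could bake into the image.
set -e
/usr/local/share/my-feature/feature-init.sh   # hypothetical per-feature init step
exec "$@"                                     # original entrypoint/command replaces the wrapper and runs as PID 1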

There are also a few others that we can probably create separate proposals for (I think):

  • Embedding metadata about executed features as labels on pre-built images. Right now you have to re-reference a feature in devcontainer.json even when it is pre-built into the image. This actually causes the double-run problem, and could basically be avoided by adding a label with the contents of the features property used to pre-build; a rough sketch follows below. It's pretty common to put JSON in labels from what I have seen.
  • Whether we intend to support features for multi-container scenarios, and if so, what that looks like. (The existing Docker Compose pattern would only build in features for one of the containers... unless you pre-built them all)
  • Whether we want to support automatic selection of a pre-built version of an image so you do not have to maintain separate devcontainer.json files.

That said, embedding metadata could be our answer to the double-run issue and would reduce the duplicate info in separate devcontainer.json files for pre-build versus run - so that could end up being the proposed solution for these.
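
As a rough illustration of the label idea (the label name and JSON shape are hypothetical, not a decided format):

# At pre-build time, record the features that were baked in as a JSON label.
docker build \
  --label 'devcontainer.features={"docker-in-docker":{"version":"latest"}}' \
  -t my-prebuilt-devcontainer .

# At container-create time, tooling could read the label back and skip re-running those features.
docker inspect --format '{{ index .Config.Labels "devcontainer.features" }}' my-prebuilt-devcontainer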

@joshspicer
Member

The order in which the tooling installs features should be documented (right now it's "no obvious order"). We've talked about this before and should perhaps discuss again whether this is something we want to enforce in some layer (the tooling, the declaration order in the .json file, or not at all).

@chrmarti
Contributor Author

chrmarti commented Feb 3, 2022

When the order doesn't matter, we could use that to improve performance. E.g., when adding a feature, we could try to leverage the cached image from a previous build containing all but the added feature and build a new image from it with just the new feature on top.
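
A minimal sketch of that idea (image tags and Dockerfile names are hypothetical):

# First build: features A and B are installed and the result is cached.
docker build -t devcontainer:features-a-b .

# Later, feature C is added: build from the cached image so only C's layer is new.
docker build -t devcontainer:features-a-b-c \
  --build-arg BASE_IMAGE=devcontainer:features-a-b \
  -f Dockerfile.feature-c .

# Dockerfile.feature-c (hypothetical):
#   ARG BASE_IMAGE
#   FROM ${BASE_IMAGE}
#   RUN /tmp/build-features/feature-c/install.sh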

@Chuxel
Member

Chuxel commented Feb 3, 2022

One other one I just remembered is unique names for volumes. This is useful for devcontainer.json, but is a bit of a must-have for features given you can't change the volume name. (This is microsoft/vscode-remote-release#5679.)

@joshspicer
Member

When the order doesn't matter, we could use that to improve performance. E.g., when adding a feature, we could try to leverage the cached image from a previous build containing all but the added feature and build a new image from it with just the new feature on top.

Good point

@joshspicer
Member

Updated the issue description with additional goals.

@Chuxel
Member

Chuxel commented Feb 15, 2022

I raised #10 and referenced the question on multi-container setups for features there - it also covers the embedding-metadata idea.

@stuartleeks

For volumes, what about adding a new variable type? I.e. along with things like ${localEnv:SOME_VAR}, allowing ${devcontainer:name}?

Then the volume spec in a feature could be something like:

 "mounts" : [
	{
		"source":"${devcontainer:name}-dind-var-lib-docker", 
		"target":"/var/lib/docker", 
		"type":"volume"
	}
 ]

@chrmarti
Contributor Author

For volumes, what about adding a new variable type? I.e. along with things like ${localEnv:SOME_VAR}, allowing ${devcontainer:name}?

The user might not be aware of the devcontainer name being used for the volume name. We could use something like the image name we compute from the local folder path. (Related: microsoft/vscode-remote-release#5679.)

@stuartleeks

@chrmarti - that sounds good to me :-)

It would have the unique-per-dev-container property and also be resilient to name changes 👍

@joshspicer
Member

Split #16 into its own issue, as I think it will need additional discussion.

@rchaganti

Not sure if this was discussed already, but I am looking for the ability to install a specific version of an extension from within a feature. For example, I created an Azure Bicep feature that is capable of installing a specified version of the CLI. However, there is no capability (either in devcontainer-features.json or devcontainer.json) to specify a version of the extension. The Azure Bicep CLI and the VS Code extension go hand in hand, so it is important that we match those versions.

I tried downloading the VSIX and then extracting it to the VS Code Server folder, but that did not work. Maybe this is something we can consider as a change in general?

@chrmarti
Contributor Author

@rchaganti Would it make sense for the extension to support multiple CLI versions? When a user installs the extension from the marketplace, they always get the latest version, independent of which version of the CLI they have installed.

@Chuxel
Member

Chuxel commented Mar 22, 2022

  • Entrypoints - Ideally we'd be able to bake these into the image so that "docker run" gets them, and the original entrypoint is still fired when overrideCommand != true (which is related to wanting this to be able to run as PID 1). Whatever design we land on, we'd want to describe the behavior of the property here for reference.

Broke this particular bit out into a separate proposal (#19).

@rchaganti

@chrmarti Yes, it is possible that an extension may support previous versions of the CLI for backward compatibility. At the same time, there may be breaking changes in an updated version of the extension that no longer work with a previous version of the CLI.

@metaskills

Should we investigate reading /etc/os-release to determine compatibility?

I was trying to make a "common-amzn" feature using an include like the one below, but it seems this did not work as expected. Did I do something wrong, or is this bullet meant to address making a feature like this work in the future? See the error I got below, which suggests I got the /etc/os-release id correct.

{
  "id": "common-amzn",
  "name": "Install common tools for Amazon Linux 2",
  "include": ["amzn"]
}
#7 [3/4] RUN cd /tmp/build-features/local-cache && chmod +x ./install.sh && ./install.sh
#7 sha256:6fb15a4ba4aead9638b0c2c04bb9d786bc96688c47a1b4ce23ebb553256c0e3e
#7 1.165 
#7 1.165 *********** Unsupported operating system "amzn" detected ***********
#7 1.165 
#7 1.165 Features support currently requires a Debian/Ubuntu-based image. Update your 
#7 1.165 image or Dockerfile FROM statement to start with a supported OS. For example:
#7 1.165 mcr.microsoft.com/vscode/devcontainers/base:ubuntu

@joshspicer
Member

@metaskills Today, all of the features we offer are designed to be built on top of our published Ubuntu base image (although we expect it should work on most Debian-based images).

This operating system check is done against whatever base image the devcontainer is trying to install features on top of.

Also, the "include" in this case is referring to the set of images we publish (mapping to the sub-directories here, and act only as hints for the Remote-Containers UX. You can omit "include" for your purposes (@chrmarti, we should look into removing/updating includes and exclude?)

What is the output of your /etc/os-release for the base image you're using (or if you can, share the image information/dockerfile directly)?

@metaskills

Also, the "include" in this case is referring to the set of images we publish

Yes, but I had to use amzn here or it would not run at all. See details on /etc/os-release below for why.

What is the output of your /etc/os-release for the base image you're using

I'm using AWS Lambda images, which come in runtime/build variants. An example would be public.ecr.aws/sam/build-nodejs16.x, and its output is below, which is why you see the Unsupported operating system "amzn" detected message above, given the ID. If it helps, I've also found issues with using Amazon Linux 2 in other places, like the /workspaces/.codespaces/.persistedshare/installSSH.sh script, but I'm not going into them here since this is a features discussion. I did want to share, though.

NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
VARIANT_ID="202204251638-2.0.799.0"

@metaskills

FYI, I've had to resort to executing a shell script: I copied common-debian.sh and tweaked it for Amazon Linux 2. You may find it interesting: https://github.com/customink/codespaces-features
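
For context, a minimal sketch of the kind of branching such a script needs (package names are illustrative):

# Pick the package manager based on /etc/os-release instead of assuming apt.
. /etc/os-release
if [ "${ID}" = "amzn" ]; then
    yum install -y git curl tar gzip sudo
else
    apt-get update && apt-get install -y git curl sudo
fi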

@Chuxel
Member

Chuxel commented Jun 30, 2022

Given there is now a dev container features proposal, I'd propose we close this out in favor of other issues. I think we can raise one tracking feedback on the current proposal -- this is a bit of a task list instead. Any objections?

We've got the architecture and OS point captured as #58 - I totally think that makes sense.

@bamurtaugh
Member

Makes sense to me!

@edgonmsft
Contributor

Yeah, I think it can be closed because of that one, plus this one that still needs to be discussed: #40

@Chuxel
Member

Chuxel commented Jun 30, 2022

Yeah, #40 ties to #7 as well. I tweaked the description there since it's roughly in the same spirit, and mentioned #40.

@Chuxel Chuxel closed this as completed Jun 30, 2022
@Chuxel
Member

Chuxel commented Jun 30, 2022

Opened #61 as a spot for comments and also mentioned other existing issues spawned out of the core proposal.
