Implement Devfile Deployment support #2471

Closed · 8 tasks done
rajivnathan opened this issue Dec 17, 2019 · 22 comments · Fixed by #2589
Assignees: rajivnathan
Labels: area/devfile-spec, kind/feature

rajivnathan commented Dec 17, 2019

Related to #2470

Description

The purpose of this issue is to develop functionality that accepts a Devfile and creates a running representation of its components that can be used for development of a project. This is the first requirement for the devfile push command (#2470).

High level design

  • Use the Devfile Parser (Implement devfile parser #2477) to deserialize the Devfile into a DevfileObj
  • Define a DevfileAdapter interface that describes the functions used to create a representation of the Devfile on the adapter's target platform (see the sketch after this list)
    • The initial DevfileAdapter interface can include Start and Stop functions (Start to be used by the push command and Stop by the delete command)
    • Platform examples: Kubernetes, Docker
  • Implement a Kubernetes DevfileAdapter that accepts a DevfileObj as context
    • Implement a Start function that creates a Deployment that contains a Pod with a container for each of the supported components
    • Only components of type dockerimage are considered at the moment
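
As a rough Go sketch of the proposed interface (names are illustrative, not final; DevfileObj stands in for the output type of the parser from #2477):

package adapters

// DevfileObj is the deserialized Devfile produced by the Devfile parser (#2477).
type DevfileObj struct {
	// ... parsed Devfile contents
}

// DevfileAdapter creates and tears down a platform-specific running
// representation of the components in a Devfile.
type DevfileAdapter interface {
	// Start creates the running representation (used by the push command).
	Start() error
	// Stop removes it again (used by the delete command).
	Stop() error
}

// KubernetesAdapter would implement DevfileAdapter by creating a Deployment
// whose Pod has one container per supported (dockerimage) component.
type KubernetesAdapter struct {
	Devfile DevfileObj
	// ... plus a Kubernetes client, namespace, component name, etc.
}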

Here is an example snippet of the components section of a Devfile:

components:
  -
    type: chePlugin
    id: redhat/java/latest
    memoryLimit: 1512Mi
  -
    type: dockerimage
    image: websphere-liberty:19.0.0.3-webProfile7
    alias: runtime
    env:
      - name: MAVEN_CONFIG  
        value: ""
    memoryLimit: 768Mi
    endpoints:
      - name: '8080/tcp'
        port: 8080
    mountSources: true
    volumes:
      - name: m2
        containerPath: /home/user/.m2runtime
  -
    type: dockerimage
    alias: tools
    image: quay.io/eclipse/che-java8-maven:nightly
    env:
      - name: MAVEN_CONFIG  
        value: ""
      - name: JAVA_OPTS
        value: "-XX:MaxRAMPercentage=50.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10
          -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
          -Dsun.zip.disableMemoryMapping=true -Xms20m -Djava.security.egd=file:/dev/./urandom
          -Duser.home=/home/user"
      - name: MAVEN_OPTS
        value: $(JAVA_OPTS)
    memoryLimit: 768Mi
    endpoints:
      - name: '8080/tcp'
        port: 8080
    mountSources: true
    volumes:
      - name: m2
        containerPath: /home/user/.m2tools

Given a Devfile with a components section similar to this, the Kubernetes DevfileAdapter Start function should create a Deployment that has a Pod with 2 containers:

  1. websphere-liberty container
  2. che-java8-maven container

Here's a mockup of what the Pod created by that Deployment would look like:

apiVersion: v1
kind: Pod
metadata:
  labels:
    deployment: websphere-liberty-fjza-app-1
  name: websphere-liberty-fjza-app-1-vxj4f
  namespace: myproject
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: websphere-liberty-fjza-app-1
      uid: d3d6ec84-46d6-11ea-9afc-00000a1f025e
  resourceVersion: "52777902"
  selfLink: /api/v1/namespaces/myproject/pods/websphere-liberty-fjza-app-1-vxj4f
  uid: ed9adf69-46cf-11ea-9afc-00000a1f025e
spec:
  automountServiceAccountToken: true
  containers:
  - args:
    - -f
    - /dev/null
    command:
    - tail
    image: websphere-liberty:19.0.0.3-webProfile7
    imagePullPolicy: Always
    name: runtime
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        memory: "805306368"
      requests:
        memory: "805306368"
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  - image: quay.io/eclipse/che-java8-maven:nightly
    imagePullPolicy: IfNotPresent
    name: tools
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        memory: "805306368"
      requests:
        memory: "805306368"
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File

Implementation Plan

Acceptance Criteria

  • It should create a Deployment based on the Devfile
  • It should create Container specs based on the Devfile component values (e.g. memoryLimit, environment variables, etc.)
  • It should not create a Deployment if one already exists
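
For the last criterion, a minimal sketch of the "create only if absent" check, assuming the client-go API of the time (no context argument on Get/Create); createIfAbsent is a hypothetical helper name:

package adapters

import (
	appsv1 "k8s.io/api/apps/v1"
	kerrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createIfAbsent creates the Deployment only if it does not already exist.
func createIfAbsent(client kubernetes.Interface, namespace string, d *appsv1.Deployment) error {
	_, err := client.AppsV1().Deployments(namespace).Get(d.Name, metav1.GetOptions{})
	if err == nil {
		return nil // a Deployment with this name already exists; do nothing
	}
	if !kerrors.IsNotFound(err) {
		return err // some other error (permissions, connectivity, ...)
	}
	_, err = client.AppsV1().Deployments(namespace).Create(d)
	return err
}
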
rajivnathan changed the title from "Implement create Pod and Deployment for Devfile support" to "Implement Devfile Deployment support" on Dec 17, 2019

kadel commented Dec 18, 2019

/area devfile

@openshift-ci-robot added the area/devfile-spec label on Dec 18, 2019
@girishramnani added the kind/feature label on Jan 6, 2020
@rajivnathan

/assign @rajivnathan


kadel commented Feb 3, 2020

How is the Deployment going to look?
Are we going to take a "fat" Pod approach?
Are we going to reuse SupervisorD to ensure that the application can be restarted without restarting the whole Pod?

It would be nice to take an example devfile.yaml and show what the produced Deployment would look like.

@rajivnathan

@kadel good questions. Yes, we are taking the "fat" Pod approach. The example I pointed out in the High Level Design section mentions it will create a Deployment that has a Pod with 2 containers. I will update that to show what the Deployment will look like to make it more clear.

Re: SupervisorD, we could, but if we override the entrypoint so that the container will not exit (e.g. tail -f /dev/null), then scripts can be used to start/stop the runtime within the container, removing the requirement that a runtime process must be running at all times. However, if we do override the entrypoint, we should probably give users a way to change that in the Devfile, so that a user can still use the SupervisorD approach if they wish.
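
For illustration, the container spec for the override approach could be built like this (a minimal Go sketch; the devContainer helper is hypothetical):

package adapters

import (
	corev1 "k8s.io/api/core/v1"
)

// devContainer builds a container whose entrypoint is overridden so that the
// container never exits on its own; the runtime is then started and stopped
// by exec'ing scripts inside the container.
func devContainer(name, image string) corev1.Container {
	return corev1.Container{
		Name:    name,
		Image:   image,
		Command: []string{"tail"},            // override the image's entrypoint
		Args:    []string{"-f", "/dev/null"}, // block forever without doing work
	}
}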

I think there are benefits and drawbacks to both approaches.

SupervisorD approach:

  • viewing the container logs will show the output of the script that SupervisorD is running, which is likely the output the user wants to see
  • requires access to the init image registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:1.0.2 or a locally available copy of it
  • would this work for all images, e.g. alpine, etc.?

Override entrypoint approach:

  • can view output of individual commands, but container logs are empty by default (can investigate whether we can do something better)
  • more flexible in terms of images we can support (doesn't require scripts to be in specific places)

What do you think?


gorkem commented Feb 4, 2020

@kadel I think this is a good time to reconsider SupervisorD. I am not sure if it is awfully useful in this new direction. Devfile already provides a way to override entrypoints. I would rather have us depend on the devfile and stack declarations instead of trying to hack a generic enough solution on top of SupervisorD.


sspeiche commented Feb 4, 2020

Agree with @gorkem, we should showcase the hot-redeploy features of the high-priority runtimes and could use devfiles/stacks to do that. If we need to provide a default approach, perhaps SupervisorD could play a role.


kadel commented Feb 4, 2020

I agree that SupervisorD complicates things quite a bit.
But one big benefit that a SupervisorD-like approach brings is the ability to restart the application without restarting the whole Pod. I'm not sure we will be able to do this without something like SupervisorD.

@cmoulliard

But one big benefit that a SupervisorD-like approach brings is the ability to restart the application without restarting the whole Pod

Is this something that end users need or expect to have? What I mean is that when users work with a dev Pod created using supervisord, or with a container using the tail -f /dev/null entrypoint trick, they understand that they will interact with a Pod where they push, build, debug, test or run their runtime.

From this point of view, the CLI tool itself can remotely execute the command to restart the application after the build step, if the application can be started. That command could be executed using the supervisord ctl client or simply kubectl exec.

See for example what we tested for Quarkus: https://github.com/halkyonio/container-images/tree/master/sandbox/hal2#compile

@cmoulliard

How is the Deployment going to look?

The deployment could be as simple as this example: https://github.com/halkyonio/container-images/blob/master/sandbox/hal2/deploy/05-dc-b.yml#L18-L41

Next, the odo tool could use the commands defined within the Devfile to execute them remotely.

Example

  • Java build -> mvn package -Dlocal.maven.repo=/volume/mounted/to/store/maven/repo
  • Run java application -> mvn quarkus:dev | java -jar target/my.jar

@maxandersen

+1 on removing, or at least making the supervisord approach 100% optional, as it is only relevant when working with the absolute lowest common denominator of images, with zero enablement of a good developer experience.

odo/devfile could even go further and not even require (but still support) explicit commands like this in the devfile, but get the info from labels/annotations on the containers/pods to drive it.

Then the devfiles can be very small/concise, and their behavior controlled (as a default that can be overridden) by what is actually deployed.

@gorkem

gorkem commented Feb 4, 2020

@maxandersen labels/annotations are also something I have in mind. I suppose we should have it in the odo repo as a separate issue too, but I want to build those into the specification of what stacks are.

@sspeiche

sspeiche commented Feb 4, 2020

@cmoulliard right, we should use the specialized support in the runtimes.

For the generic "we don't have a built-in way" case, SupervisorD could be supplied. If there are annotations or ENV vars (like ODO and Appsody use) or labels/tags, those could work as well.

@cmoulliard

ENV vars (like ODO and Appsody) or labels/tags

I don't see why we need k8s labels or even k8s ENV vars, as the commands to be executed remotely should be passed to the tool (odo, hal, ...) or the framework (jshift, ...) communicating with the dev Pod started using the tail -f /dev/null trick.

@sspeiche

sspeiche commented Feb 5, 2020

The question is how does the tool know what commands to run. The idea is to have the basic flow be:

  1. If devfile or stack definition yaml, use it
  2. If image doesn't have devfile/stack, look for something from the runtime/builder image provided (tag or ENV var?)
  3. Fall back to supervisord or some other default approach.

Are we not saying the same thing here? I could see a case where step 2 was about a tool for a specific runtime (nodeshift) that just orchestrated the right things.

@cmoulliard

If you also integrate our Runtime CRD, which also interests IBM Appsody, then you could define your flow as such:

  • If devfile or stack definition yaml, use it
  • If a Runtime CRD is created for the component's runtime (e.g. quarkus, spring boot, vert.x, thorntail, nodejs), then select the fields:
    • commands: to know, for the different operations (run/build/test/debug), what command should be executed remotely (oc exec ...)
    • image: the image used to create the dev Pod, packaging the tools needed by the runtime (gradle, maven)
    • toImage: the image name to be created if the outer loop is executed using JIB, Tekton, ...
  • If no devfile or runtime is defined but just an image, look for something from the runtime/builder image provided (labels)
  • Fall back to supervisord* or some other default approach.

*: the supervisord config, which includes the commands to be executed, could also be created from the different resources we described here.

@metacosm @sspeiche @arthurdm


arthurdm commented Feb 5, 2020

I believe that odo projects should be given a choice to create (from a template) a YAML file that leverages a particular runtime Operator to carry out the actual deployment, instead of just the raw k8s Kind: Deployment.

The advantage is the set of capabilities you now have access to, with a much lower knowledge bar. For example: how to set up auto-scaling, knative deployments, certificate management interaction, service binding, peer recovery, SSO integration, etc.

Plus the benefits of day-2 operations being available, such as these from the Open Liberty Operator.

This is the approach we took with Application Stacks in the Appsody project. Each stack has a default Operator (Appsody Operator) for the deployment phase, and we're now working on being able to configure that per stack / project. The Application Runtimes Operator would be a good candidate for a default Operator.

@cmoulliard

The advantage is the set of capabilities you now have access to, with a much lower knowledge bar. For example: how to set up auto-scaling, knative deployments, certificate management interaction, service binding, peer recovery, SSO integration, etc.

This is exactly what the hal tool does: for example, the interactive mode asks the end user some questions (runtime/version, capability, isService => OpenShift Route or Ingress, ...) that are then used to populate the Component CRD:

hal component create quarkus-rest-1 
? Runtime quarkus
? Version 1.1.1.Final
? Expose microservice Yes
? Port 8080
? Use code generator (Y/n) n
...


kadel commented Feb 19, 2020

I believe that odo projects should be given a choice to create (from a template) a YAML file that leverages a particular runtime Operator to carry out the actual deployment, instead of just the raw k8s Kind: Deployment.

We can expand it later. Currently, it is more important to get it working with a regular Deployment, as that is what will be most widely used.


kadel commented Feb 20, 2020

@rajivnathan Do you have an idea of how we are going to handle application starts and restarts, and triggering the commands defined in the devfile in general?

@rajivnathan

@kadel Do you mean you're worried about the container exiting? Since by default we will override the entrypoint to something that won't cause the container to exit, commands can be used to start/stop the application. If a devfile writer decides to use their own entrypoint, then that entrypoint would need to ensure the container stays up; however, that may be a less than ideal user experience. Maybe we can do something like detecting whether a container has exited when a push begins, and restart the pod?

As for execution of the actual commands, I think we discussed executing the commands that are referenced by the push command, e.g. something like odo push --command="build" --command="run"?


kadel commented Feb 25, 2020

@kadel Do you mean you're worried about the container exiting? Since by default we will override the entrypoint to something that won't cause the container to exit, commands can be used to start/stop the application. If a devfile writer decides to use their own entrypoint, then that entrypoint would need to ensure the container stays up; however, that may be a less than ideal user experience. Maybe we can do something like detecting whether a container has exited when a push begins, and restart the pod?

I mean something a little bit different.
odo push has multiple stages. For the first push:

  1. create the Deployment based on information in Devfile
  2. upload the source code into the container (volume)
  3. build the source code in the container (execute the build command as defined in Devfile)
  4. run the code that was built in the previous step.

step 3) is easy, as all we need to do is the equivalent of kubectl exec.
step 4) is problematic, as the application process needs to stay running even after odo "disconnects" from the container or cluster; it needs to be forked to the background, so we can't use the same mechanism as for the "build" command. Commands for running applications in the Devfile are usually long-running processes (no daemon or fork to background), so the process execution will be tied to the API call that is the equivalent of kubectl exec.
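
For reference, step 3 maps to the exec subresource in client-go; a minimal sketch, assuming a configured rest.Config and clientset (execInContainer is a hypothetical helper):

package adapters

import (
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInContainer runs a command in a running container, the equivalent of
// kubectl exec. This covers step 3; for step 4 the process dies with the
// stream unless it daemonizes or is forked to the background.
func execInContainer(config *rest.Config, client kubernetes.Interface,
	namespace, pod, container string, command []string, out, errOut io.Writer) error {

	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Name(pod).Namespace(namespace).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   command,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}
	// Stream blocks until the remote command exits or the connection drops.
	return exec.Stream(remotecommand.StreamOptions{Stdout: out, Stderr: errOut})
}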

then for subsequent pushes it will be:

  1. updating the Deployment if needed (if Devfile changed)
  2. uploading the changed source code
  3. trigger build again
  4. restart the application without restarting the container

To do step 4) properly, we will have to be able to identify the correct process. For example, odo will have to know the PID, or have some other way to control the running process.

@rajivnathan

@kadel That's going to be tricky. I don't think we have a good solution for making it work for existing Devfiles whose commands are long-running processes (no daemon or fork to background). While doing the POC with IDPs, we used nohup for commands that needed to continue running in the background. This worked when I tested it in Che: when I ran the command without nohup, closing the command output in Che (effectively terminating the shell) also terminated the command's process, but after modifying the command's script to use nohup I was able to close the command in Che and leave the server running. For that to work, though, we would require stack owners to do the same, which is not a great experience for stack creators.

It would be better if we could do something at the odo level, like running the command with nohup to ensure it can continue in the background. I suppose we could try to wrap any command with nohup if it's not already using it, but I'm not sure whether there would be problems with that approach. Any other ideas?
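
To make the idea concrete, wrapping at the odo level might look something like this (a rough sketch only; wrapForBackground is hypothetical and real quoting/escaping is glossed over). Recording the PID would also give odo the process handle @kadel mentioned for restarts:

package adapters

import (
	"fmt"
	"strings"
)

// wrapForBackground turns a long-running devfile command into one that
// survives the exec session ending: nohup detaches it from the terminal,
// and the PID is recorded so a later push could stop or restart it.
func wrapForBackground(cmd string) []string {
	if strings.Contains(cmd, "nohup") {
		return []string{"/bin/sh", "-c", cmd} // the stack already handles backgrounding
	}
	wrapped := fmt.Sprintf(
		"nohup %s > /tmp/odo-run.log 2>&1 & echo $! > /tmp/odo-run.pid", cmd)
	return []string{"/bin/sh", "-c", wrapped}
}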

As for update, wouldn't it be the stack that should know how to handle updates correctly? In some cases it's not necessary because it's built into the runtime, e.g. in the case of Liberty, stop/start commands can be used and it tracks the PID on its own. In other cases we don't want to restart the process if it isn't required, because the runtime supports hot deployment.
