Run unit tests as part of build flow #6758
@bparees @mfojtik @PI-Victor @smarterclayton @liggitt @deads2k I'd like to take the API discussion from #6715 to this issue, and also include the @openshift/ui-review team: to effectively achieve our objective of making it easy to include unit tests as part of a build, UI work will have to be done, as that's what end users will mostly interact with.
I don't see unit tests as part of a build. I see the build completing, pushing to its image stream, and then having that trigger a series of tests. That flow fits with most systems I've worked with. The build and its output are considered successful regardless of test passes. Then a series of different tests are used to vet the build, adding information until various boundaries are passed that allow promotion to additional test activity. You could even use
@deads2k that ship has sailed (during extensive discussions about what this feature/card is for), so please don't derail this discussion. The feature is:
It should be clear from something in the API response for the build that the reason for the build failure was the unit-tests step, and not that the build itself failed; preferably not just a block of status details text. I'm thinking a "reason" field? You may want to visually flag these failures differently in the UI.
@jwforres 👍 I'd love to see builds that failed unit tests "colored" differently -- there are a few "new" cases to consider:
@rhcarvalho are you going to update the build annotation to deliver the message to the UI?
@rhcarvalho now who's feature bloating...
So we won't limit their execution time, but should we allow them to set a "kill my build's unit tests if they take longer than X amount of time" option? The only reason would be that quota-ed users may not want to block their whole build queue with one borked test run. Obviously they could do this themselves in their test commands, but having a guarantee from the system that they will get killed can be comforting 😄
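The "do it yourself in the test command" option mentioned above can be sketched as follows, assuming the image ships GNU coreutils' `timeout` (the helper name and the 600-second limit are illustrative, not part of any proposal here):

```shell
#!/bin/sh
# Bound a test run from inside the build's test command. `timeout` returns
# exit status 124 when the wrapped command is killed for exceeding the limit.
run_tests_with_limit() {
  limit="$1"
  shift
  timeout "$limit" "$@"
  status=$?
  if [ "$status" -eq 124 ]; then
    echo "tests killed after exceeding ${limit}s limit" >&2
  fi
  return "$status"
}

# Usage, e.g. as the build's test command:
#   run_tests_with_limit 600 rake test
```

A system-level guarantee would still be stronger, since this relies on the user's image actually providing `timeout`.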
Fortunately it already exists; we'd just need to add more possible values: origin/pkg/build/api/v1/types.go, line 61 (at 2670ca0).
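Adding a value to that existing "reason" field could look roughly like the sketch below. The type shape mirrors how such status fields are typically declared in origin's API types; the constant names here are illustrative, not the project's actual identifiers:

```go
package main

import "fmt"

// BuildStatusReason mirrors the shape of the build status "reason" field in
// origin/pkg/build/api/v1/types.go (names below are assumptions for
// illustration, not the real constants).
type BuildStatusReason string

const (
	// Catch-all failure reason for builds that fail outright.
	ReasonGenericBuildFailed BuildStatusReason = "GenericBuildFailed"
	// Hypothetical new value: the image built fine, but the unit-test
	// (post-commit) step failed. This is what would let the UI color
	// test failures differently from build failures.
	ReasonPostCommitHookFailed BuildStatusReason = "PostCommitHookFailed"
)

func main() {
	fmt.Println(ReasonPostCommitHookFailed)
}
```

With distinct reason values, the console can branch on the field instead of parsing status-detail text.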
@rhcarvalho I'm not really clear on the point of this issue. If you want to drive the discussion about what the API for defining the test-cmd should be, let's do that in a specific discussion, not one that revisits the entire feature. (The UX discussion is a fine one to have, but that doesn't seem like your most pressing concern for getting the card done, right?)
@jwforres there's already cancel build.
Yeah @rhcarvalho, we already have cancel build in the UI. This was from the perspective of: my build queue is running overnight with no one watching it, so nothing was able to finish. But yep, that's fine; it was just a suggestion.
@bparees +1
So, when we started to talk about a container API, modelling this after ExecNewPodHook (DeploymentConfig lifecycle hooks), I think things started to look more complex than they need to be, based on the scope originally discussed.

We have previously aborted Build lifecycle hooks #3674, which were modeled after DC hooks. The implementation was exactly in that direction: generic fields that let you run any image with any argument, entrypoint, volumes, env, etc. Our next step was to target something more focused: Start builds after a build completes #6311. It had a clearly narrower scope, and all the API suggestions targeted solving a single problem. We put there the elements that we needed, nothing more, and still kept in mind that we might add more things in the future. And eventually we aborted it.

This is the third try at solving a very particular use case. I wouldn't be surprised if this time our implementation is even less generic, but made to solve one problem very well while leaving room for additive changes. All the CI tools we looked at give you a single free-form string field that is run as a bash script. I've also seen people mess up passing string arrays to cmd and entrypoint. Sure, we could do that, but why complicate?
```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: example
  namespace: demo
spec:
  output:
    to:
      kind: ImageStreamTag
      name: example:latest
  resources: {}
  source:
    git:
      uri: https://github.com/openshift/example
    type: Git
  strategy:
    dockerStrategy: {}
    type: Docker
  # postCommit: specify commands to be run in the output Docker image after its
  # last layer is committed and before the image is pushed to a registry. Use
  # this to run unit tests.
  postCommit:
    # runBash: most people will be fine with running their unit tests within a
    # Bash script, as that's what they are used to in the tools they use today.
    # Bash scripts are flexible, and can be simple and powerful. While it is
    # easy to call 'rake test', you're not limited in any way to perform more
    # advanced actions if need be.
    runBash: rake test
    # run: if there really are interesting user-driven use cases where the
    # image entrypoint needs to be preserved, or where the user needs to
    # explicitly specify a custom entrypoint, then she uses "run" and not
    # "runBash".
    run:
      # entrypoint: if omitted, the image's entrypoint will be preserved. Maps
      # directly to the entrypoint option when starting a Docker container.
      entrypoint: ["custom", "value"]
      # cmd: maps directly to the cmd option when starting a Docker container.
      cmd: ["custom", "value"]
      # if need be, this can grow to match the Docker Container spec, including
      # 'env', 'volumes', etc. What I am proposing is
```
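For concreteness, the runBash form above would amount to running the command through a shell, overriding whatever entrypoint the image has (the docker invocation below is my assumption of the translation, and the image name is made up):

```shell
# Hypothetical translation of `runBash: rake test` into a container run:
#
#   docker run --rm --entrypoint /bin/bash example:latest -c 'rake test'
#
# The key property of the "bash -c" form: shell features such as pipes and
# variable expansion work, unlike a plain exec of an argument vector.
out=$(/bin/bash -c 'echo rake test | tr a-z A-Z')
echo "$out"
```

A plain exec of `["echo", "rake test | tr a-z A-Z"]` would print the pipe characters literally instead of running the pipeline.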
I don't think @deads2k described something complicated and out of scope. I mentioned it before to @mfojtik: a job would be ideal for running my unit tests. Once indexed jobs are implemented, they may be even more valuable for running tests in parallel. I wouldn't promote the image in case of failed unit tests, though.
@Kargakis I'll relay to you the same points I made to @deads2k: (09:51:39 AM) bparees: "i want to run rake test as part of my build process". The fact that there might be a way to kick off a job based on a build completing does not solve those stories, and it still requires me to know how to define yet another API object and tie it together with something. Build(success)->Deployment is a well-defined flow in the product today that has a complete UX around it. Build->Job execution->Deployment is not, and we are not signing up to solve that flow/UX with this card.
Let's not plan to have two parallel ways to describe running a command. I think @smarterclayton's comments at https://github.com/openshift/origin/pull/6715/files#r50259107 and https://github.com/openshift/origin/pull/6715/files#r50259303 stand.
I would guess some kind of post-hook mechanism in builds that would push the image on successful completion.
So I've got to have a buildconfig that explicitly starts a job, then waits for that job to complete before taking the next action? That's definitely complicated and out of scope relative to what we are doing here. Also, as I mentioned to @deads2k when we discussed this offline, you require additional quota if you're going to launch additional pods. This approach does not require additional quota from the user because we know we're running operations in serial and can use your quota serially.
I would mostly agree, except that we are not doing an exec, so using the ExecAction API directly as @smarterclayton suggests is going to be misleading to users, particularly if they read the docs, which state "The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work." Otherwise I'd agree that the ExecAction structure is fine.

For 99% of our users, who will be using our s2i images, the existing entrypoint plus their "rake test" cmd[] will get exactly the behavior they want, in a simple way, and we don't need to do anything else immediately.

The next thing I'd expect us to need to add would be support for an explicit entrypoint, primarily to handle a scenario like: "I'm doing a docker strategy build of a jenkins image. My Dockerfile sets the ENTRYPOINT to "java -jar jenkins.jar", but I also want to run a post-build command to check some stuff in the image, which means I need to override the ENTRYPOINT so it doesn't just start jenkins and ignore my test commands." But even if we don't offer that, they can do what everyone else does and have their ENTRYPOINT be a shell script that, for no arguments, runs "java -jar jenkins.jar" and, when invoked with arguments, runs a shell with those arguments.
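The wrapper-ENTRYPOINT pattern described at the end of that comment can be sketched as a small script (the jenkins command and path are illustrative; it is written as a function here so it can be read standalone, but in an image it would be the ENTRYPOINT script itself):

```shell
# Wrapper suitable as an image ENTRYPOINT: with no arguments, start the app;
# with arguments (e.g. a post-build test command), run those instead.
entrypoint() {
  if [ "$#" -eq 0 ]; then
    exec java -jar /usr/lib/jenkins/jenkins.jar  # illustrative app command
  fi
  exec "$@"
}

# e.g. `entrypoint` starts jenkins, while `entrypoint rake test` runs tests.
```

This keeps `docker run image` behaving like "run the app" while still letting a build system pass a test command through the same entrypoint.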
@bparees +1
Jobs are not the right solution. A build config is a logical chunk of work, and build configs resulting in jobs being created might be a future enhancement, but not today.
Entrypoint is not needed and would be an antipattern here (it's the image run statement, not a hook). The exec action struct is not critical, but the similarity in shape is. I would probably phrase it as a subclass / variant of ExecAction.
IIUC what @smarterclayton is saying, I agree that setting an entrypoint is an antipattern; I haven't yet seen a good reason not to document and always use "bash -c".
@bparees IMHO this wouldn't be a problem if we had a fixed entrypoint. No conditionals, no multiple paths and possibilities to document, less confused users. If we can make it work for all images that contain bash, why limit ourselves to images that set the entrypoint to "something convenient"?!
I don't think we can/should force users to change the way they build their images / write their Dockerfiles just to satisfy our API to run unit tests.
@liggitt, the "future" I mentioned in my previous comment/proposal might simply never happen. To contrast: I want a PaaS that does what I need in a concise and easy-to-understand way, not one that has millions of features that I don't understand how to use. I'm going to update the PR based on the feedback so far. Will be making it look more like
Maybe this was too broad a comment. What I had in mind specifically was YAGNI: |
Agree with @bparees, the
For Docker builds, I expect the Dockerfile to set entrypoint and/or cmd such that running a container off the image will "run the app": start a database server, web server, etc. Our assumption was that people will not have to change their source repo (with their Dockerfile) in order to use this feature. Running a container with user-provided input as Docker's "cmd" and using whatever entrypoint is in the image does not solve this use case IMHO.
I'm sticking with "if that becomes a common use case, we will add an entrypoint field to our ExecAction struct". This is how I see the scenarios:
Sweet, now we just need the CLI for setting it.
@smarterclayton https://trello.com/c/YpZLKSAX/790-3-add-ability-to-specify-pre-post-deployment-hooks-as-part-of-new-app-evg but it likely won't drop this sprint (or for 3.2 in general).
@jwforres FYI this has been merged; it would be nice to have in the Web Console as well to make it easy to use.
@rhcarvalho There's a lot of history here and in the PR. Any doc or summary of how it works?
@spadgett docs will hopefully be ready today/tomorrow: openshift/openshift-docs#1448
@spadgett apart from the docs link above, please also look at the screenshots in the original issue description. We want to make sure it's easy to specify and edit the command to be run, and that the working directory is clear. In case of using the script form, we should also make clear that it will be run using
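For reference, the shape of the merged post-commit hook (as I understand it from the docs PR linked above; the field summaries here should be verified against those docs) offers mutually exclusive forms under `postCommit`:

```yaml
postCommit:
  script: "rake test"          # run via a shell; closest to the runBash idea
# or, instead of script:
#   command: ["rake", "test"]  # replaces the image entrypoint
#   args: ["test"]             # arguments appended to the entrypoint (or to command)
```

The console would mainly need to surface the `script` form, since that is the one most users are expected to reach for.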
We want to support developers and teams using OpenShift to implement CI/CD workflows.
We want to close gaps and offer seamless transitions from simply running unit tests to more complex scenarios including integration with Jenkins.
This issue is meant to discuss how we add support for running unit tests as part of the build flow.
Requirements:

- easy to specify the test command(s) to run (e.g. `rake test`);
- no extra files required in the source repository (`.travis.yml` files are discarded);

Related Trello cards:
Assumptions

- unit tests run in a container started from the output image (e.g. `docker run image@sha test command`);

How existing CI tools work w.r.t. running unit tests
Drone.io

- `npm test` would be enough 👍
- (what if users need to `export FOO=bar` in their list of commands to run?)

Circle CI

Travis

- https://docs.travis-ci.com/user/customizing-the-build/#Customizing-the-Build-Step
- requires a `.travis.yml` file 👎
- "`script` is processed by a special bash function." (predictable experience) 👍