WIP: Allow users to specify what command to run tests #89
Conversation
@bparees @rhcarvalho @smarterclayton what do you think? I think this is simple, allows customization via other env vars, and can be used by any language (python, perl, etc). It does not require users to write their own custom assemble scripts, just to tell us what command to run. (Groups 1/2 are the target for this feature.)
I'm wary of introducing this standard instead of solving the core problem. In effect you're introducing a post-assemble hook, but only for this one image. Plus, if I supply my own assemble script that doesn't include this logic, I lose the capability. I think it would be better to support this (a way to define a hook) directly in s2i.
@bparees my worry is about making post-hooks too generic and having to tell people "you have to use the post-assemble hook", IOW write a shell script that we call during the build process, which we will have to document. If you provide your own assemble script, then you must know what you're doing; IOW you are replacing all the core functionality we provide in the assemble script (installing dependencies, asset compilation, permission fixing, etc.). This is just a WIP, I can do the same for all the other images we have (just want to open the discussion ;-)
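As a sketch, the tail of an image's `assemble` script could implement the proposed behavior roughly like this (the function name and exact placement are illustrative, not the actual script):

```shell
#!/bin/sh
# Hypothetical tail end of an S2I assemble script (sketch only).
# By this point dependencies are installed and assets are compiled.

run_user_tests() {
  # If the user supplied a test command via RUN_TEST, run it now.
  # A non-zero exit aborts assemble, and therefore the whole build.
  if [ -n "$RUN_TEST" ]; then
    echo "---> Running test command: $RUN_TEST"
    if ! sh -c "$RUN_TEST"; then
      echo "---> Tests failed, aborting build" >&2
      return 1
    fi
  fi
}

run_user_tests
```

Because the command runs inside `assemble`, it sees the same environment variables as the rest of the build, which is what lets something like `-e RUN_TEST="bundle exec rake test"` work without image-specific wiring.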
@bparees how is having "post-assemble" in s2i better than having this variable? Another problem I see is that having post-assemble execute after assemble is done might not always be what I want. Assemble might clean up things I need to execute tests, it might mess up permissions, etc. Also, if the assemble script cleans caches etc., it is possible that "post-assemble" might again install some testing tools, which will then require another cleanup after the test run...
Why couldn't s2i support an env variable to define the post-hook command to run? The "execute assemble" logic for s2i would be slightly more complicated (run assemble, then run the hook command).
At that point (when you need to run tests specifically before/after some particular step of assemble), you're into writing your own assemble script anyway. And again, I'd rather not add more rules to "what assemble scripts are expected to do".
The problem with running tests "after" assemble finishes (IMHO) is that you don't know in what state the assemble script left the container. Maybe you want to do a cleanup after you build the application, but you can't do that in assemble now, because you are going to execute tests in a separate step and the things you cleaned up are required for testing... Yeah, the old images will not support it unless we update the assemble scripts, which is a blocker for my approach ;-)
I see two options for how this can work in S2I then: 1) run the tests as part of the assemble run, or 2) run the tests in a separate, clean container after the build.
I like 2) more because it runs the tests in a clean environment, does not leave testing deps behind, allows setting a special env for testing, etc...
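A plain-shell illustration of option 2 (all paths, files, and the fake "test dep" are made up for the demo, not anything s2i actually does): the tests run in a disposable copy of the assembled app, so test-only dependencies never end up in the final artifact.

```shell
#!/bin/sh
# Illustration of "option 2": run tests in a throwaway copy of the
# assembled application so the final artifact stays clean.
set -e

APP=$(mktemp -d)                 # pretend this is the assembled app
echo 'puts "hello"' > "$APP/app.rb"

TEST_ENV=$(mktemp -d)            # disposable clean test environment
cp -R "$APP/." "$TEST_ENV/"
touch "$TEST_ENV/test-only-dep"  # installed only for the test run

# Run the (stand-in) test command inside the disposable copy.
( cd "$TEST_ENV" && sh -c "${RUN_TEST:-true}" )

rm -rf "$TEST_ENV"               # throw the test environment away

# The final artifact never saw the test dependency.
test ! -e "$APP/test-only-dep" && echo "final artifact is clean"
```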
I don't think this is specific to s2i. So whatever solution we have has to work for the other build strategies too.
I'm also assuming that most tests will require some runtime environment.
@smarterclayton for rails unit tests you most likely don't want to hit the database, which is not the case in integration tests (where you usually seed some local filesystem database like SQLite). For simple scenarios (plain unit testing), something like this PR should suffice.
For group 2 (unit + integration) we can support more advanced labels which, in addition to the existing unit tests, allow us to generate testing environments to run integration tests (generate a temporary project, deploy a database/other services)... The question is whether labels are the right way, or whether we want to augment the API.
yeah, we're really discussing a multitude of scenarios that require different solutions at this point.

scenario 1 is "i want to run basic unit tests as part of my build, within the container doing the build".

scenario 2 (the one I think @smarterclayton is hinting towards) is "i want to perform some sort of testing using the image that just got built". for that you need a post-openshift-build hook (which could be as simple as a DeploymentConfig that's triggered by the newly built image today, but could be something more explicitly defined on the BuildConfig that could fail the Build in the future). for that I agree s2i, docker, and custom builds should all be supported, but that's not what I think we're trying to solve here, and I think there's still value in implementing scenario (1) even if we also end up having (2), because (1) is going to be much faster/more efficient and probably simpler for a user to configure.

scenario 3 is "i need a full topology deployed, including other images like a DB, to run some tests". i don't know that we solve that with a first-class api today; certainly I think we should be solving 1+2 first, anyway.
Customizing assemble scripts is not really an option. It doesn't scale to every user. I don't see how s2i and docker are fundamentally different - if you provide a command, we can execute it in either.
executing in your built image is scenario 2, for which i agree there's no difference and we'd support both. scenario 1 is executing tests as part of the build process (ie before the image gets committed: maybe you don't want to include unit test dependencies in your final image, making it impossible to run unit tests on the output image, or maybe you just don't want to commit/push the image if the unit tests failed). in that case there is a fundamental difference between an s2i and a docker build that makes it very different to implement, and why i suggest only supporting scenario 1 for s2i builds. scenario 1 is really where this PR started off, so i was trying to focus on that.
So then the follow-up question for scenario 1 is - why would scenario 1 be s2i-only? Where would the command default to running from (what directory)? What environment would it run in? I would assume in both of these scenarios that, in order for the executed command to work, its author has to know something about the image.
I suppose it doesn't have to be; you're just saying "implement scenario 2, but if they're doing an s2i build, optimize it by running tests in the build container instead of committing the container first and running the new image". We can start by just implementing the generic solution and then add the s2i optimization (as discussed for scenario 1) later. btw, scenario 2 sure sounds like build-hooks to me, but we previously concluded we didn't want to implement build-hooks (https://trello.com/c/Sk3aSNxG/516-13-build-lifecycle-hooks). Do we think we now have a valid use case for a post-build-hook (similar to a post-deploy-hook, except this would potentially fail the build if it fails and prevent pushing the image)?
Yes, it's a possibility. It sounds similar, for sure, although I feel better about it backed with a concrete design. How do we enable the easy case while making it possible to handle the more complex ones?
yes, the "what's my working directory, where is the source, what's my path, etc etc" question is a good one, though with the amount of flexibility we've provided i don't see an easy solution. yes, our s2i images can follow some basic patterns, but it still seems likely the person writing the test cmd/script is going to have to know something about the image they just built/are building. anyway, i'm going to suggest we tackle scenario 2/build-hooks in lieu of the "trigger a build when a build completes" card this sprint, so we can discuss it more during planning.
@bparees we can assume that the working directory is the directory where the application source was extracted.

I think the common pattern (I found it in blogs) is that people run their unit tests in the Dockerfile. For us that means we don't need to do anything extra for those folks; IOW the image does not even build when the tests don't pass. Another approach can be "I already have the image built and I want to execute tests before I deploy it", thus I want to start the image with a different entrypoint before deploying and, based on the results, decide whether to deploy or not.

Integration is tricky, see this article (some of the points that guy is making are valid here). In order to do what he proposes as a "better way", we will need a way to spawn a temporary project, deploy all services there, and make them available for the test. I don't think chaining builds the "naive way" will help accomplish this; maybe we need something more robust, like a BuildPipeline object in the API that tracks configured builds and kicks a Job/Build/whatever when the configured builds finish and all the images are pushed. Another way could be to track different ImageStreams (IOW, when all these N ImageStreams get a new image pushed, trigger a build/kick a job).
For s2i builds, we can use the directory the source was injected into. For Docker builds, running it in the current WORKDIR is probably the best bet. I think those are the dirs from which we can expect the test command to be runnable.
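A sketch of how a test runner could pick its working directory under these assumptions; `/opt/app-root/src` mirrors a common source location in s2i images, but the exact path differs per image, so the default here is only an assumption:

```shell
#!/bin/sh
# Sketch: choose where to execute a user-supplied test command.
# /opt/app-root/src reflects a common s2i convention but is not
# guaranteed; real images may inject source elsewhere.

TEST_WORKDIR="${TEST_WORKDIR:-/opt/app-root/src}"

run_test_cmd() {
  # Prefer the conventional source dir; fall back to the current
  # directory (the Docker-build WORKDIR case) if it does not exist.
  dir="$TEST_WORKDIR"
  [ -d "$dir" ] || dir="$PWD"
  ( cd "$dir" && sh -c "$1" )
}
```

For example, `run_test_cmd "bundle exec rake test"` would execute the tests from the source directory when it exists, and from the current directory otherwise.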
Since I worked on both 516-build-lifecycle-hooks and 628-trigger-downstream-builds-after-build-completion, I'd like to drop some thoughts:
@mfojtik 👍 This was one of the points from our discussion late last year.
@rhcarvalho I hate having the configuration stored in SCM, because Jenkins also does not store it there (you define shell steps as part of the Jenkins jobs). However, the Procfile can be substituted by either an image LABEL or the OpenShift API (I would prefer LABELs, as they are more generic and people can build tools around them). Requiring people to modify their original repositories seems wrong. I agree that the common way to run tests with the Dockerfile/Docker build strategy is to run them as part of your Dockerfile, but again, not everyone will do that (think about a Docker build that just copies the source into the image, where the source is already assembled, in which case you don't control the Dockerfile (other than using the Dockerfile strategy to hack it up ;-))
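For illustration, the LABEL variant could look like this in an image's Dockerfile; the label key `io.openshift.s2i.test-command` is hypothetical, nothing in this thread fixes the actual name:

```dockerfile
# Hypothetical image label advertising how to run the tests.
# The key name is made up for illustration; a real implementation
# would have to standardize it.
FROM centos/ruby-22-centos7
LABEL io.openshift.s2i.test-command="bundle exec rake test"
```

Tooling could then read the label off the built image and decide how to run the tests, without anything extra stored in SCM.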
@smarterclayton @bparees thinking ahead we would probably want to be able to make this flow possible:
I think the end goal is to allow a smooth transition between 1-2-3-4, so whatever we start with in step 2) must allow scaling the number of tests and incrementally adding integration/e2e tests to my flow.
Later on, when I want to move to Jenkins, we can easily use this metadata to construct Jenkins jobs.
Simple integration is best handled as a pod. Full e2e / complex testing is probably a job for a real CI system like Jenkins.
As a point of discussion, having all s2i images leave their source in the working directory would at least make the test command's location predictable.
Closing, as we have the post-commit build hook now.
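For readers landing here later: the post-commit build hook runs a command in a temporary container spun up from the freshly built image, and a non-zero exit fails the build so the image is never pushed. A minimal `BuildConfig` excerpt (the `myapp` name is a placeholder):

```yaml
# BuildConfig excerpt showing the post-commit hook that superseded
# this PR. "myapp" is a placeholder name.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  postCommit:
    script: "bundle exec rake test"
```

Newer `oc` clients can also set this from the command line, e.g. `oc set build-hook bc/myapp --post-commit --script="bundle exec rake test"` (availability depends on the client version).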
This PR will allow users to specify what command to run as a "test" command by setting the `RUN_TEST` variable. The command specified by this environment variable will be executed by the `assemble` script once all dependencies are installed. If this variable is set and the command fails (returns non-zero), the assemble will fail.

For S2I, this variable can be specified on the command line with `-e RUN_TEST="bundle exec rake test"`; in OpenShift, you can specify it in the `BuildConfig` as any other environment variable. In OpenShift, when the tests fail, the image will not be produced.

This also allows passing any additional variables to the build to customize the test run (`$RUN_TEST` picks up any variable specified for `assemble`).