WIP: Allow users to specify what command to run tests #89

Closed
wants to merge 2 commits into from

Conversation

mfojtik
Contributor

@mfojtik mfojtik commented Jan 3, 2016

This PR allows users to specify which command runs as a "test" command by setting the RUN_TEST variable.

The command specified by this environment variable is executed by the assemble script after all dependencies have been installed. If this variable is set and the command fails (returns non-zero), the assemble fails as well.

For S2I this variable can be specified on the command line with -e RUN_TEST="bundle exec rake test"; in OpenShift you can specify it in the BuildConfig like any other environment variable.
In OpenShift, when the tests fail, the image will not be produced.

This also allows passing any additional variables to the build to customize the test run ($RUN_TEST picks up any variable specified for assemble).
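For illustration, a minimal sketch of how the assemble script could honor this variable (the variable name RUN_TEST comes from this PR; the placement and surrounding steps are assumptions, not the actual diff):

    #!/bin/bash
    # Sketch: assemble script honoring an optional RUN_TEST command.
    set -e

    # ... install dependencies, compile assets, fix permissions ...

    # If the user supplied a test command, run it once the dependencies
    # are installed; a non-zero exit aborts the assemble (and the build).
    if [ -n "$RUN_TEST" ]; then
      echo "---> Running test command: $RUN_TEST"
      eval "$RUN_TEST"
    fi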

@mfojtik
Contributor Author

mfojtik commented Jan 3, 2016

@bparees @rhcarvalho @smarterclayton what do you think? I think this is simple, allows customization of other env vars, and can be used with any language (Python, Perl, etc.). It does not require users to write their own custom assemble scripts; they just tell us what command to run. (Groups 1/2 are the target for this feature.)

@mfojtik force-pushed the run_test branch 2 times, most recently from 1f377be to 8639cf4 on January 3, 2016 at 12:38
@bparees
Collaborator

bparees commented Jan 3, 2016

I'm wary of introducing this standard vs. solving the core problem of allowing pre/post assemble scripts to be provided.

In effect you're introducing a post-assemble hook, but only for this specific image, and calling it a test hook when really it could be used for anything.

Plus if I supply my own assemble script that doesn't include this logic, I lose this functionality.

I think it would be better to support this (a way to define a hook) directly in s2i, and call it post-assemble instead of run-test.

@mfojtik
Contributor Author

mfojtik commented Jan 3, 2016

@bparees my worry is about making post-hooks too generic and having to explain to people "you have to use a post-assemble hook", IOW write a shell script that we call during the build process, which we will have to document.
The core (and more explicit) problem IMHO is allowing users to execute unit tests. What other common use case(s) do we have for post-assemble?

If you provide your own assemble script, then you must know what you're doing. IOW you are replacing all the core functionality we provide in the assemble script (including installing dependencies, asset compilation, permission fixing, etc...).
In that case, you can rename RUN_TEST to whatever you want and execute it as part of your assemble process.

This is just a WIP; I can do the same for all the other images we have (just want to open the discussion ;-)

@mfojtik
Contributor Author

mfojtik commented Jan 3, 2016

@bparees how is having "post-assemble" in s2i better than having a RUN_TEST env var? To me, it is one extra script I will have to maintain in my Git repo, called "post-assemble", which does not say what the script is really doing (executing tests).

Another problem I see is that having post-assemble execute after assemble is done might not always be what I want. Assemble might clean up things I need to execute the tests, it might mess up permissions, etc. Also, if the assemble script cleans caches and the like, it is possible that post-assemble will again install some testing tools, which will then require another cleanup after the test run...

@bparees
Collaborator

bparees commented Jan 3, 2016

Why couldn't s2i support an env variable to define the post-hook command to run?

The "execute assemble" logic for s2i would be slightly more complicated (run the assemble script and then run the command defined in the env var), but it would then work just like this PR, only for all images and all assemble scripts.
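A rough sketch of what that builder-side logic could look like (the POST_ASSEMBLE_CMD name is hypothetical, not an existing s2i option, and the script path varies per image):

    #!/bin/bash
    # Sketch: generic post-assemble hook driven by an env var and executed
    # by the builder, independent of any particular image's assemble script.
    set -e

    # Run the image's assemble script as s2i does today (path is illustrative).
    /usr/libexec/s2i/assemble

    # Then run the user-supplied command, if any; failure fails the build.
    if [ -n "$POST_ASSEMBLE_CMD" ]; then
      echo "---> Running post-assemble command: $POST_ASSEMBLE_CMD"
      eval "$POST_ASSEMBLE_CMD"
    fi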

@bparees
Collaborator

bparees commented Jan 3, 2016

At that point (when you need to run tests specifically before/after some assemble step like cleanup) you're into custom assemble scripts, IMHO. If you have those sorts of problems, how do you know we injected the RUN_TEST invocation in the right place for a given use case anyway?

And again, I'd rather not add more rules to "what assemble scripts are responsible for doing", or make each image more unique in terms of what it does/doesn't support. (Yes, we can add this to all the s2i images we control, but we'll also have to get the Middleware team to update theirs, third-party images still won't do it, old versions of images won't support it, etc. I think the "built into the platform universally" approach is more maintainable and consumable.)

@mfojtik
Contributor Author

mfojtik commented Jan 3, 2016

The problem with running tests "after" assemble finishes (IMHO) is that you don't know what state the assemble script left the container in. Maybe you want to do a cleanup after you build the application, but now you can't do that in assemble, because you are going to execute tests in a separate step and the things you cleaned up are required for testing...

Yeah the old images will not support it unless we update the assemble scripts, which is a blocker for my approach ;-)

So the approach could be:

  1. Add a --test-cmd="bundle exec rake test" option (and alternatively a --test-env=FOO=bar option to define the environment passed to the testing container).
  2. Support S2I_TEST_CMD (and S2I_TEST_ENV), which will work the same as in this PR, so we don't need to add more API fields to the S2I BuildConfig.

I see two options for how this can work in S2I:

  1. Start a container, copy source, run the assemble script and then run the $TEST_CMD in the same container and commit the image (and set the labels based on the testing result).
  2. Start a container, copy source, run the assemble script, commit the image (as we do now). Then start a new container and run the test in that container and record the results in origin image labels.

I like 2) more because it runs the tests in a clean environment, does not leave testing deps behind, allows setting a special env for testing, etc...
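As a sketch, option 2) could look roughly like this from the command line (the --test-cmd/--test-env flags are the ones proposed above and do not exist in s2i today; the repository and image names are placeholders):

    # Build as today, then run the tests in a fresh container from the
    # committed image and record the result as image labels.
    s2i build https://github.com/example/myapp.git ruby-22-centos7 myapp \
      --test-cmd="bundle exec rake test" \
      --test-env=RAILS_ENV=test

    # Conceptually equivalent to:
    #   s2i build https://github.com/example/myapp.git ruby-22-centos7 myapp
    #   docker run --rm -e RAILS_ENV=test myapp bundle exec rake test
    #   ... label the image with the test result based on the exit code ...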

@smarterclayton

I don't think this is specific to s2i. So whatever solution we have has to work for Docker builds and any other build type we create in the future. It also has to work for extended builds.

@smarterclayton

I'm also assuming that most tests will require some runtime environment. For things like Java, you will need Tomcat. For Rails, some integration tests and e2e tests will require a started server with a DB. You also need a DB for most Rails unit tests.

@mfojtik
Contributor Author

mfojtik commented Jan 3, 2016

@smarterclayton for Rails unit tests you most likely don't want to hit the database, which is not the case for integration tests (where you usually seed some local fs database like SQLite).
For small projects you most likely just want to run some basic unit tests as part of your build process. When things get more complex, you probably still want to run unit tests for each component and then an integration suite across all the components (same for e2e).
Do we want to separate these cases, or do we want something universal enough to run all of them? If so, then a Build itself might not be enough to accomplish this exercise and we will have to generate more objects to construct a testing environment where the tests can run.

For simple scenarios (simple unit testing):

  1. The only common ground for all build scenarios is the "builder" itself (s2i, docker, dockerfile, custom, ...)
  2. To run tests after we build the image, we will have to first produce the image and then run it with a user-defined entrypoint (rake test) and a user-defined environment (test env vars). I think this is similar to $ docker run fooapp bundle exec rake test
  3. For that, we can add first-class API fields to BuildConfig that accomplish this, or we can search for output image labels that define the testing configuration for a particular image. S2I can produce these labels based on the environment; the docker and dockerfile strategies can generate them in the builder.
  4. If we choose labels, then the S2I scripts can remain unchanged; we just set the labels users define, invoke the tests after the image is built, and record the result (as a label/annotation?).
  5. If the builder sets the "test result" label on the ImageStreamTag, we can configure the Deployment to deploy only when the tests have passed.

For group 2 (unit + integration) we can support more advanced labels which, in addition to the existing unit tests, allow us to generate testing environments to run integration tests (generate a temporary project, deploy the database/other services)...

The question is whether labels are the right way, or whether we want to augment the API.

@bparees
Collaborator

bparees commented Jan 4, 2016

Yeah, we're really discussing a multitude of scenarios that require different solutions at this point.

Scenario 1 is "I want to run basic unit tests as part of my build, within the container doing the build":
For running basic unit tests, IMHO, we should only be solving that at the s2i API level. If you want to run unit tests as part of your docker or custom image build, then you modify your Dockerfile or custom image to do that. There's no sane way for us to dynamically inject unit test execution into a user's Dockerfile or custom image. For this scenario @mfojtik has raised valid questions of "when in the build process should tests be executed?", since always running the user's test script/commands after the build is done might be undesirable. But I'm still inclined to prefer that somewhat inflexible approach over having to customize every assemble script and potentially still not putting the test invocation in the right place, depending on what the user is trying to do.

Scenario 2 (the one I think @smarterclayton is hinting towards) is "I want to perform some sort of testing using the image that just got built". For that you need a post-openshift-build hook (which could be as simple as a DeploymentConfig that's triggered by the newly built image today, but could be something more explicitly defined on the BuildConfig that could fail the Build in the future). For that I agree s2i, docker, and custom builds should all be supported, but that's not what I think we're trying to solve here, and I think there's still value in implementing scenario (1) even if we also end up having (2), because (1) is going to be much faster/more efficient and probably simpler for a user to configure.

Scenario 3 is "I need a full topology deployed, including other images like a DB, to run some tests". I don't know that we solve that with a first-class API today; certainly I think we should be solving 1+2 first, anyway.

@smarterclayton

Customizing assemble scripts is not really an option. It doesn't scale to the product - instead, it's something that we could standardize for an image author in the future, but it doesn't solve the problem across the board. You can also do that in a docker image. So let's just say that it is possible, but not a solution for the general problem (certainly not something someone who isn't the image author can solve).

I don't see how s2i and docker are fundamentally different - if you provide the command to run and execute your tests in your built image, what does s2i vs. docker have to do with it?

@bparees
Collaborator

bparees commented Jan 4, 2016

I don't see how s2i and docker are fundamentally different - if you provide the command to run and execute your tests in your built image, what does s2i vs. docker have to do with it?

Executing in your built image is scenario 2, for which I agree there's no difference and we'd support both.

Scenario 1 is executing tests as part of the build process (i.e. before the image gets committed; maybe you don't want to include unit test dependencies in your final image, making it impossible to run unit tests on the output image, or maybe you just don't want to commit/push the image if the unit tests failed). In that case there is a fundamental difference between an s2i and a docker build that makes it very different to implement, which is why I suggest only supporting scenario 1 for s2i builds.

Scenario 1 is really where this PR started off, so I was trying to focus on that.

@smarterclayton

So then the follow-up question for scenario 1 is: why would scenario 1 be specified (to a consumer of OpenShift) differently than scenario 2? Does the user actually care, as long as they can easily run their tests?

Where would the command default to running from (what directory)? What substitutions would I allow (env or argument)? If every S2I image can run from a different location, does that mean the location has to be defaulted? How does the user know what the source directory is? The class path?

I would assume in both of these scenarios that, in order for the executed script to be useful, the user has to be in the right directory and have some understanding of the environment of the image. Having to guess at that (or worse, mandating that the image follow some specific pattern) will make scenario 2 more frustrating for end users.

@bparees
Collaborator

bparees commented Jan 4, 2016

So then the follow up question for scenario 1 is - why would scenario 1 be specified (to a consumer of openshift) differently than scenario 2? Does the user actually care, as long as they can easily run their tests?

I suppose it doesn't have to be; you're just saying "implement scenario 2, but if they're doing an s2i build, optimize it by running the tests in the build container instead of committing the container first and running the new image". We can start by just implementing the generic solution and then adding the s2i optimization (as discussed for scenario 1) later.

BTW, scenario 2 sure sounds like build hooks to me, but we previously concluded we didn't want to implement build hooks (https://trello.com/c/Sk3aSNxG/516-13-build-lifecycle-hooks). Do we think we now have a valid use case for a post-build hook (similar to a post-deploy hook, except this would potentially fail the build if it fails and prevent pushing the image)?
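Conceptually, such a post-build hook would gate the push on the test result, along the lines of the following sketch (the image name and test command are placeholders):

    #!/bin/bash
    # Sketch: post-build hook semantics - run the user's command against the
    # freshly built image and only push it if the command succeeds.
    IMAGE=registry.example.com/myproject/myapp:latest   # placeholder
    TEST_CMD="bundle exec rake test"                     # placeholder

    if docker run --rm "$IMAGE" $TEST_CMD; then
      docker push "$IMAGE"
    else
      echo "post-build hook failed; image will not be pushed" >&2
      exit 1
    fi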

@smarterclayton

Yes, it's a possibility.

It sounds similar, for sure, although I feel better with it backed by a real practical use case. Part of the concern I'm trying to convey is that it needs to be a) easy for someone to set up ("bundle exec rake test") and b) able to deal with the fact that no image is the same internally (i.e. as you noted, a docker build X and an S2I build Y may not have anything in common internally, and even two implementations of Tomcat builds could be vastly different in terms of directories). If you can't just say "bundle exec rake test" we've failed, and if we can't handle "well, I need this one env var set and access to the volume" we've probably also failed.

How do we enable the easy case, make more complex execution possible, allow image authors to follow some guidelines (to make it easier), and end up with a solution that actually solves a real use case?

@bparees
Collaborator

bparees commented Jan 4, 2016

yes the "what's my working directory, where is the source, what's my path, etc etc" question is a good one though with the amount of flexibility we've provided, i don't see an easy solution.

yes our s2i images can follow some basic patterns, but it still seems likely the person writing the test cmd/script is going to have to know something about the image they just build/are building.

anyway i'm going to suggest we tackle scenario 2/build-hooks in lieu of the "trigger a build when a build completes" card this sprint, so we can discuss it more during planning.

@mfojtik
Contributor Author

mfojtik commented Jan 4, 2016

@bparees we can assume that the working directory is the WORKDIR; other information has to be provided somehow, whether it is a LABEL on the image, BuildConfig fields, or environment variables.

I think the common pattern (I found in blogs) is that people run their unit tests in the Dockerfile. For us that means we don't need to do anything extra for those folks; IOW, the image does not even build when the tests don't pass. Another approach can be "I already have the image built and I want to execute tests before I deploy it, thus I want to start the image with a different entrypoint before deploy and, based on the results, decide whether to deploy or not."

Integration is tricky, see this article: http://blog.venanti.us/testing-a-microservice-architecture (some of the points that guy is making are valid here). In order to do what he is proposing as a "better way", we will need a way to spawn a temporary project, deploy all services there, and make them available for the test.
The hard part here is how we know what tests to execute, how to configure the services (they might run in a different mode/with fake data/etc.), and how we make sure we rebuild all components before we kick off the tests (think about an application consisting of N different micro-services coming from one code base).

I don't think chaining builds the "naive way" will help to accomplish this; maybe we need something more robust, like a BuildPipeline object in the API that tracks configured builds and kicks off a Job/Build/whatever when the configured builds finish and all the images are pushed.

Another way could be to track different ImageStreams (IOW, when all these N ImageStreams get a new image pushed, trigger a build/kick off a job).

@rhcarvalho
Contributor

the "what's my working directory, where is the source, what's my path, etc etc" question

For s2i builds, we can use $HOME as that's where your code goes (and takes a "contextDir" into account).

For Docker builds, running it in the current WORKDIR is probably the best bet.

I think those are the dirs from which we can expect bundle exec rake test to work.
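For example, under that assumption the same command would be launched differently for the two build types (the image name is a placeholder):

    # s2i-built image: run the tests from the application source under $HOME
    docker run --rm myapp sh -c 'cd "$HOME" && bundle exec rake test'

    # Docker-built image: run the tests from the image's WORKDIR
    docker run --rm myapp bundle exec rake test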

i'm going to suggest we tackle scenario 2/build-hooks in lieu of the "trigger a build when a build completes" card this sprint

Since I worked on both 516-build-lifecycle-hooks and 628-trigger-downstream-builds-after-build-completion, I'd like to drop some thoughts:

  • In the scope of the first card we drafted a proposal ([WIP] Build hooks proposal openshift/origin#3736) that eventually did not get merged. We had in mind mimicking functionality from Deployment Configs, and we concluded that adding the build hooks in that fashion would not improve the user experience in any of the use cases we had in mind, including running tests after build and triggering downstream builds (the conclusions are available in the proposal draft in that PR).

  • The second card was one of a few follow-ups to the first, as we decided to split each use case and investigate specific solutions to improve the usability. We started with "start downstream builds", but we had in mind there could be multiple post build actions, including running tests.

  • We clearly don't have in place the pieces that would make the "start downstream builds" worth it: we need a way to demo a complete and reasonable flow like building artifacts in a first Build, and later having a downstream build that consumes the artifacts from the exact build that triggered it.

  • Scenario 1, Docker builds: I think the easiest way to run unit tests is to put them in your Dockerfile. That's the most obvious solution and works in OpenShift or anywhere else where you can do a docker build. For S2I, I like being able to be descriptive and to keep configuration versioned with the source code. I generally like the "Procfile" approach, though we might want to adapt it to what we already have, which are the scripts under ./.sti. In that direction, we could invest in making it easier to consistently refer to the original scripts, so my assemble script could be like:

    #!/bin/bash
    set -e
    /path/to/original/assemble
    
    bundle exec rake test

    Not as cool as just saying "bundle exec rake test" somewhere, though.
    In the Procfile-way, we'd say something like this:

    test: bundle exec rake test
    run: bundle exec serve-my-app
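
    A builder could then pick the test entry out of such a file and run it, along these lines (a sketch, assuming a Procfile-like file at the source root):

    # Sketch: extract and run the "test:" entry from a Procfile-like file.
    test_cmd=$(sed -n 's/^test:[[:space:]]*//p' Procfile)
    if [ -n "$test_cmd" ]; then
      eval "$test_cmd"
    fi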

@rhcarvalho
Contributor

we will need a way to spawn a temporary project, deploy all services there and make them available for the test.

@mfojtik 👍

This was one of the points from our discussion late last year.

@mfojtik
Contributor Author

mfojtik commented Jan 4, 2016

@rhcarvalho I hate $HOME because it can change under your hands and it depends on the UID and the /etc/passwd entry. The WORKDIR is also set for all S2I images AFAIK, so I will stick with that. I think it is safe to assume (at least for S2I images) that the current dir (workdir) is the directory we fire up the tests from. We can support (we should) an image LABEL that explicitly sets the "application root directory" in case you are using a Docker image as a builder that does not have WORKDIR set, or has it set somewhere other than your application root.

I hate having configuration stored in SCM, because Jenkins also does not store it there (you define shell steps as part of the Jenkins jobs). However, the Procfile can be substituted by either an image LABEL or the OpenShift API (I would prefer LABELs as they are more generic and people can build tools around them). Requiring people to modify the original assemble script is the wrong way (see the comments before) and it means we failed to provide a generic solution.

I agree that the common way to run tests in the Dockerfile/Docker build strategies is to run them as part of your Dockerfile, but again, not everyone will do that (think about a Docker build that just copies already-assembled source into the image, in which case you don't control the Dockerfile, other than using the Dockerfile strategy to hack it up ;-)).
So we will need a way in our builder (docker builder) to say what we should execute when the image is created (a label? a piece of configuration in the API? an env variable?)

@mfojtik
Contributor Author

mfojtik commented Jan 4, 2016

@smarterclayton @bparees thinking ahead, we would probably want to make this flow possible:

  1. I start with a simple application
  2. I grow and hire more developers, so I need a way to add unit tests to my application so we don't break each other's code
  3. Now my application grows and I have to add integration and e2e tests in addition to unit tests
  4. Now I want to have control and a record of what is tested and how, so I add CI (Jenkins/CircleCI/etc.) into the flow

I think the end goal is to allow a smooth transition between 1-2-3-4, so whatever we start with in step 2) must allow scaling the number of tests and incrementally adding integration/e2e tests to my flow.
So I think an additive configuration method like LABEL might work here:

  1. LABEL io.openshift.unit-test-cmd "bundle exec rake test"
  2. LABEL io.openshift.integration-test-cmd "bundle exec rake spec"
  3. LABEL io.openshift.e2e-test-cmd "bundle exec cucumber"

Later on, when I want to move to Jenkins, we can easily use this metadata to construct the Jenkins jobs.
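As a sketch, tooling (our builder today, generated Jenkins jobs later) could read those labels back from the built image and run the corresponding suites (the label names are the ones proposed above; everything else is illustrative):

    #!/bin/bash
    # Sketch: drive test execution from the proposed image labels.
    set -e
    IMAGE="$1"   # e.g. myapp:latest

    for label in io.openshift.unit-test-cmd \
                 io.openshift.integration-test-cmd \
                 io.openshift.e2e-test-cmd; do
      cmd=$(docker inspect --format "{{ index .Config.Labels \"$label\" }}" "$IMAGE")
      if [ -n "$cmd" ] && [ "$cmd" != "<no value>" ]; then
        echo "---> Running $label: $cmd"
        docker run --rm "$IMAGE" sh -c "$cmd"
      fi
    done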

@smarterclayton

Integration is tricky, see this article
http://blog.venanti.us/testing-a-microservice-architecture (some of the
point that guys is making are valid here). In order to do what he is
proposing as a "better way" we will need a way to spawn a temporary
project, deploy all services there and make them available for the test. The
hard part here is how we know what tests to execute, how to configure the
services (they might run in different mode/with fake data/etc.) and how we
make sure we rebuild all components before we kick the test (think about
application consisting from N different micro-services coming from one code
base).

Simple integration is best handled as a pod. A full e2e / complex test is probably best handled with a template into another project. However, I'm not convinced that is necessary (yet).

@smarterclayton

As a point of discussion, all s2i images leaving their source in the committed image by default is acceptable. Someone who cares about the size of the built image is either using extended/binary builds (so that they don't have to install Maven) or is willing to customize a docker image. Someone doing that already has a complex pipeline and can add source wherever.

@mfojtik
Contributor Author

mfojtik commented Apr 4, 2016

Closing, as we have the post-commit hook now.

@mfojtik mfojtik closed this Apr 4, 2016
hhorak pushed a commit to hhorak/s2i-ruby-container that referenced this pull request Aug 13, 2020
Verify all installed packages using rpm -V