
Build Stages: Flexible and practical Continuous Delivery pipelines #11

Closed
joshk opened this issue Mar 30, 2017 · 218 comments · Fixed by travis-ci/travis-hub#125

Comments

@joshk
Contributor

joshk commented Mar 30, 2017

From simple deployment pipelines to complex testing groups, the world is your CI and CD oyster with Build Stages.

Build Stages allows you and your team to compose groups of Jobs which are only started once the previous Stage has finished.

You can mix Linux and Mac VMs together, or split them into different Stages. Since each Stage is configurable, there are endless Build pipeline possibilities!
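For example, a minimal staged configuration might look like this (a sketch of the beta syntax; see the docs for the authoritative format):

language: generic

jobs:
  include:
    - stage: test
      script: ./test-suite-1.sh
    - stage: test
      script: ./test-suite-2.sh
    - stage: deploy
      script: ./deploy.sh

Jobs within a stage run in parallel; the deploy stage only starts once both test jobs have passed.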

This feature will be available for general beta testing soon... watch this space 😄

We'd love to hear your feedback; it's the best way for us to improve and shape Travis CI. Please leave all thoughts/comments/ideas related to this feature here.

Happy Testing!

@joshk joshk self-assigned this Mar 30, 2017
@bsipocz

bsipocz commented Mar 30, 2017

@joshk - I have a somewhat related question. Do you consider introducing a new tagging system that would trigger only part of the build pipeline? The use case I have in mind is very simple: run the unit tests, and if they pass, run the docs build. But for pure docs PRs there is no need to do the first step, so a [docs only] or [skip test1] in the commit message would jump straight to that step in the build process. In this example, groups of jobs would be tagged as either test1 or docs.
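A purely hypothetical sketch of what I mean (neither the skip_if key nor this commit-message matching exists today; the names are made up for illustration):

jobs:
  include:
    - stage: test1
      script: ./run-unit-tests.sh
      skip_if: commit_message =~ /\[docs only\]/
    - stage: docs
      script: ./build-docs.sh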

@joshk
Contributor Author

joshk commented Mar 30, 2017

Hi @bsipocz

Hmmmm, that is an interesting idea. Not at the moment, but it might be something for us to consider later. I think this might be a bit of an edge use case, although we might be surprised by what people need and want. 😄

@larsxschneider

That sounds interesting! The following feature would be very useful to me:

My build has a number of jobs (7 total). I want to run all of them on my release branch. However, I only want to run a subset of them on my feature branches to speed up the Travis run and to save resources. According to support (thanks @joepvd !) this is not possible right now but might be in the future 😉

Would that be useful for other people, too?

@joshk
Contributor Author

joshk commented Mar 31, 2017

@larsxschneider I love this idea, and definitely think it is a valid use case!

@fabriziocucci

...it was about time! 😅

All kidding aside, in my opinion this is THE missing feature of Travis which will also tempt all Jenkins lovers to give Travis another try.

I would strongly suggest having a look at the great job the GitLab folks have done with pipelines and environments (no, I'm not part of the GitLab team).

@JoshCheek

Hi, it's looking good so far! In the example, the unit tests are bash scripts. For us, though, the unit tests are in multiple services, each with their own GH repo, and they currently trigger CI builds. The issue we have with this is that it doesn't report CI failures back to the GH issue that triggered it. I'm thinking about replacing the CI step with a pipeline repo, but I still don't see how to get around this issue.

So let's say I set it up like this:

  • the repo service-1 has unit tests
  • the repo service-2 has unit tests
  • the repo integration has integration tests
  • the repo deploy has deploy scripts
  • the repo pipeline uses this feature to test service-1, then service-2, then integration then run the scripts to deploy

Then when someone submits a PR to service-1, that PR should cause Travis to run pipeline's build instead of its own. But the interface from the PR should feel the same, meaning it should report failures back to the PR that triggered it. Metaphorically, I'm thinking about it like a filesystem soft link, or a C++ reference, where service-1's .travis.yml has some configuration to say "I don't have my own CI; instead go run pipeline's, with a parameter telling it to build my repo against this commit".

I'm expecting that this is how almost everyone is going to want to use it: multiple repos that act as event triggers for the pipeline, with the pipeline reporting its result back to them. E.g. even if you're not deploying, once you split your project into multiple repos and use the pipeline to coordinate across them, each repo will see its unit tests as just the first stage in the pipeline repo.
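To illustrate, the closest workaround I can think of today is for service-1's .travis.yml to trigger the pipeline repo via the Travis API (a rough sketch only; org%2Fpipeline and $TRAVIS_API_TOKEN are placeholders, and this still wouldn't report the pipeline's result back to the PR):

language: generic
script:
  # ask Travis to run a build of the pipeline repo instead of testing here
  - |
    curl -s -X POST https://api.travis-ci.org/repo/org%2Fpipeline/requests \
      -H 'Travis-API-Version: 3' \
      -H 'Content-Type: application/json' \
      -H "Authorization: token $TRAVIS_API_TOKEN" \
      -d '{ "request": { "branch": "master" } }'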


Also, shout out to y'all for working on this, I'm a huge Travis fan, and was worried I'd have to find a different CI or write a lot of wrapper code in order to get this kind of feature. Also, thx to @BanzaiMan for pointing me at it ❤️

@siebertm

siebertm commented May 3, 2017

Nice one! Is it possible to somehow "name" the jobs so that the job's intent is also revealed in the UI?

@MariadeAnton
Contributor

Hi everyone!

Build Stages are now in Public Beta 🚀 https://blog.travis-ci.com/2017-05-11-introducing-build-stages

Looking forward to hearing what you all think!

@pimterry

This looks really nice! The one thing I'd love though is conditional stages.

The same kind of on: structure that deploy uses would work fine. In our case, I'd like to have a deploy stage that runs for tagged commits (using a specific regex tag format), but I don't want the stage to appear at all on other builds, since none of them should be deploying. I think something like this also solves quite a few of the use cases above (docs-only builds, unit/integration test stages depending on the branch, etc.).
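For context, this is the deploy-level condition I mean (standard deploy syntax; the tag regex is just an example). It controls whether the deploy runs, but the stage itself still shows up on every build:

jobs:
  include:
    - stage: deploy
      script: skip
      deploy:
        provider: script
        script: ./deploy.sh
        on:
          # only deploy for tags matching e.g. v1.2.3
          tags: true
          condition: $TRAVIS_TAG =~ ^v[0-9]+\.[0-9]+\.[0-9]+$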

@hawkrives

hawkrives commented May 11, 2017

First: Wow! This looks really really cool.

With that said, I think I found a bug? Maybe?

My build stages aren't respected correctly if I specify a ruby version at the top level (config, build log); they only work if I specify it inside the job itself (config, build log).

That is to say,

language: ruby
rvm: '2.4'
cache: 
  bundler: true

jobs:
  include:
    - stage: prepare cache
      script: true
    - stage: test
      script: bundle show
    - stage: test
      script: bundle show
    - stage: test
      script: bundle show

gives me four "Test" jobs and one "Prepare Cache" job, in that order, while inlining the rvm key as below gives me the proper one "Prepare Cache" and three "Test" jobs.

language: ruby
cache: 
  bundler: true

jobs:
  include:
    - stage: prepare cache
      script: true
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'

I would have expected them to be equivalent?

@bmuschko

Deployment is a central piece of every Continuous Delivery pipeline. Some organizations or projects do not want to go with the Continuous Deployment model because it doesn't fit their workflow. That means they'd rather decide when to deploy on demand instead of deploying with every change. Are you planning to support defining a stage that can be triggered manually through the UI?

@soaxelbrooke

soaxelbrooke commented May 12, 2017

Python docker test/build/deploy fails for unknown reasons when converted to build stages. Should a separate issue be created?

When debugging with each step run in a tmate shell, everything works as expected.

@svenfuchs

Thanks for the feedback, everyone! We are collecting all your input and will conduct another planning phase after some time to evaluate your ideas, concerns, and feature requests. Your input is very valuable to us.

@pimterry This makes sense. The on condition logic is currently evaluated only after the job has already been scheduled for execution, and it only applies to the deploy phase that is part of the job. We'd want to make this a first-class condition on the job itself. You're right, this would also make sense in other scenarios. I'll add this to our list.

@hawkrives I see how this is confusing, and it looks as if both configs should be equivalent. The reason they're not is that rvm is a "matrix expansion key" (see our docs here), and it will generate one job per value (in your case just one). The jobs defined in jobs.include are added to that set of jobs. This makes a lot more sense in other scenarios, e.g. when you have a huge matrix and then want to run a single deploy job after it, e.g. https://docs.travis-ci.com/user/build-stages/matrix-expansion/. Evaluating this further is on our list, as we've gotten the same feedback from others, and we'll look into how to make this less confusing.
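To illustrate that pattern, a condensed sketch along the lines of the matrix-expansion docs linked above:

language: ruby
rvm:
  - '2.3'   # each entry expands into one job in the first ("test") stage
  - '2.4'

jobs:
  include:
    - stage: deploy   # this single job is appended after the expanded matrix jobs
      script: ./deploy.sh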

@bmuschko Yes, that is one of the additions on our list. In fact, it was mentioned in the original, very first whiteboarding session, and it has had an impact on the specific config format that we have chosen for build stages.

@soaxelbrooke Yes, it would make sense to either open a separate issue, or email support@travis-ci.org with details.

Again, thank you all!

@popravich

Hi guys! Great feature!

I've just started playing with it and have hit an issue with the build matrix:
I have several Python versions in my build matrix, which generates multiple test-stage jobs.
Adding another stage without explicitly setting the python key generates a single job with all the python version values collapsed into a single value.
Here's the build — https://travis-ci.org/aio-libs/aioredis/builds/231530766

@BanzaiMan

@popravich Hello. For individual issues, please open a separate issue at https://github.com/travis-ci/travis-ci/issues/new, or send email to support@travis-ci.com. Thanks.

@jeffbyrnes

I’d love it if the test job/stage were not always the first one, and if the main script were not included when it is skip.

@BanzaiMan

@jeffbyrnes If you do not want test to be the first stage, please override the stage names.

the main script was not included if it is skip.

Do you mean that you don't want to see the message at all?

@cspotcode

I'm seeing the same behavior as @hawkrives: there is no way to declare stages that execute before the build matrix. Any top-level key that triggers any sort of build matrix (rvm, env, node_js, etc.), even a single-job matrix, causes the test stage to be declared first, so it always executes first. Any test jobs declared within jobs: include: are merely appended to the build matrix jobs.

The only solution I've found is to avoid the build matrix, manually enumerating each job of the matrix within my jobs.include section (see the sketch below). This is fine -- I have total control -- but it means that for big matrices I might write a script to generate my .travis.yml. The documentation could also describe this solution for newcomers.
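For example, instead of a top-level node_js list, each matrix job can be spelled out by hand (verbose, but it gives full control over which stage comes first):

language: node_js

jobs:
  include:
    - stage: build docker image
      node_js: '7'
      script: ./build-image.sh
    - stage: test
      node_js: '6'
      script: npm test
    - stage: test
      node_js: '7'
      script: npm test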


Is there a way to share build cache between jobs with different environments? For example, can I populate the yarn cache in a stage using node_js: 7, and use that cache in both of my "test" jobs: both node_js: 7 and node_js: 6?

@glensc

glensc commented May 12, 2017

IMHO the syntax is rather complex and hard to grasp compared to gitlab-ci. I already had a serious headache trying to understand how to use matrix. And to make it even worse, stages and matrix can also be combined!

The Travis syntax forces all my scripts to be nested several levels deep in indentation.

For example, let's take the deploy-github-releases example:

  • in my previous .travis.yml, deploy: was at the root level
  • with stages, it's at the third level

Perhaps some syntax add-on to define sections at the root level and reference them, instead of typing in the actual script?

jobs:
  include:
    - script: &.test1
    - script: &.test2
    - stage: GitHub Release
      script: echo "Deploying to npm ..."
      deploy: &.deploy

.test1: |
  echo "Running unit tests (1)"

.test2: |
  echo "Running unit tests (2)"

.deploy:
  provider: releases
  api_key: $GITHUB_OAUTH_TOKEN
  skip_cleanup: true
  on:
    tags: true

PS: the [deploy-github-releases] doc lacks the echo keyword in its script examples.

@BanzaiMan

@cspotcode Could you elaborate on how you would like to mix the build matrix and the stages, where you might want to execute some of it before the matrix?

As for the cache question, what is "an environment" when you say:

Is there a way to share build cache between jobs with different environments?

The answer to your question, I believe, is "no", because node_js: 6 and node_js: 7 jobs will have different cache names, as explained in https://docs.travis-ci.com/user/caching/#Caches-and-build-matrices. They could contain binary-incompatible files and may not work in general. If you want to share things between them, external storage (such as S3 or GCS) would have to be configured.
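A rough sketch of that external-storage approach (assuming the aws CLI is installed and S3 credentials come from encrypted env vars; the bucket name is a placeholder):

language: node_js

jobs:
  include:
    - stage: warm cache
      node_js: '7'
      script:
        - yarn install
        # upload the populated yarn cache for later stages
        - aws s3 sync "$HOME/.cache/yarn" s3://my-build-cache/yarn
    - stage: test
      node_js: '6'
      before_install: aws s3 sync s3://my-build-cache/yarn "$HOME/.cache/yarn"
      script: yarn test
    - stage: test
      node_js: '7'
      before_install: aws s3 sync s3://my-build-cache/yarn "$HOME/.cache/yarn"
      script: yarn test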

@cspotcode

cspotcode commented May 12, 2017

@BanzaiMan
To mix the matrix with stages, I am imagining a situation like this example: https://docs.travis-ci.com/user/build-stages/share-docker-image/
However, in that example, all jobs in the "test" stage are declared explicitly within jobs.include. Suppose a developer wanted to use the build matrix to declare those "test" jobs but wanted the "build docker image" stage to execute first. Will that be possible, or will we be required to avoid the build matrix as in the linked example?
I now see that this is the same as what @jeffbyrnes asked about here: #11 (comment)

"An environment" means all of the things that make the cache names different: node version, ruby version, environment variables, OS, etc. I agree that sharing cache between node 6 and 7 may not work in general, which is why the default behavior is to have different caches. I'm asking if there is a way to override that behavior in situations where a developer knows that sharing cache will safely afford them a performance benefit without causing problems.

EDIT: fixed a typo

@BanzaiMan

@flovilmart As mentioned before, for particular use case issues, please open a separate issue at https://github.com/travis-ci/travis-ci/issues/new. Thanks.

@webknjaz

Hi,

Is there a list of YAML keys I have to remove from the config (and move to jobs.include) so that travis would detect it as pipeline-enabled?
I had to move them one by one until only notifications and cache were left at the root level of nesting.

@BanzaiMan

BanzaiMan commented May 13, 2017

@webknjaz I am not sure such a list should exist. This feature is meant to be compatible with the existing configuration, and if you had to do extra work, then there might be a bug. In travis-ci/travis-ci#7754 (comment), I identified matrix.fast_finish as a potential culprit. Did you have this? If not, where can we see how you worked through the troubles?

@ljharb

ljharb commented May 13, 2017

Is there any way I can make certain parts of the matrix be in one stage, and others in another? Kind of like how allow_failure can be used with env vars to target multiple disparate jobs.

@rmehner

rmehner commented Jul 7, 2017

Hey there,

As tweeted here, it would be super nice if it were possible to skip stages on certain branches. Something like this:

jobs:
  include:
    - stage: test
      rvm: 2.3.1
      script: bundle exec rspec
    - stage: test
      rvm: 2.4.1
      script: bundle exec rspec
    - stage: deploy
      rvm: 2.3.1
      branches:
        - master
        - production

My use case is that I want to test whether something breaks in the latest version of Ruby, while still keeping my main test suite in line with the version that runs in the respective production environment, and only deploy with that version. However, the deploy stage takes a while to run and install, and I don't need it to run on any branch other than master or production.

I know there are workarounds, but I'd like the stages feature to support that natively (I only want to deploy if all test stages are green).

@keradus

keradus commented Jul 7, 2017

This functionality has already been requested and approved, but there is no ETA yet:
#11 (comment)

@peshay

peshay commented Jul 9, 2017

I have an issue with my travis syntax when I try to integrate this new feature:

language: python
python:
- '2.7'
- '3.3'
- '3.4'
- '3.5'
- '3.6'
- 3.7-dev
- nightly
install:
- pip install -r requirements.txt
- python setup.py -q install

jobs:
  include:
    - stage: Tests
      script: nosetests -v --with-coverage
      after_success: codecov
    - stage: Releases
      before_deploy: tar -czf tpmstore-$TRAVIS_TAG.tar.gz tpmstore/*.py
      deploy:
        provider: releases
        api_key:
          secure: <long string>
        file: tpmstore-$TRAVIS_TAG.tar.gz
      on:
        repo: peshay/tpmstore
        branch: master
        tags: true
    - 
      deploy:
        - provider: pypi
          distributions: sdist
          user: peshay
          password:
            secure: <long string>
          on:
            branch: master
            tags: true
            condition: $TRAVIS_PYTHON_VERSION = "2.7"
        - provider: pypi
          distributions: sdist
          server: https://test.pypi.org/legacy/
          user: peshay
          password:
            secure: <long string>
          on:
            branch: master
            tags: false
            condition: $TRAVIS_PYTHON_VERSION = "2.7"

@keradus

keradus commented Jul 9, 2017

Perhaps describing the issue you are facing would help.

@peshay

peshay commented Jul 9, 2017

The travis linter simply fails, and I don't see why:

unexpected key jobs, dropping

@maciejtreder

Sharing files between stages definitely should be done differently than via external systems. In gitlab-ci it is done really simply, via the 'artifacts' property in the yml. It should be the same here.

@BanzaiMan

The linter is sadly out of date at the moment. Many of the recent keys are not recognized. We have plans to improve this aspect of our services, but it will be a little while.

Sharing storage has been raised as a missing feature many times before, and we recognize that it is critical.

@peshay

peshay commented Jul 10, 2017

@BanzaiMan Thanks, it works now. At first I had real syntax issues and tried to fix them with the linter, but then got stuck at that jobs key. It works like a charm now. :)

@asifdxtreme

This seems to be a very interesting feature.
I have already started using this feature in my projects and I am happy to see all my Build, UT and IT in different stages in Travis CI.

It would be very nice if I could see the status of these individual stages on my GitHub pull request page.

@weitjong

Just had time to test the new feature and I'm loving it so far! Kudos to the Travis team.

From my tests I have found an undocumented feature: I could name the stage generated from the build matrix by adding a top-level stage key in my .travis.yml, like so:

language: cpp
compiler:
  - gcc
  - clang
dist: trusty
sudo: false
addons: {apt: {packages: &default_packages [doxygen, graphviz]}}
env:
  global:
    - numjobs=4
  matrix:
    - FOO=foo1
    - FOO=foo2
    - FOO=barNone
stage: build test
before_script: echo "before script"
script: echo $FOO
after_script: echo "after script"
matrix:
  fast_finish: true
  exclude:
    - env: FOO=barNone
  include:
    - stage: more build test
      env: FOO=bar
      addons: {apt: {packages: [*default_packages, rpm]}}
    - stage: deploy
      env: FOO=foobar
      addons: null
      before_script: null
      script: echo $FOO
      after_script: null

@mpkorstanje

mpkorstanje commented Jul 16, 2017

By canceling all but one job in the pipeline I've gotten a job stuck in the yellow "created" state.

https://travis-ci.org/cucumber/cucumber-jvm/builds/254134986?utm_source=github_status

Steps to reproduce:

  1. Start the job.
  2. Cancel all deploy jobs and all but one test job.
  3. Fail the test job.

I would expect the build to be either marked canceled or failed.

@Griffon26

I've put the coverity_scan plugin in a stage. See here: https://github.com/Griffon26/vhde/blob/master/.travis.yml#L61

Should that work? I assumed so, because I can also override the global apt plugin settings in a stage.

When the job runs, the coverity scan is skipped and I see no logging whatsoever related to the coverity_scan plugin: https://travis-ci.org/Griffon26/vhde/jobs/254260753

@svenfuchs

@shepmaster Thanks for the additional input. I've added your example to our internal tracking issue.

@roperto Thanks for the suggestion. It seems to me that your use case would be covered by adding more filtering/condition capabilities, which is something that is on our list.

@colby-swandale Sorry for the late response. If you still have this issue/question, it might be best to get in touch with support via email: support@travis-ci.com.

@EmilDafinov @alorma @seivan Thanks for the suggestion. Yes, allowing a name and/or description for jobs has been suggested a few times, and it's on our list of things to consider.

@webknjaz Thanks for the suggestion on improving the message for allowed failures on our UI. I'll forward this to our UI/web team.

@ghedamat @webknjaz Yes, you are right. Jobs in later stages end up in a canceled state, and restarting one job currently does not touch the state of any other jobs. That is sort of intended, even though in the context of build stages a different behaviour might seem to make sense. I'll add reconsidering this behaviour to our list, but for now it seems unlikely that we will change it.

@aontas @ELD It sounds like your setup should be very possible. If you're still having this issue could you please get in touch with support via email? support@travis-ci.com

@ELD Thanks for the suggestion of a .travis.yml web tool/editor. We have that on our list.

@23Skidoo Thanks for the suggestions. There are no plans to introduce a more complicated pipeline setup at this time. However, we're collecting use cases and thoughts to be considered in a future iteration. If you could outline your case more, that would be valuable input. Making cache slugs more customizable is on our list.

@envygeeks Thanks for the suggestions. Listing stage names, and specifying their order is one improvement pretty high on our list. I'm not sure I understand what you mean by "global matrixes were respected when it comes to env" ... could you elaborate? Also, the example linked by @skeggse should work. If you still have this issue, could you get in touch with support via email? support@travis-ci.com

@timofurrer Development on this has not started yet, no, sorry. We're still in the planning phase for the most part. Also, thanks for the pointer about the missing message. I'll forward that to our UI/web team.

@SkySkimmer Thanks for the detailed writeup on your case. Do I understand this correctly that you'd need either more flexibility in using our built-in cache feature (i.e. customize cache slugs according to your needs), or need a different way of sharing build artifacts between jobs?

@jsmaniac Thanks for documenting this. I'll open a ticket to consider adding this to our documentation.

@pawamoy Thanks for the report, and for the suggestion. I'll open a ticket about the bug, and add your suggestion for a separate color to our list of things to consider.

@bsideup Thanks for the suggestion. This has come up a few times, and it's high on our list of things to consider for the next iteration.

@leighmcculloch @asifdxtreme Thanks for the suggestion. I understand that integrating with the GitHub UI in more detail would be desirable. However, the GitHub commit status API has a couple of issues for us, and we're essentially hitting their limits; as far as I understand, they're trying to figure out improvements. So this might not be the best time for us to make such a huge change. I'll still add your suggestion to our list of things to re-evaluate later.

@szpak Thanks for the report about the stage not being canceled. From your report this clearly seems to be a bug; I suspect a racy condition between the component that cancels jobs and the component that schedules them. I'll open a ticket for this.

@stianst Thanks for the feedback! The ability to share artifacts between jobs on different stages more easily is definitely on our list. The ability to name jobs also is on the list of things to be considered.

@bkimminich Thanks for the suggestion. I can see how this makes sense in your use case. I've added it to our list, and we'll consider it in a future iteration. I'm not sure about the outcome of that evaluation yet, but I'm certain we won't prioritize it before work on a new travis.yml parser (and format specification) has been completed, so in any case this might not happen for a while.

@maciejtreder Improved support for sharing artifacts between stages is pretty high on our list.

@thedrow If I understand your case correctly then, yes, at the moment you'd need to specify these jobs individually. If you still have this issue, could you please email support? support@travis-ci.com

@weitjong Hah, this is interesting, thanks for documenting this here. I would not have expected this to work, but I can now see how this works accidentally. I'd recommend not relying on it too much for the time being, even though it might actually make sense to add it as an official feature.

@mpkorstanje Thanks a lot for the report. This clearly is a bug. I'll open a ticket for this and look into it.

@Griffon26 From what I can tell I would guess this should work, but it might be something specific to coverity_scan. I'd recommend getting in touch with support via email support@travis-ci.com.

@SkySkimmer

SkySkimmer commented Jul 19, 2017

@SkySkimmer Thanks for the detailed writeup on your case. Do I understand this correctly that you'd need either more flexibility in using our built-in cache feature (i.e. customize cache slugs according to your needs), or need a different way of sharing build artifacts between jobs?

To use the cache to share artifacts, I would need a way to share a cache between arbitrary jobs (maybe this is what you mean by customizing cache slugs) and also some idea of what happens when the cache is modified by parallel jobs.
(Unforeseen issues might come up after that, of course.)

@dg

dg commented Jul 24, 2017

@keradus

keradus commented Jul 24, 2017

That's because you included only 2 jobs in the autogenerated matrix and allowed a nonexistent job to fail.

@webknjaz

webknjaz commented Jul 25, 2017

@dg allow_failures lets you declare that if a running job (defined via include or the global matrix) matches some of the features listed in allow_failures, it shouldn't fail the whole build.
Just move its content to jobs.include and add only stage: Code Coverage to allow_failures, so that it matches this rule by stage name (see the sketch below). It is also common to use the language version or an env var for this purpose.
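A sketch of what I mean, assuming allow_failures can match on the stage key as described:

jobs:
  include:
    - stage: Tests
      script: ./run-tests.sh
    - stage: Code Coverage
      script: ./report-coverage.sh
  allow_failures:
    # a failing Code Coverage job won't fail the whole build
    - stage: Code Coverage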

@jank

jank commented Jul 26, 2017

Update: I retract my question for now - I just saw that I had a great mix-up with matrix, Python versions, and jobs. Still, it is quite confusing how these features can be combined.

This is a great feature. However, I am struggling to get it to work. The way jobs and matrix expand into stages is a mystery to me. The documentation does not really help to this end.

I am able to produce this build: https://travis-ci.org/blue-yonder/sqlalchemy_exasol/builds/257765684
with this travis config: https://github.com/blue-yonder/sqlalchemy_exasol/blob/rootless_travis/.travis.yml

What I want are stages with two build jobs each (one Exasol5, one Exasol6 - see matrix), and one Python version used in each stage (either 2 or 3).

Ignore for a second that the build is failing - I have a version in the commit history where the builds worked. First, I'd like to see the stages well aligned.

Where am I holding it wrong?

@dholdren

dholdren commented Jul 26, 2017

I'd like it if later stages ran once failed jobs from earlier stages have been restarted and pass.

E.g. if I have two stages, "Test" and "Deploy", and "Test" is composed of 4 jobs, and one of those 4 fails, I can restart it. But if it then passes, the "Deploy" stage doesn't automatically run, and I have to start it manually.

@BanzaiMan

I'm locking this issue for the time being. Most of the bug reports and feature requests are now understood.

Thanks.

@travis-ci travis-ci locked and limited conversation to collaborators Jul 26, 2017
@svenfuchs

Accidentally closed this by referring to one comment here in a pull request. Thanks @ljharb for the pointer! ❤️

In this case, however, since @BanzaiMan already locked it, I guess it's fine. My tentative plan is to open a new "beta feature feedback issue" once I ship iteration 2 (should happen soonish), and then also include something like an FAQ at the very top (so we don't get all the already-answered questions all over again).

@svenfuchs

svenfuchs commented Sep 13, 2017

We have shipped "Build Stages Iteration 2", fixing several bugs, and introducing "Build Stages Order" as well as "Conditional Builds, Stages, and Jobs".

See our blog post for details: https://blog.travis-ci.com/2017-09-12-build-stages-order-and-conditions

We have also opened a new beta feature feedback issue here: #28
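For example, stage ordering and conditional stages now look roughly like this (adapted from the blog post; see it for the authoritative syntax):

stages:
  - test
  - name: deploy
    # this stage and its jobs only run on the master branch
    if: branch = master

jobs:
  include:
    - stage: test
      script: ./test.sh
    - stage: deploy
      script: ./deploy.sh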
