This repository has been archived by the owner on Apr 4, 2023. It is now read-only.

feat(CI): Add Travis CI support #1127

Merged
merged 4 commits into eclipse-che:main on Sep 13, 2021

Conversation

vibhutisawant
Contributor

Signed-off-by: vibhutisawant <Vibhuti.Sawant@ibm.com>

What does this PR do?

Adds Travis CI support for Che-theia.

What issues does this PR fix or reference?

This references issue #19688. It adds Travis CI support to build multi-arch images.

Workflows migrated from GitHub Actions (GA) to Travis CI:

  1. PR check and Happy path tests

    • For the happy path tests, we have used the same actions as in GA, i.e.:
      • che-incubator/setup-minikube-action
      • che-incubator/che-deploy-action
      • che-incubator/happy-path-tests-action
    • As these actions couldn't be used as-is, we cloned them and compiled them with Node in the happy-path-tests.sh script.
  2. Check a Theia branch

    • We can trigger a manual build using the trigger-build option in the Travis UI. After selecting a branch, we can provide a custom config as follows.
    env: 
      global:
      - THEIA_GITHUB_REPO=<Theia GitHub repository to build Che-Theia image>
      - THEIA_BRANCH=<Theia branch>
    
  3. Publish-built-in-extension-report

    • Requires setting up a cron job in the Travis CI settings
  4. Build and publish next

    • This will build and publish a multi-arch image on each push to the master branch.
  5. Release

    • Requires a custom config as follows:
    env: 
      global:
      - TAG=<version-to-be-released>
      - RECREATE_TAGS=false
      - PUSH_TO_NPMJS=true 
    

Note: TAG is a required field in the above config.

The Check a Theia branch and Release workflows require a manual build, triggered via the trigger-build option in the Travis UI. After selecting a branch, we can provide a custom config; a sketch of doing the same through the Travis API follows.
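The same build can also be triggered through the Travis API v3 instead of the UI. The following is only an illustrative sketch: the token, branch, and variable values are placeholders, and the exact request body may differ from what the workflows actually send.

curl -s -X POST https://api.travis-ci.com/repo/eclipse-che%2Fche-theia/requests \
  -H "Content-Type: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token ${TRAVIS_API_TOKEN}" \
  -d '{
        "request": {
          "branch": "main",
          "config": {
            "env": { "global": ["TAG=7.36.0", "RECREATE_TAGS=false", "PUSH_TO_NPMJS=true"] }
          }
        }
      }'

Depending on the configured merge mode, the config supplied in the request is merged with or replaces the repository's .travis.yml.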

Env variables to be set in Travis CI settings:
DOCKER_PASSWORD
DOCKER_USERNAME
QUAY_PASSWORD
QUAY_USERNAME
CHE_BOT_GITHUB_TOKEN
GITHUB_TOKEN
CHE_NPM_AUTH_TOKEN
MATTERMOST_WEBHOOK_URL
GITHUB_ACTOR

@che-bot
Contributor

che-bot commented Jun 7, 2021

Can one of the admins verify this patch?

@vibhutisawant changed the title from "Adds Travis CI support" to "feat(CI): Add Travis CI support" on Jun 7, 2021
Contributor

@benoitf left a comment

Hello, I think we have too many duplicates.
Also, there is little value in using yet another CI besides GitHub Actions.

The Travis job should do the bare minimum: only build for arches that do not work with qemu (so there is no need for linelint checks, etc.)

The scripts are doing

docker pull quay.io/eclipse/che-theia-dev:next

but this image is not multi-arch, so I don't see the benefit.

I don't see any reason to build the amd64 image using Travis. Using GitHub Actions works just fine.

Happy path scripts should not be part of che-theia; in Che they live in separate GitHub Actions repositories.

@ghatwala

ghatwala commented Jun 7, 2021

FYI @nickboldt .

@nickboldt
Contributor

@benoitf it would be more productive to ask questions than to overtly state "there is little value of using yet another CI than github actions".

The value is that GH actions w/ qemu+s390x do not work, and there's no plan in place in any known timeline to add support for that architecture, or to add the option for hosted runners on more arches.

Meanwhile Travis already has s390x hardware, provided by IBM. So if building Che on Z is something that Project Management and Product Management care about, then some solution must be found, even if it means a hybrid solution (multiple CI tools, perhaps doing builds on one and tests on another) or a migration solution (move everything to Travis to maintain a single pipeline for all supported arches).

@benoitf
Contributor

benoitf commented Jun 7, 2021

The value is that GH actions w/ qemu+s390x do not work, and there's no plan in place in any known timeline to add support for that architecture, or to add the option for hosted runners on more arches.

Meanwhile Travis already has s390x hardware, provided by IBM. So if building Che on Z is something that Project Management and Product Management care about, then some solution must be found, even if it means a hybrid solution (multiple CI tools, perhaps doing builds on one and tests on another) or a migration solution (move everything to Travis to maintain a single pipeline for all supported arches).

I already commented that it should begin ONLY with arches that may not work with GitHub Actions, not duplicate all the GitHub Actions workflows. (For example, there is no value in this PR checking linelint on the source code with Travis CI, as it's already checked by GitHub Actions.)

@benoitf
Contributor

benoitf commented Jun 7, 2021

As Travis CI supports triggered jobs, we could just call Travis CI from within the GitHub Action to run the expected build on a given arch.

@ericwill
Contributor

The value is that GH actions w/ qemu+s390x do not work, and there's no plan in place in any known timeline to add support for that architecture, or to add the option for hosted runners on more arches.

Meanwhile Travis already has s390x hardware, provided by IBM. So if building Che on Z is something that Project Management and Product Management care about, then some solution must be found, even if it means a hybrid solution (multiple CI tools, perhaps doing builds on one and tests on another) or a migration solution (move everything to Travis to maintain a single pipeline for all supported arches).

I already commented that it should begin ONLY with arches that may not work with GitHub Actions

Why? If Travis works, I'd propose we move all building for all arches there, and leave GitHub Actions for small checks like linters.

What was discussed during the community call was that we could try building all arches on Travis (in parallel with GitHub Actions) for a few weeks. During that time we can evaluate whether it's working smoothly. If there are no major issues, we could disable the GitHub Actions builders after that.

@benoitf
Contributor

benoitf commented Jun 14, 2021

Travis CI has no mechanism to easily share components across repositories like the actions in GitHub.
If I look at the happy path used in this PR, it's yet another copy of shell scripts instead of having the GitHub Actions defined in a common place and updated once for all.

I've seen that limits can be increased, but most of the time the limit is 2 or 3 jobs at once per organization (vs. 20 jobs in GitHub Actions).

Pre-installed tools lag behind the GitHub Actions runners, so it means more time installing tools on each job (and thus delayed results) (for example minikube, etc.)

Also, I think you need to be a committer to respin jobs, vs. just being the author of the PR with GitHub Actions.

@ericwill
Contributor

Travis CI has no mechanism to easily share components across repositories like the actions in GitHub.
If I look at the happy path used in this PR, it's yet another copy of shell scripts instead of having the GitHub Actions defined in a common place and updated once for all.

I've seen that limits can be increased, but most of the time the limit is 2 or 3 jobs at once per organization (vs. 20 jobs in GitHub Actions).

Pre-installed tools lag behind the GitHub Actions runners, so it means more time installing tools on each job (and thus delayed results) (for example minikube, etc.)

Also, I think you need to be a committer to respin jobs, vs. just being the author of the PR with GitHub Actions.

If this is all about the happy path tests, can't we leave that on GitHub Actions and just use Travis for the build + publish?

@benoitf
Contributor

benoitf commented Jun 14, 2021

It's about PR checks as well. If almost all jobs are queued, then you need to wait a very long time to be able to merge a PR.
And if in the end the job is just a docker build, then it can be triggered/delegated from GitHub Actions to Travis CI for a given set of arches (it reduces the number of jobs and uses that platform only to build a given image on a specific arch).

@ericwill
Contributor

It's about PR checks as well. If almost all jobs are queued, then you need to wait a very long time to be able to merge a PR.
And if in the end the job is just a docker build, then it can be triggered/delegated from GitHub Actions to Travis CI for a given set of arches (it reduces the number of jobs and uses that platform only to build a given image on a specific arch).

I'm fine with using GitHub actions to trigger travis, then. However I'd like to avoid building and merging of arches separately -- can we build all arches on travis (triggered by GitHub actions)?

@benoitf
Contributor

benoitf commented Jun 14, 2021

I'm fine with using GitHub actions to trigger travis, then. However I'd like to avoid building and merging of arches separately -- can we build all arches on travis (triggered by GitHub actions)?

Anyway, without buildx it builds separate images, then merges them together (creating the -arch suffixed images) and creates a manifest at the end
https://github.com/eclipse-che/che-theia/pull/1127/files#diff-3ebe4f51d190a2dcd0e63852c15810979755be38330e0ff0558e400a35a1b887

so I don't see the issue of whether it's on the same host or not?
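(For context, that per-arch assembly step looks roughly like the sketch below; the image name and -arch suffixes are illustrative rather than the exact ones used in this PR. On older docker clients the manifest subcommand may also require DOCKER_CLI_EXPERIMENTAL=enabled.)

# each CI job first pushes an arch-suffixed image, e.g. :next-amd64, :next-ppc64le, :next-s390x
docker manifest create quay.io/eclipse/che-theia:next \
  --amend quay.io/eclipse/che-theia:next-amd64 \
  --amend quay.io/eclipse/che-theia:next-ppc64le \
  --amend quay.io/eclipse/che-theia:next-s390x
# push the combined manifest list under the plain tag
docker manifest push quay.io/eclipse/che-theia:next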

@nickboldt
Contributor

nickboldt commented Jun 15, 2021

If we do all builds in travis (vs. using GH to trigger Travis for the 1 or 2 arches we can't build w/ buildx), then we still need to know when the triggered travis processes are complete and the image:tag-s390x image exists, so it can be merged into the overall manifest with docker manifest create --amend...

So... how does GH know that the travis-delegated process has completed? Does it listen/wait for a return code? or do we have to loop & wait for the new image to exist, and then trigger the manifest creation with amend?

If delegation works then I could see a hybrid solution of (gh actions + buildx for some arches) + (delegation to travis for other arches) working for us... the only concern I would then have is that we would have a team of devs/qe/doc who have to know how to run builds using multiple build systems.

Using travis for everything would simplify the layers of tech we have to know/worry about... but I suppose since moving everything to travis isn't going to happen (eg., for linters) then we'll already be in a hybrid mode regardless of whether we use docker build in travis for all arches, or a mix of docker buildx in GHA and docker build in Travis for the qemu-unfriendly arches.

TL;DR, if we can delegate to Travis and easily get status when the s390x build completes, I'm okay with this hybrid build-and-assemble approach.

@codecov

codecov bot commented Jun 15, 2021

Codecov Report

Merging #1127 (b336e87) into main (c299f59) will decrease coverage by 0.09%.
The diff coverage is 19.21%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #1127      +/-   ##
==========================================
- Coverage   32.78%   32.69%   -0.10%     
==========================================
  Files         290      295       +5     
  Lines        9885    10005     +120     
  Branches     1457     1550      +93     
==========================================
+ Hits         3241     3271      +30     
+ Misses       6641     6625      -16     
- Partials        3      109     +106     
Impacted Files Coverage Δ
...theia-about/src/browser/about-che-theia-dialog.tsx 0.00% <0.00%> (ø)
...credentials/src/browser/che-credentials-service.ts 0.00% <0.00%> (ø)
...entials/src/browser/credentials-frontend-module.ts 0.00% <0.00%> (ø)
...eia-credentials/src/common/credentials-protocol.ts 0.00% <0.00%> (ø)
...eia-credentials/src/node/che-credentials-server.ts 0.00% <0.00%> (ø)
...s/src/node/che-theia-credentials-backend-module.ts 0.00% <0.00%> (ø)
...rowser/src/browser/che-mini-browser-environment.ts 0.00% <0.00%> (ø)
...he-server/src/node/che-server-http-service-impl.ts 0.00% <0.00%> (ø)
...-che-server/src/node/che-server-remote-api-impl.ts 38.88% <0.00%> (ø)
...browser/contribution/exec-terminal-contribution.ts 0.00% <0.00%> (ø)
... and 94 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8514e58...b336e87. Read the comment docs.

@benoitf
Contributor

benoitf commented Jun 15, 2021

@nickboldt https://docs.travis-ci.com/api#builds
(you can get status of a given build triggered by https://docs.travis-ci.com/user/triggering-builds/)
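(As an illustration of that API, a minimal polling loop might look like the sketch below; it assumes the build id returned by the trigger request is at hand and that jq is installed, and the 2-minute interval is arbitrary.)

while :; do
  state=$(curl -s \
    -H "Travis-API-Version: 3" \
    -H "Authorization: token ${TRAVIS_API_TOKEN}" \
    "https://api.travis-ci.com/build/${BUILD_ID}" | jq -r '.state')
  case "${state}" in
    passed) break ;;                    # build finished successfully
    failed|errored|canceled) exit 1 ;;  # terminal failure states
    *) sleep 120 ;;                     # created/started: wait and poll again
  esac
done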

@nickboldt
Contributor

nickboldt commented Jun 15, 2021

So it's a poll-only approach? I was hoping for a push-notification approach, such that the travis process can say "I'm done!" and the GH process that invoked it can then continue its flow.

  • GH action docker buildx multiarch
  • GH action triggers travis build(s)
  • travis announces completion
  • GH action starts combining image tags into a single manifest

rather than

  • GH action docker buildx multiarch
  • GH action triggers travis build(s)
  • GH action polls every 2 mins to see if travis is done
  • GH action starts combining image tags into a single manifest

But I guess we already do that with the overall Che release process, polling quay for the existence of new che image tags before moving to the next step in the pipeline (eg., don't start Che operator until the DWCO image exists). We could either poll Travis or just skopeo inspect quay for the tag(s) we expect, and wait until some timeout before failing the build.
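(A sketch of that skopeo variant; the image tag, retry count, and interval are purely illustrative.)

# wait up to ~30 minutes for the arch-specific tag to appear on quay.io, then give up
for attempt in $(seq 1 30); do
  if skopeo inspect --raw docker://quay.io/eclipse/che-theia:next-s390x >/dev/null 2>&1; then
    echo "tag found after ${attempt} attempt(s)"
    break
  fi
  if [ "${attempt}" -eq 30 ]; then
    echo "timed out waiting for the tag" >&2
    exit 1
  fi
  sleep 60
done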

@benoitf
Contributor

benoitf commented Jun 15, 2021

you can trigger events from travis to github as well (as we do for che-release process)

@benoitf
Contributor

benoitf commented Jun 15, 2021

I think it depends on how you want to see the result in PR checks / dependencies.
From a technical point of view, triggers are possible in both directions, and polling as well.

@nickboldt
Contributor

you can trigger events from travis to github as well (as we do for che-release process)

Ah, so the flow would be: the GH action builds the qemu-friendly arches with docker buildx, triggers the Travis build(s) for the remaining arches, Travis fires an event back to GH on completion, and the GH action then combines the image tags into a single manifest.

That's a nice way to split things up, as long as the secrets are in place at both ends for cross-infra triggering.

@nickboldt
Contributor

Notes from today's call w/ Florent and other Che devs:

  • Travis limits us to 3 jobs in parallel (3 arches = 3 jobs); with GHA, our limit is 20 jobs in parallel
  • Eric and possibly others agree that doing PR checks on multiple arches is vital, so that's a lot of potential parallel builds (more than 3)
  • we might want to enable PR checks for x to be required and z/p to be optional, so as to not block merging commits
  • Travis is not currently enabled for the eclipse-che org in GH (only for eclipse) - need to get that enabled before these PRs can be validated. For that we need to open a BZ to talk to Denis R (Eclipse WM) to approve
  • But once it's approved, only need to pass tokens between travis and GHA to be able to trigger GHA->T or T->GHA

I've seen that limits can be increased, but most of the time the limit is 2 or 3 jobs at once per organization (vs. 20 jobs in GitHub Actions).

@cwsolee can you talk to your people at Travis to ask if we can get more capacity than 3 parallel jobs? If that's a hard limit, then we can't use travis for more than Z builds and it makes sense to keep everything else in the 20-build GHA pool (x, arm, power, etc.), and have a GH action delegate to Travis for only the s390x arch builds.

@cwsolee

cwsolee commented Jun 16, 2021

Travis changed our concurrency limit too; it's 20 now, and Travis was quite willing to increase it further when needed. With both the time limit and concurrency limit increased, we haven't hit any problems since then.

@vibhutisawant
Contributor Author

vibhutisawant commented Jun 23, 2021

Changes implemented:

  • TRAVIS_TAG is added so that each pushed tag is suffixed with "-travis", and the image tag for each arch will be in the format 7.x.x-travis-${arch}.
  • To handle build failures that may occur only on non-amd64 arches, added allow_failures and fast_finish=true settings for the PR check.
  • In the release workflow, removed the execution of make-release.sh as it would conflict with the GHA release. Also, as the tag is not pushed through Travis, removed the "check existing tag" step.
  • Added an ARCH variable in build.include to tag arch-specific images.
  • Added Travis API calls in all GHA workflows.

Made the below changes to the Travis workflows, as these items were specific to amd64 and not related to the image build & push:

  • In the PR check workflow: removed linelint and the code coverage report
  • Removed the happy path tests workflow
  • In the "Build and Push next" and "Release" workflows: removed the push-to-npmjs functionality.
  • Removed the publish-built-in-extension-report workflow
  • Removed the Mattermost notifications from the release workflow, as they get triggered via GHA.

Limitations on triggering PR-check jobs on Travis via the API:

  1. The Travis API token will only be available to PRs raised by users with write access to that particular repo.
  2. Changes made to the .travis.yml file in PRs will not be validated in PR check Travis jobs until the changes are committed into the repo.

While checking on the implementation, we found that the Travis request API doesn't support triggering builds on PR references.
(Ref: travis-ci/travis-ci#9907, https://travis-ci.community/t/trigger-build-on-pr-via-api/825)
As a workaround, we added the below commands to the install section of the PR-check jobs in .travis.yml. However, if a PR contains changes to the .travis.yml file, those won't be validated.

git fetch origin +refs/pull/${PR_NUMBER}/merge
git checkout -qf FETCH_HEAD

@cwsolee

cwsolee commented Jun 23, 2021

@nickboldt & @ericwill, we are keeping this PR as a draft for you to take a look at before we submit; let us know whether you have any comments. Comments from others are welcome too.

@nickboldt requested a review from benoitf on July 5, 2021 13:27
.travis.yml Outdated

env:
global:
- TAG=next
Contributor

I think the tag would conflict with the current one, as we want to experiment for a while to see how it goes?

Contributor

Is that why we have the TRAVIS_TAG to append for now, to be set to "" later?

Contributor Author

I think the tag would conflict with the current one, as we want to experiment for a while to see how it goes?

TAG won't conflict with the current next tag because, while publishing the image, we suffix the next tag with TRAVIS_TAG (i.e. next-travis).

Is that why we have the TRAVIS_TAG to append for now, to be set to "" later?

Yes. TRAVIS_TAG is introduced as Travis is experimental. Later this flag can be removed completely from all files.

Contributor

I still don't get why we need an additional TRAVIS_TAG variable.
If TAG is set to next-travis, then the image name will be suffixed with this tag, so it should be enough to change the suffix.

@@ -0,0 +1,36 @@
#!/bin/bash
Contributor

The copyright header is missing, and it also doesn't pass shellcheck.net.

Contributor

Can we move this to the .ci/ folder (as we already have that generic folder)?

Contributor Author

Yes, we will move it to the .ci/ folder.

Contributor Author

We will add the copyright header and make the necessary changes. We can wrap all variables in double quotes to avoid the shellcheck errors, except $AMEND. If we add $AMEND in quotes, we get the below error:

$ docker manifest create "${REGISTRY}/${image}:latest-${TRAVIS_TAG}" "${AMEND}"
invalid reference format

To silence shellcheck for $AMEND, we can add # shellcheck disable=SC2086 in publish_multiarch.sh.
Let us know your thoughts on the same.

Contributor

Could you just use eval at the beginning of the line?

$ eval docker manifest create "${REGISTRY}/${image}:latest-${TRAVIS_TAG}" "${AMEND}"
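(An alternative sketch that avoids both eval and the SC2086 suppression is to collect the --amend arguments in a bash array instead of a single string; the arch list and tag layout below are illustrative.)

# collect the per-arch references as separate array elements
amend_args=()
for arch in amd64 ppc64le s390x; do
  amend_args+=(--amend "${REGISTRY}/${image}:latest-${TRAVIS_TAG}-${arch}")
done
# each element expands as its own word, so the quoting stays shellcheck-clean
docker manifest create "${REGISTRY}/${image}:latest-${TRAVIS_TAG}" "${amend_args[@]}"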

@nickboldt
Contributor

Since we have a replacement PR now in play, I'm going to close this in favour of #1197

@nickboldt
Contributor

Reopening -- my changes in #1197 were not quite ideal, so we can keep using this one (or open a new one, whichever is better).

vibhutisawant added a commit to linux-on-ibm-z/che-theia that referenced this pull request Aug 24, 2021
vibhutisawant added a commit to linux-on-ibm-z/che-theia that referenced this pull request Aug 24, 2021
@vibhutisawant
Contributor Author

@nickboldt I have updated the PR with changes from #1197.

@nickboldt
Contributor

Apologies, but I ran out of time this week to review. I'm on PTO next week; perhaps @azatsarynnyy or @benoitf want to merge this? The Travis secret and branch rules are set up the same way for theia as for machine-exec, so once merged this should fire the same way as the machine-exec one.

Signed-off-by: vibhutisawant <Vibhuti.Sawant@ibm.com>
- Added SHA1_SUFFIX to build.include to differentiate between GA and Travis builds.
- Added the copyright header.
- Moved publish_multiarch.sh to the .ci directory.
- Removed the $TRAVIS_TAG variable from the build files.
@nickboldt merged commit 277e918 into eclipse-che:main on Sep 13, 2021
@che-bot added this to the 7.37 milestone on Sep 13, 2021
@nickboldt
Contributor

Merged, but not firing in travis.


@vibhutisawant
Contributor Author

Hi @nickboldt
The Travis build was not triggered due to wrong creds:
https://github.com/eclipse-che/che-theia/pull/1216/checks?check_run_id=3587639501

@nickboldt
Contributor

nickboldt commented Sep 13, 2021

I've uploaded my travis API token from https://app.travis-ci.com/account/preferences to the che-theia repo...

and we're in better shape.

But see news in #1216 -- we have to merge that change before we can get any passing docker builds.

@nickboldt
Contributor

Good news: ubi and node builds are now passing on s390x and ppc64le.
Bad news: alpine builds are failing on s390x and ppc64le.

https://app.travis-ci.com/github/eclipse-che/che-theia/jobs/537152748
https://app.travis-ci.com/github/eclipse-che/che-theia/jobs/537152749

Looks like timeout/stalled process.

Do we actually need alpine builds for s390x and ppc64le? Why not just do ubi8 ones? @benoitf WDYT?

@benoitf
Contributor

benoitf commented Sep 14, 2021

The alpine images are multi-arch as well, so I see no reason not to have them too.

@nickboldt
Contributor

It just seems redundant, since we only care about ubi8 on Power and Z as a way of seeing problems coming to CRW sooner.

@nickboldt
Contributor

nickboldt commented Sep 14, 2021


@eclipse-che/theia-assembly: $ theia build --mode production --config cdn/webpack.config.js --env cdn=./cdn.json --env monacopkg=@theia/monaco-editor-core@0.23.0 && yarn run override-vs-loader
@eclipse-che/theia-assembly: Failed to resolve module: filenamify
@eclipse-che/theia-assembly: Error: webpack exited with an unexpected signal: SIGKILL.\n    at ChildProcess.<anonymous> (/home/theia-dev/theia-source-code/dev-packages/application-manager/lib/application-process.js:60:28)\n    at ChildProcess.emit (events.js:314:20)\n    at maybeClose (internal/child_process.js:1022:16)\n    at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)
@eclipse-che/theia-assembly: info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
@eclipse-che/theia-assembly: error Command failed with exit code 1.
lerna ERR! execute callback with error
lerna ERR! Error: Command failed: yarn run build
lerna ERR! Failed to resolve module: filenamify
lerna ERR! Error: webpack exited with an unexpected signal: SIGKILL.
lerna ERR!     at ChildProcess.<anonymous> (/home/theia-dev/theia-source-code/dev-packages/application-manager/lib/application-process.js:60:28)
lerna ERR!     at ChildProcess.emit (events.js:314:20)
lerna ERR!     at maybeClose (internal/child_process.js:1022:16)
lerna ERR!     at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)
lerna ERR! error Command failed with exit code 1.
lerna ERR! 
lerna ERR! $ theia build --mode production --config cdn/webpack.config.js --env cdn=./cdn.json --env monacopkg=@theia/monaco-editor-core@0.23.0 && yarn run override-vs-loader
lerna ERR! info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
lerna ERR! 
lerna ERR!     at /home/theia-dev/theia-source-code/node_modules/lerna/node_modules/execa/index.js:236:11
lerna ERR!     at runMicrotasks (<anonymous>)
Error: Command failed: yarn run build
Failed to resolve module: filenamify
Error: webpack exited with an unexpected signal: SIGKILL.
    at ChildProcess.<anonymous> (/home/theia-dev/theia-source-code/dev-packages/application-manager/lib/application-process.js:60:28)
    at ChildProcess.emit (events.js:314:20)
    at maybeClose (internal/child_process.js:1022:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)
error Command failed with exit code 1.
$ theia build --mode production --config cdn/webpack.config.js --env cdn=./cdn.json --env monacopkg=@theia/monaco-editor-core@0.23.0 && yarn run override-vs-loader
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    at /home/theia-dev/theia-source-code/node_modules/lerna/node_modules/execa/index.js:236:11
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5) {
  code: 1,
  killed: false,
  stdout: '$ theia build --mode production --config cdn/webpack.config.js --env cdn=./cdn.json --env monacopkg=@theia/monaco-editor-core@0.23.0 && yarn run override-vs-loader\n' +
    'info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\n',
  stderr: 'Failed to resolve module: filenamify\n' +
    'Error: webpack exited with an unexpected signal: SIGKILL.\n' +
    '    at ChildProcess.<anonymous> (/home/theia-dev/theia-source-code/dev-packages/application-manager/lib/application-process.js:60:28)\n' +
    '    at ChildProcess.emit (events.js:314:20)\n' +
    '    at maybeClose (internal/child_process.js:1022:16)\n' +
    '    at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)\n' +
    'error Command failed with exit code 1.\n',
  failed: true,
  signal: null,
  cmd: 'yarn run build',
  timedOut: false,
  exitCode: 1
}
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
The command '/bin/sh -c if [ -z $GITHUB_TOKEN ]; then unset GITHUB_TOKEN; fi &&     yarn ${YARN_FLAGS}' returned a non-zero code: 1

-- https://app.travis-ci.com/github/eclipse-che/che-theia/jobs/537152749

The alpine build in Travis on s390x has failed 3 times now. Can we skip it? Disable it? Fix it? Throw rocks at it? :D

@nickboldt
Contributor

The Travis builds for s390x that were triggered on the worker node (travis-ci-production-1-worker4-com) failed with the filenamify error. We observed that this worker node has a 4 GB memory config. Once the worker config is updated we will post an update here.

Having increased the worker node's memory, Travis was successful:

https://app.travis-ci.com/github/eclipse-che/che-theia/builds/237558344

@azatsarynnyy
Member

Hello,
https://www.eclipse.org/lists/eclipse.org-committers/msg01321.html
I'm not sure if this ^^ means Che-Theia won't be able to use Travis CI, but FYI at least.

@nickboldt
Contributor

nickboldt commented Sep 21, 2021

@cwsolee said:

In terms of credits, we don't need to worry because IBM & Travis have a very tight relationship; I'm confident that we could add credits if needed.
If Eclipse drops Travis, does it mean that Eclipse Che can't use Travis?
Or can Eclipse Che still use it, just not under the Eclipse org?

Given repos under eclipse-che are managed by Eclipse (eg., eclipse-che/che#19391 -> https://bugs.eclipse.org/bugs/show_bug.cgi?id=571958 ), and that Eclipse Foundation will stop supporting Travis CI on GitHub organizations it manages... [and] won't configure any new repository / organization and [they] plan on removing the TravisCI GitHub app from all organizations [they] manage on October 20th....

I'd say that likely means we won't be able to get more eclipse-che/* projects into Travis under the Eclipse Foundation mandate.

But... hey, maybe you can reach out to the Eclipse Webmaster? Or @cwsolee can talk to the Travis folks to find out whether "add credits if needed" will actually be easy?
