Want ability to run MSI builds on ci.jenkins.io #2745

Closed
basil opened this issue Jan 17, 2022 · 8 comments
Comments

@basil (Collaborator) commented Jan 17, 2022

Service

AWS

Summary

Context

The jenkinsci/packaging repository accepts pull requests, but the Jenkinsfile does not have build coverage for the MSI. As a result the MSI is at high risk of regression (jenkinsci/packaging#235). To solve this problem, I have proposed jenkinsci/packaging#238 to add build coverage for the MSI in the Jenkinsfile. That PR copies the pod configuration from the production release process.

Problem

When trying to duplicate the release pod configuration in ci.jenkins.io PR builds, I cannot schedule Windows .NET containers. It appears that the Kubernetes cluster for ci.jenkins.io supports Linux containers but not Windows containers.

Note

I selected the AWS service because I think that is where the Kubernetes cluster is hosted, but I am not sure. My apologies if I selected the wrong service.

Reproduction steps

Steps to Reproduce

Create a Jenkinsfile in a repository that you have write access to with the node/pod configuration from jenkinsci/packaging#238. Then run a build with that Jenkinsfile.
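
For illustration only, here is a minimal sketch of the kind of pod request involved; the real definition lives in jenkinsci/packaging#238 (PodTemplates.d/package-windows.yaml), and the image name and node selector below are illustrative assumptions rather than the actual values:

```groovy
// Sketch only: ask the Kubernetes cloud for a Windows pod.
// The image and nodeSelector are assumptions for illustration; see
// jenkinsci/packaging#238 for the real pod definition.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    kubernetes.io/os: windows            # only schedulable on Windows nodes
  containers:
  - name: dotnet
    image: mcr.microsoft.com/dotnet/framework/sdk:4.8   # example Windows image
    command: ["powershell", "Start-Sleep", "999999"]
''') {
    node(POD_LABEL) {
        container('dotnet') {
            bat 'echo Windows agent is up'
        }
    }
}
```

On a cluster that only has Linux nodes, a pod like this never schedules, and the build stalls with the node-affinity warning shown under Actual Results below.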

Expected Results

A Windows container is spun up and becomes available as an agent.

Actual Results

The Windows container cannot be scheduled:

21:30:27  Connecting to https://api.github.com to check permissions of obtain list of basil for jenkinsci/packaging
21:30:28  Obtained PodTemplates.d/package-windows.yaml from 737a197dd328bdb5cde5ab96565d6d471f865b34+0c5809022cb2f3b1ffb2ad1661f73a652a2de08d (3ca78582bbe6bc703e373953b7bd50fa64fc7843)
21:30:28  [Pipeline] podTemplate
21:30:28  [Pipeline] {
21:30:28  [Pipeline] node
21:30:38  Created Pod: cik8s jenkins-agents/packaging-packaging-pr-238-4-pxnf0-7f420-wj6br
21:30:39  [Warning][jenkins-agents/packaging-packaging-pr-238-4-pxnf0-7f420-wj6br][FailedScheduling] 0/4 nodes are available: 4 node(s) didn't match Pod's node affinity.
21:30:43  Still waiting to schedule task
21:30:43  ‘packaging-packaging-pr-238-4-pxnf0-7f420-wj6br’ is offline
21:30:49  [Normal][jenkins-agents/packaging-packaging-pr-238-4-pxnf0-7f420-wj6br][NotTriggerScaleUp] pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity/selector
basil added the triage (Incoming issues that need review) label on Jan 17, 2022
@dduportal (Contributor)

Hello @basil, would a VM agent be OK for this? ci.jenkins.io is currently able to run Windows machines (Windows Server 2019 LTS, updated at least once a month, with the latest Docker Engine enabled with Windows container capability) if you request nodes with the label docker-windows.

(unless there is an absolute reason to use Windows containers in Kubernetes agents?)
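
For comparison, a minimal sketch of what the VM-based alternative could look like in a Jenkinsfile; the docker-windows label is the one mentioned above, while the commands are placeholders rather than the packaging repository's actual build steps:

```groovy
// Sketch only: run on a ci.jenkins.io Windows VM agent (label from the
// comment above) instead of a Kubernetes Windows pod. The commands are
// placeholders, not the real MSI build invocation.
node('docker-windows') {
    checkout scm
    bat 'docker version'   // the VM ships Docker Engine with Windows container support
}
```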

@timja (Member) commented Jan 18, 2022

> (unless there is an absolute reason to use Windows containers in Kubernetes agents?)

It would be to reproduce the same environment as release.ci so that changes in the packaging repository can be tested and known to work when it comes to release time.

But it may be possible to work around.

@dduportal (Contributor) commented Jan 18, 2022

> It would be to reproduce the same environment as release.ci so that changes in the packaging repository can be tested and known to work when it comes to release time.

Would it work if we allowed release.ci to spawn VMs as well, like all the others? It would avoid the pains related to Kubernetes pods (single images) and would allow running docker commands safely.

[Edit] #2746

@basil (Collaborator, Author) commented Jan 18, 2022

Yes, I intentionally provided the context and high-level problem at the top of this ticket in order to allow for discussion of all possible solutions, not just Kubernetes ones. As Tim wrote, my primary requirements are:

  • Identical code paths for PR testing and production releases (for confidence that changes will not cause regressions)
  • Clean environment for each build to ensure reproducible builds
  • Reliability (new build environments should come up in a reasonable amount of time and with a reasonable success rate)

If we can create a fresh VM for each build that is not reused for any other builds and that comes up in a reasonable amount of time a reasonable percentage of the time, then I am happy to use that.

@slide commented Apr 1, 2022

Any updates on this? It would really be good to have PR builds for the MSI.

@basil (Collaborator, Author) commented Apr 27, 2023

No response from the infrastructure team.

basil closed this as not planned (won't fix, can't repro, duplicate, stale) on Apr 27, 2023
@dduportal (Contributor)

Reopening as:

  • The requested feature is not implemented and is still a legitimate request
  • The infra team did not respond, other than adding this request to the backlog, because it does not have the bandwidth

@basil any objection to keeping it open?

dduportal reopened this on Apr 27, 2023
@basil (Collaborator, Author) commented Apr 27, 2023

I no longer have an interest in implementing this on the packaging side.

dduportal closed this as not planned (won't fix, can't repro, duplicate, stale) on Jun 30, 2023
dduportal removed the triage (Incoming issues that need review) label on Jul 5, 2023