
Migrate Repositories to kcp-Prow #25

Closed · 11 of 15 tasks
xrstf opened this issue May 25, 2023 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

nikhita (Member) commented May 25, 2023

Happy to help out here as well :)

xrstf (Contributor, Author) commented May 25, 2023

How can we coordinate on this? I need to set up the Prow stuff and the webhook, but afterwards anyone can open the PRs to add the `.prow.yaml` files everywhere.
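
For illustration, a per-repo config could start out as small as this sketch (job name, image, and `make` target are placeholders for whatever each repo actually needs, not our real jobs):

```yaml
# .prow.yaml -- minimal in-repo Prow presubmit (hypothetical names/values)
presubmits:
  - name: pull-example-test-unit   # placeholder job name
    always_run: true               # run on every PR
    decorate: true                 # use Pod utilities for cloning and log upload
    spec:
      containers:
        - image: golang:1.20       # assumed build image
          command:
            - make
          args:
            - test
```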

nikhita (Member) commented May 25, 2023

How can we coordinate on this? I need to set up the Prow stuff and the webhook

re: webhook - out of curiosity, why aren't we adding the webhook at the org level?

re: setting up prow - #26 takes care of it, right? are there additional things required?

xrstf (Contributor, Author) commented May 25, 2023

re: webhook - out of curiosity, why aren't we adding the webhook at the org level?

I've never done that and only recently learned that it works, plus I have no access to org-level settings 😁 In general I don't see an issue with doing it for the entire org.

re: setting up prow - #26 takes care of it, right? are there additional things required?

The other things are

xrstf (Contributor, Author) commented May 25, 2023

In the meantime, for the repos in the checklist above: if there is a dedicated ticket, that means the webhook is set up and someone can pick it up and work on that repo.

palnabarun (Member):

How about we centralize the job configuration in this repo? That way all configs stay in the same place and reviews/approvals can be governed by OWNERS files.
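
For illustration, a rough sketch of how that delegation could look with a kubernetes/test-infra-style layout, where each repo's job directory gets its own OWNERS file (the path and usernames below are made up):

```yaml
# config/jobs/kcp-dev/logicalcluster/OWNERS  (hypothetical path and users)
# Only the people listed here can approve changes to the jobs in this directory,
# so repo maintainers keep control over their own Prowjobs.
approvers:
  - logicalcluster-maintainer-1
reviewers:
  - logicalcluster-maintainer-1
  - logicalcluster-maintainer-2
```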

palnabarun (Member):

Kubernetes also follows the same approach in kubernetes/test-infra.

xrstf (Contributor, Author) commented May 25, 2023

In my experience, having jobs in a central location becomes a problem once the repos start branching heavily and you have to maintain the correct config files for each branch of each repo centrally -- I never liked that and welcomed the in-repo config sooo much.

I also don't think the infra-maintainers should approve Prowjobs of, say, the logicalcluster repo.

palnabarun (Member) commented May 25, 2023

Overall, I do agree that both approaches have their pros and cons.

maintaining the correct config files for each branch of each repo centrally

I am not sure I understand this problem very clearly.

I also don't think the infra-maintainers should approve Prowjobs of, say, the logicalcluster repo.

The infra-maintainers don't need to be in the loop for all repos. Maintainers of each repo can themselves approve Prowjobs for their repo jobs.


I am okay with whatever is decided. I just wanted to understand, out of curiosity.

xrstf (Contributor, Author) commented May 25, 2023

Oh, maybe I misrepresented my thoughts, but I am also not completely against centrally managed jobs. A healthy mix is what I would aim for, with most of the focus on in-repo jobs.

I am not sure I understand this problem very clearly.

In our product, we did a few major code reorganizations, where for example a `command: ./hack/ci-test-e2e.sh` from a Prowjob in product v1.x would need to be `command: ./hack/ci/test-e2e.sh` in v2.x.

This led to lots of `if [ -f ... ]` blocks and impromptu shell scripts in our Prowjobs, and it turned out that instead of trying to maintain one set of jobs for all branches centrally in the infra repo, it's easier to just have the jobs for one branch live in that branch of that repo.
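
To make that concrete, here is roughly what such a workaround looks like when one central job has to serve both layouts (the org/repo, job name, and image are made up; only the path probing mirrors the example above):

```yaml
# Central Prow config: one presubmit trying to cover both v1.x and v2.x branches
presubmits:
  example-org/product:             # hypothetical repo
    - name: pull-product-test-e2e  # hypothetical job name
      decorate: true
      spec:
        containers:
          - image: golang:1.20     # assumed image
            command:
              - /bin/sh
              - -c
              - |
                # The script moved between releases, so probe for its location.
                if [ -f ./hack/ci/test-e2e.sh ]; then
                  ./hack/ci/test-e2e.sh   # v2.x layout
                else
                  ./hack/ci-test-e2e.sh   # v1.x layout
                fi
```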

Sure, we could have copied the jobs or written some more tooling to generate them, but that all seemed like even more overhead.

Centrally managed jobs often shine for postsubmits, and they are a nice way to impose certain minimum jobs for each repo (like we do here with validate-prow-yaml) that must pass before in-repo jobs are even started.
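
As a sketch of that idea, central config could enforce a baseline presubmit for a repo like this (the repo name and the job body are illustrative; the real validate-prow-yaml job may look quite different):

```yaml
# Central Prow config: a baseline presubmit imposed on a repo (illustrative values)
presubmits:
  kcp-dev/example-repo:          # hypothetical repo
    - name: validate-prow-yaml
      always_run: true           # every PR has to run this one
      decorate: true
      spec:
        containers:
          - image: alpine:3.18   # assumed image; the real job may use a dedicated tool
            command: ["/bin/sh", "-c", "test -f .prow.yaml"]  # placeholder validation
```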

palnabarun (Member):

Got you! Thanks for the context @xrstf! 💯

xrstf (Contributor, Author) commented May 25, 2023

But I can also see the downsides of in-repo config. Like when you need to change that one tiny thing, but in all branches of all of your repos, and you spend the rest of the day opening PRs and frying the CI cluster. So yeah, centrally managed jobs can be a godsend, too. :)

nikhita (Member) commented May 27, 2023

Once all repos have been migrated, maybe we could create a single PR to remove the repos from openshift/release to make approvals easier?

xrstf (Contributor, Author) commented May 31, 2023

I fear that if we leave some repos in both Prows, outside contributions might be problematic. I'm not sure though, but that was my reason for doing it individually, per repo.

kcp-ci-bot (Contributor):

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kcp-ci-bot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Apr 16, 2024
kcp-ci-bot (Contributor):

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kcp-ci-bot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on May 16, 2024
kcp-ci-bot (Contributor):

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

kcp-ci-bot (Contributor):

@kcp-ci-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
