
Git Branching and Release Strategy To Support Multiple k8s Versions #273

Closed
seanmalloy opened this issue May 1, 2020 · 21 comments
Labels
kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt.

Comments

@seanmalloy
Member

The master branch and the latest release, v0.10.0, are currently built against k8s v1.17 dependencies. At some point the master branch will need to be updated to support a new k8s version. Bug fixes and features may need to be backported to older versions of descheduler that support older versions of k8s. The branching strategy currently used in this repo does not support this type of requirement.

Questions that need to be answered ...

  • What git branching strategy will be used to support multiple k8s versions?
  • Does the versioning (tagging) standard need to change to support multiple k8s versions?
  • How many k8s versions are supported by this project and for how long?
  • What changes need to be made to CI/CD automation to support the new branching and tagging strategy?
@seanmalloy
Member Author

/kind cleanup

Here are my thoughts ...

For each k8s release a "release" branch should be created in the descheduler git repo, for example release-1.17, release-1.18, etc. Features and bug fixes made to the master branch can then be backported to older releases if necessary by cherry-picking commits to the release branches.
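For illustration, a backport under this scheme might look like the following (branch and commit names are hypothetical):

```sh
# Hypothetical backport of a master fix to the 1.17 release branch
git fetch origin
git checkout -b backport-my-fix origin/release-1.17
git cherry-pick -x <commit-sha>   # -x records the original SHA in the commit message
git push origin backport-my-fix   # then open a PR against release-1.17
```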

The tagging/versioning scheme for descheduler should be changed to align more closely with k8s release versions. For example, the first descheduler release that supports k8s v1.18 should be v0.18.0. Subsequent descheduler releases that support k8s v1.18 would have version numbers v0.18.1, v0.18.2, etc. Descheduler release versions would not use v1.x until the API is promoted to GA. This versioning strategy is inspired by the k8s cluster autoscaler.
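As a concrete sketch of the proposed scheme (tag names assumed):

```sh
# First descheduler release supporting k8s v1.18, cut from the release-1.18 branch
git tag v0.18.0
# A later patch release from the same branch
git tag v0.18.1
git push origin v0.18.0 v0.18.1
```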

The descheduler should use the same k8s version support policy as the main kubernetes project. This means descheduler would support three k8s versions at a time.

List of CI/CD configs to change and maintain to make this work (this list might be incomplete):

  • Update the image build job regex to include "release" branches; a sketch follows this list. See this file
  • The build matrix needs to be maintained in .travis.yml
  • The "release" branches need to have GitHub branch protection enabled when they are created
  • Release documentation should be updated with any required changes.
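For reference, the branch filter change in the image build job might look roughly like this (the job name and exact layout are assumptions; the real definition lives in the file linked above):

```yaml
# Hypothetical Prow postsubmit entry; the branches regex is the relevant part
postsubmits:
  kubernetes-sigs/descheduler:
    - name: post-descheduler-push-images
      branches:
        - ^master$
        - ^release-.*$   # added so release branches also trigger image builds
```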

@ravisantoshgudimetla @aveshagarwal @damemi @ingvagabund PTAL and provide any feedback you have. If this seems reasonable, then I propose that a release-1.17 branch be created. I'm assuming @ravisantoshgudimetla and @aveshagarwal would have access to create this branch.

@k8s-ci-robot k8s-ci-robot added the kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. label May 1, 2020
@damemi
Contributor

damemi commented May 1, 2020

+1, would definitely like to see release branching and module tagging consistent with the rest of kubernetes. This would also keep us motivated to consistently rebase onto new k8s versions and make it clear what version we're running.

@seanmalloy
Member Author

@ravisantoshgudimetla and @aveshagarwal do you have permissions to create branches and enable branch protection?

I think the first release branch that needs to be created should be release-1.17. We can start updating the master branch for k8s v1.18 after the branch is created and branch protection is enabled.
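For anyone with write access, creating the branch could be as simple as this sketch (assuming the branch is cut from the tip of master):

```sh
git checkout master
git pull origin master
git push origin master:release-1.17   # branch protection is then enabled in the repo settings
```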

@ingvagabund
Contributor

+1, with multiple releases we can also more easily compare changes in behavior between versions, e.g. bug fixes in strategies that might turn into new bugs.

@seanmalloy
Member Author

@ravisantoshgudimetla @aveshagarwal @k82cn according to this bit of YAML you are the three people with write and admin access to the descheduler repo. Can one of you please create the release-1.17 branch in this repo and enable branch protection? It can be created from the latest commit on master.

Also I propose that we add @damemi to the list of people with write and admin access to this repo now that he is an approver.

@ravisantoshgudimetla
Contributor

+1. This actually helps us align with k8s release cadence.

@ravisantoshgudimetla @aveshagarwal @k82cn according to this bit of YAML you are the three people with write and admin access to the descheduler repo

Also I propose that we add @damemi to the list of people with write and admin access to this repo now that he is an approver.

I added @damemi as admin to this repo and quickly realized that the maintainers are managed via GitHub teams, so perhaps the best way would be to get added to that admin team. I am not sure of the process, but IIRC @mrbobbytables created the team for us when we moved from the kubernetes-incubator org (perhaps via the YAML you pointed to, @seanmalloy). So the best thing would be to reach out to them and ask about the process. I am fine with making Mike admin of this repo.

Can one of you please create the release-1.17 branch in this repo and enable branch protection? It can be created from the latest commit on master.

Done, I created it and enabled branch protection. I also realized that on the master branch we had not made Travis a required test; I changed that now. Please take a look and let me know if you need anything else from me or @aveshagarwal.

@damemi
Contributor

damemi commented May 12, 2020

@seanmalloy @ravisantoshgudimetla now that we've cut a release-1.17 branch, should we also go about updating to k8s 1.18 and then cut a release-1.18 branch? It looks like we missed updating to 1.18 with that release, so we have some catching up to do.

@seanmalloy
Member Author

We should update the master branch to use k8s 1.18.

In my opinion we can wait until k8s 1.19 is released before creating the 1.18 release branch.

@damemi
Contributor

damemi commented May 12, 2020

We should update the master branch to use k8s 1.18.

Right, that's what I meant. I think we could create the 1.18 release branch now though (after we bump master), since 1.18 has been GA for a while, and then work on a release-1.19 branch soon after the release of 1.19. At least, this is the pattern most of the other repos follow.

@seanmalloy
Member Author

I think we could create the 1.18 release branch now though

Either way works for me. As you mentioned, we should probably follow what the other k8s projects do.

@damemi
Contributor

damemi commented May 12, 2020

@seanmalloy just to clarify, by "now" I meant after we bump master to 1.18, of course.

@seanmalloy
Member Author

@damemi makes sense to me.

@damemi
Contributor

damemi commented May 12, 2020

1.18 bump opened here: #280

@mrbobbytables
Member

Just to chime in on the teams: you are correct in that a PR should be opened against k/org with the team changes. 👍

@damemi
Contributor

damemi commented May 12, 2020

@ravisantoshgudimetla @seanmalloy I would gladly accept the responsibilities of admin/write access to help with this in the future. Thanks @mrbobbytables, I've opened a PR adding myself to that yaml section here: kubernetes/org#1874

@seanmalloy
Member Author

seanmalloy commented May 13, 2020

I opened PR kubernetes/test-infra#17587 to enable automated container image builds for release branches.

@damemi
Contributor

damemi commented May 13, 2020

@seanmalloy thanks, should we also add a v0.11.0 tag for release-1.17 and promote the gcr.io prod image?

After 1.17 is set, we can cut the 1.18 branch (and, subsequently, a matching v0.12.0 tag and promoted image to go with it).
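If I understand the promotion process correctly, promoting the staging image means adding its digest to the promoter manifest in kubernetes/k8s.io, roughly like this (the digest and file layout are assumptions):

```yaml
# Hypothetical entry in the k8s-staging-descheduler promoter manifest
- name: descheduler
  dmap:
    "sha256:<digest-of-the-staging-image>": ["v0.11.0"]
```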

Then we are all caught up and (ideally) from that point forward we'll only need to cut, tag, and promote once per release. We just fell 2 releases behind.

@seanmalloy
Member Author

@damemi I agree it is time to create some more releases.

For the release-1.17 branch I created #283 to bump the dependencies to k8s 1.17.5. After that I think you are clear to create a new release from the release-1.17 branch.
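For reference, a k8s 1.17.5 dependency bump with Go modules might look like this sketch (the repo's actual update process may differ):

```sh
go get k8s.io/api@v0.17.5 k8s.io/apimachinery@v0.17.5 k8s.io/client-go@v0.17.5
go mod tidy
go mod vendor   # if the repo vendors its dependencies
```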

I'm not aware of any additional changes that are needed prior to creating a new release from the release-1.18 branch.

Regarding the versioning scheme for the tags going forward, I think we should consider matching the descheduler minor version to the supported k8s minor version. This means we would release descheduler version v0.17.0 from the release-1.17 branch and descheduler version v0.18.0 from the release-1.18 branch. In my opinion this will make it easier for end users to match the descheduler version to the supported k8s version.

Also, let me know if you want any help creating release notes.

@damemi
Contributor

damemi commented May 14, 2020

@seanmalloy I agree, I think it would make sense to match our tags up with the rest of k8s for sanity, as long as there's no problem with skipping up to v0.17.0.

I've opened #284 to summarize the proposed release branching schedule we've discussed here for 1.19 (and going forward) and tagged the rest of the descheduler folks. If we have a consensus I'll go forward with that, but I'm still open to any changes if anyone has them.

@seanmalloy
Member Author

/close

I confirmed that the automated container image builds for release-.* branches are working. Here is a Prow job that ran successfully for the release-1.17 branch.

Travis CI is working for release branches. The Travis CI build matrix was updated on the master branch for k8s 1.18 and is working. All the descheduler approvers have access to create new release branches and enable branch protection when needed.

I'm closing this issue. Any remaining follow-ups are now being tracked in #284.

@k8s-ci-robot
Contributor

@seanmalloy: Closing this issue.

In response to this:

/close

I confirmed that the automated container image builds for release-.* branches are working. Here is a Prow job that ran successfully for the release-1.17 branch.

Travis CI is working for release branches. The Travis CI build matrix was updated on the master branch for k8s 1.18 and is working. All the descheduler approvers have access to create new release branches and enable branch protection when needed.

I'm closing this issue. Any remaining follow-ups are now being tracked in #284.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
