
📖 book: add ipam contract #10108

Open · schrej wants to merge 1 commit into main

Conversation

@schrej (Member) commented Feb 6, 2024

What this PR does / why we need it:
Adds the IPAM contract to the book.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

/area ipam
/area documentation

@k8s-ci-robot (Contributor):

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

k8s-ci-robot added the do-not-merge/work-in-progress, area/ipam, area/documentation, and cncf-cla: yes labels on Feb 6, 2024
k8s-ci-robot added the size/M label (30-99 lines changed, ignoring generated files) on Feb 6, 2024
@k8s-triage-robot:

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on May 6, 2024
@sbueringer (Member):

@schrej Still interested in this one?

@schrej (Member, Author) commented May 7, 2024

Yes, I just forgot about it... I think I put it on hold due to some uncertainty around clusterctl move. I now also need to update it to use spec.clusterName instead of the annotation.

@schrej (Member, Author) commented May 7, 2024

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on May 7, 2024
@enxebre (Member) commented May 31, 2024

@schrej is this still WIP? LGTM as a good starting point.

@sbueringer (Member):

@schrej friendly reminder 😀

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign sbueringer for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

schrej marked this pull request as ready for review on July 31, 2024 at 14:35
k8s-ci-robot removed the do-not-merge/work-in-progress label on Jul 31, 2024
@schrej (Member, Author) commented Jul 31, 2024

I think this should be ready now. I've updated it to include the new spec.clusterName field and noted that the cluster name label is deprecated.

Sorry that it took so long.
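For context, a minimal sketch of what that change looks like on a claim, assuming the `ipam.cluster.x-k8s.io/v1beta1` types described in the contract; all names here are hypothetical:

```yaml
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: my-machine-net0        # hypothetical claim name
  namespace: default
  # previously the cluster was identified via the (now deprecated) label:
  # labels:
  #   cluster.x-k8s.io/cluster-name: my-cluster
spec:
  clusterName: my-cluster      # new field replacing the deprecated label
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool      # hypothetical pool kind
    name: my-pool
```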

@sbueringer (Member):
@lubronzhan Do you maybe have some time to take a first look? (in case you're familiar with what we did in CAPV at the time)

@lubronzhan (Contributor) left a comment:

Sorry for the late reply. Overall LGTM!

@sbueringer (Member) left a comment:

Thx! Just a few small findings.

Review threads on docs/book/src/developer/providers/ipam.md (resolved; several marked outdated).

1. Create an `IPAddressClaim`
   1. The `spec.poolRef` must reference the pool you want to use.
   2. It should have an owner reference to the infrastructure Machine it is created for (required to support `clusterctl move`), as sketched below.
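A sketch of a claim following these two rules, assuming the `ipam.cluster.x-k8s.io/v1beta1` API version and a hypothetical infrastructure provider kind and pool:

```yaml
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: my-machine-net0                  # hypothetical
  namespace: default
  ownerReferences:
    # reference to the infrastructure Machine the claim is created for,
    # required so that clusterctl move carries the claim along
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: FooMachine                   # hypothetical infrastructure provider kind
      name: my-machine
      uid: "<uid-of-the-infra-machine>"  # placeholder
spec:
  poolRef:                               # must reference the pool to allocate from
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool                # hypothetical pool kind
    name: my-pool
```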
Review comment from a Member:
What about controller / blockOwnerDeletion here?
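For reference, setting those fields on the owner reference would look like this (a sketch only; whether the contract should require them is exactly the open question):

```yaml
ownerReferences:
  - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: FooMachine                 # hypothetical infrastructure provider kind
    name: my-machine
    uid: "<uid-of-the-infra-machine>"
    controller: true                 # marks this owner as the managing controller
    blockOwnerDeletion: true         # with foreground deletion, owner deletion waits for the claim
```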

@schrej (Member, Author) replied:
Good point. We're currently investigating an issue when deleting metal3-based clusters, where claims become deadlocked because the cluster vanishes before they are cleaned up. The paused check prevents a claim from being released if its cluster can't be found. We'll either have to make sure the cluster doesn't get deleted first, or skip the paused check when releasing addresses.

@schrej (Member, Author) commented Sep 20, 2024:

I propose to merge this as is, and we'll update it if we change how this works in the in-cluster ipam implementation.
I've added a note to the ipam issue: kubernetes-sigs/cluster-api-ipam-provider-in-cluster#289

k8s-ci-robot added the needs-rebase label on Sep 19, 2024
k8s-ci-robot removed the needs-rebase label on Sep 20, 2024
@fabriziopandini (Member):
Nice!
Happy to lgtm/approve once findings are addressed

k8s-ci-robot added the needs-rebase label on Sep 20, 2024
(rebase commit; conflicts in docs/book/src/developer/providers/contracts.md)
k8s-ci-robot added and then removed the needs-rebase label on Sep 20, 2024
@k8s-ci-robot (Contributor):

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels: area/documentation, area/ipam, cncf-cla: yes, needs-rebase, size/M

7 participants