stability: iptables analysis #268
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: deads2k. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
## Proposal

1. Run a daemonset on every master that records iptables rules into a configmap on some reasonable cadence and expiry policy.
2. Write an analyzer binary that checks to see how far out of date they are. This involves...
This seems to assume a world in which the monitoring daemonset is 100% reliably able to generate the expected current iptables state, but kube-proxy is not...
   ...you can determine roughly how latent the iptables rules are.
3. Write summaries into some resource which you can then report metrics and degraded conditions against.
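A minimal sketch of what the step 1 recording agent might look like, assuming client-go; the `openshift-network-diagnostics` namespace, the annotation key, the 5-minute cadence, and the `NODE_NAME` downward-API variable are all illustrative choices, not part of the proposal:

```go
// iptables-recorder: capture iptables-save output on this node and
// publish it to a per-node ConfigMap with a capture timestamp.
package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const namespace = "openshift-network-diagnostics" // hypothetical namespace

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node := os.Getenv("NODE_NAME") // injected via the downward API

	for {
		if err := snapshot(client, node); err != nil {
			log.Printf("snapshot failed: %v", err)
		}
		time.Sleep(5 * time.Minute) // "reasonable cadence" is a guess here
	}
}

func snapshot(client kubernetes.Interface, node string) error {
	out, err := exec.Command("iptables-save").Output()
	if err != nil {
		return err
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "iptables-" + node,
			Namespace: namespace,
			Annotations: map[string]string{
				// hypothetical annotation key
				"network.openshift.io/captured-at": time.Now().UTC().Format(time.RFC3339),
			},
		},
		Data: map[string]string{"rules": string(out)},
	}
	// Overwrite the previous snapshot for this node; create it on first run.
	_, err = client.CoreV1().ConfigMaps(namespace).Update(context.TODO(), cm, metav1.UpdateOptions{})
	if apierrors.IsNotFound(err) {
		_, err = client.CoreV1().ConfigMaps(namespace).Create(context.TODO(), cm, metav1.CreateOptions{})
	}
	return err
}
```

The proposal's expiry policy is not shown; this sketch simply replaces each node's previous snapshot in place.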
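A corresponding sketch of the step 2/3 analyzer. The "This involves..." detail is elided from the excerpt, so the staleness heuristic below — every allocated Service ClusterIP should appear somewhere in the captured rules — is one guessed comparison, not the proposal's actual method; step 3's summary resource is stubbed out as a log line:

```go
// iptables-analyzer: compare each node's captured rules against the
// current Service list and summarize how stale they look.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const namespace = "openshift-network-diagnostics" // same hypothetical namespace

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	cms, err := client.CoreV1().ConfigMaps(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	for _, cm := range cms.Items {
		// Zero time (and thus a huge age) if the annotation is missing or unparsable.
		captured, _ := time.Parse(time.RFC3339, cm.Annotations["network.openshift.io/captured-at"])
		rules := cm.Data["rules"]

		// Guessed heuristic: every allocated Service ClusterIP should show up
		// in the rules kube-proxy programs; count the ones that don't.
		missing := 0
		for _, svc := range svcs.Items {
			ip := svc.Spec.ClusterIP
			if ip == "" || ip == corev1.ClusterIPNone {
				continue // headless or not yet allocated
			}
			if !strings.Contains(rules, ip) {
				missing++
			}
		}

		// Step 3 would persist this summary to a resource and drive metrics
		// and Degraded conditions; here it is just printed.
		fmt.Printf("%s: snapshot age %s, %d/%d services missing from rules\n",
			cm.Name, time.Since(captured).Round(time.Second), missing, len(svcs.Items))
	}
}
```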
### User Stories [optional]
It seems like the "user stories" for all of these enhancements should be links to the sorts of bugs that the enhancement is supposed to help debug/prevent. Then we can compare the Proposal against what the actual problem was to see if it would really have helped.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. /close
@openshift-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This helps identify and root-cause one common source of "ingress for X is down!" reports.