
NetworkPolicies for System Namespaces #613

Closed

Conversation

danwinship
Contributor

For https://issues.redhat.com/browse/RFE-701: add a mode in which system services are protected by NetworkPolicies (in addition to the existing TLS certificate authentication), for "defense in depth".
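For concreteness, here is a minimal sketch, not taken from the RFE or the enhancement text, of the kind of policy such a mode would manage: default-deny ingress on a system namespace, admitting only one known peer. It is written as CNO-style Go using client-go; the namespace names, the peer label, and the port are all placeholder assumptions.

```go
// Sketch: create a default-deny-with-one-exception NetworkPolicy in a
// system namespace. "openshift-example", the monitoring peer, and port
// 8443 are hypothetical, for illustration only.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	tcp := corev1.ProtocolTCP
	port := intstr.FromInt(8443) // hypothetical service port

	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "allow-from-monitoring", // hypothetical name
			Namespace: "openshift-example",     // hypothetical system namespace
		},
		Spec: networkingv1.NetworkPolicySpec{
			// An empty pod selector matches every pod in the namespace; once
			// an Ingress rule is present, traffic it doesn't match is denied.
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					// Assumes the peer namespace carries this well-known label.
					NamespaceSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{
							"kubernetes.io/metadata.name": "openshift-monitoring",
						},
					},
				}},
				Ports: []networkingv1.NetworkPolicyPort{{Protocol: &tcp, Port: &port}},
			}},
		},
	}

	if _, err := client.NetworkingV1().NetworkPolicies(policy.Namespace).
		Create(context.TODO(), policy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The empty pod selector plus a single ingress rule is what gives default-deny semantics for the whole namespace, which is why connections the policy author didn't anticipate (discussed below) matter so much.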

@danwinship
Contributor Author

For https://issues.redhat.com/browse/RFE-701

(There's no actual epic filed for this yet, but since it's going to take several releases to move all the pieces into place I wanted to start early...)

@danwinship
Contributor Author

/assign @dcbw @abhat @russellb @squeed @knobunc

@squeed
Contributor

squeed commented Mar 22, 2021

Hmm - one thought about how we might be able to incrementally enhance this: all new system namespaces need to have a network policy in them (which we can enforce with an e2e test). This would be regardless of "restricted mode".
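A rough sketch of that e2e check, assuming the openshift- name prefix is what identifies system namespaces and using client-go directly; a real test would presumably carry an exemption list for namespaces that are intentionally open.

```go
// Sketch of the suggested e2e test: fail if any openshift-* namespace
// contains no NetworkPolicy. The prefix heuristic and the absence of an
// exemption list are simplifying assumptions.
package e2e

import (
	"context"
	"strings"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func TestSystemNamespacesHaveNetworkPolicies(t *testing.T) {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		t.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	namespaces, err := client.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		t.Fatal(err)
	}
	for _, ns := range namespaces.Items {
		if !strings.HasPrefix(ns.Name, "openshift-") {
			continue // only system namespaces are in scope
		}
		policies, err := client.NetworkingV1().NetworkPolicies(ns.Name).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			t.Fatal(err)
		}
		if len(policies.Items) == 0 {
			t.Errorf("system namespace %s has no NetworkPolicy", ns.Name)
		}
	}
}
```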

Review thread on this excerpt from the enhancement text:

> …as restricted, and CNO would fix things up if they were supposed to be open.
>
> However, 2b would make upgrades more complicated, since we have to…
Contributor

@squeed squeed Mar 22, 2021

Do we really need to worry about this? The only connections that would be disrupted would be

  1. connections we somehow missed when writing the network policies, and
  2. only those between the operator's upgrade and the CNO upgrade.

I ask because option 2b seems like the best choice. It also gets us closer to a CNO-managed "all-namespaces-are-restricted" mode, a.k.a. son-of-Multitenant.

Contributor Author

I didn't mean "upgrades in general"; I meant specifically the case of upgrading from a version of OCP that doesn't implement this feature to the version that does, in a cluster where the administrator does not want to use this feature and has components of their own, which we don't know about, accessing random OpenShift components. In that case, during the upgrade, user workloads would be blocked from accessing OpenShift components, possibly creating outages.

If you're going to argue that that's not a real problem, then you're essentially arguing that we don't actually need to preserve the permissive option at all; we can just start switching all components to be restrictive with no way to override it.

@danwinship force-pushed the system-networkpolicies branch from 593ce2c to a9ca2df on April 1, 2021 12:58
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by:
To complete the pull request process, please ask for approval from abhat after the PR has been reviewed.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale label Jun 30, 2021
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label Jul 30, 2021
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci
Contributor

openshift-ci bot commented Aug 29, 2021

@openshift-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
