Support running with restricted PSA enforcement enabled (part 1) #2572
@@ -44,8 +44,11 @@ type ConnectHelper struct {
// ReleaseName is the name of the Consul cluster.
ReleaseName string

// Ctx is used to deploy Consul
Ctx environment.TestContext
Cfg *config.TestConfig
// AppCtx is used to deploy applications. If nil, then Ctx is used.
AppCtx environment.TestContext
Cfg *config.TestConfig

// consulCluster is the cluster to use for the test.
consulCluster consul.Cluster

@@ -82,6 +85,14 @@ func (c *ConnectHelper) Upgrade(t *testing.T) {
c.consulCluster.Upgrade(t, c.helmValues())
}

// appCtx returns the context where apps are deployed.
func (c *ConnectHelper) appCtx() environment.TestContext {
if c.AppCtx != nil {
return c.AppCtx
}
return c.Ctx
}

// DeployClientAndServer deploys a client and server pod to the Kubernetes
// cluster which will be used to test service mesh connectivity. If the Secure
// flag is true, a pre-check is done to ensure that the ACL tokens for the test

@@ -108,23 +119,47 @@ func (c *ConnectHelper) DeployClientAndServer(t *testing.T) {

logger.Log(t, "creating static-server and static-client deployments")

k8s.DeployKustomize(t, c.Ctx.KubectlOptions(t), c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-server-inject")
if c.Cfg.EnableTransparentProxy {
k8s.DeployKustomize(t, c.Ctx.KubectlOptions(t), c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-client-tproxy")
ctx := c.appCtx()
opts := ctx.KubectlOptions(t)

if c.Cfg.EnableCNI && c.Cfg.EnableOpenshift {
// On OpenShift with the CNI, we need to create a network attachment definition in the namespace
// where the applications are running, and the app deployment configs need to reference that network
// attachment definition.

// TODO: A base fixture is the wrong place for these files
k8s.KubectlApply(t, opts, "../fixtures/bases/openshift/")
Instead of using the AppNamespace flag, you could set the namespace here using …

I'm not really clear what you mean. This is in … Right now, it's using a namespace sourced from the test flags via the context (because that's what I set up). Do you mean I should instead have each test pass a namespace to …?

I tried something else: … This way there is a shared/common way to easily create an app namespace that tests can use if needed, and there are no more … I also added a …

(Also, I accidentally did all of this while rebasing, so the latest commit that looks like a merge from main actually contains the changes - sorry!)

I like the …
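For illustration only, a minimal sketch of what such a shared app-namespace helper could look like. It reuses the helpers.Cleanup pattern and the Kubernetes client access visible in this diff; the function name, package placement, and import paths are assumptions, not the code actually added in this PR.

package connhelper

import (
	"context"
	"testing"

	"github.com/hashicorp/consul-k8s/acceptance/framework/environment"
	"github.com/hashicorp/consul-k8s/acceptance/framework/helpers"
	"github.com/stretchr/testify/require"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createAppNamespace is a hypothetical helper: it creates a namespace for the
// test applications in the given context and registers a cleanup that removes
// the namespace when the test finishes.
func createAppNamespace(t *testing.T, ctx environment.TestContext, noCleanupOnFailure bool, name string) {
	client := ctx.KubernetesClient(t)

	// Create the application namespace.
	_, err := client.CoreV1().Namespaces().Create(context.Background(), &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
	require.NoError(t, err)

	// Delete it again after the test, honoring the NoCleanupOnFailure setting.
	helpers.Cleanup(t, noCleanupOnFailure, func() {
		_ = client.CoreV1().Namespaces().Delete(context.Background(), name, metav1.DeleteOptions{})
	})
}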
helpers.Cleanup(t, c.Cfg.NoCleanupOnFailure, func() {
k8s.KubectlDelete(t, opts, "../fixtures/bases/openshift/")
})

k8s.DeployKustomize(t, opts, c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-server-openshift")
if c.Cfg.EnableTransparentProxy {
k8s.DeployKustomize(t, opts, c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-client-openshift-tproxy")
} else {
k8s.DeployKustomize(t, opts, c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-client-openshift-inject")
}
} else {
k8s.DeployKustomize(t, c.Ctx.KubectlOptions(t), c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-client-inject")
}
k8s.DeployKustomize(t, opts, c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-server-inject")
if c.Cfg.EnableTransparentProxy {
k8s.DeployKustomize(t, opts, c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-client-tproxy")
} else {
k8s.DeployKustomize(t, opts, c.Cfg.NoCleanupOnFailure, c.Cfg.DebugDirectory, "../fixtures/cases/static-client-inject")
}

}
// Check that both static-server and static-client have been injected and
// now have 2 containers.
retry.RunWith(
&retry.Timer{Timeout: 30 * time.Second, Wait: 100 * time.Millisecond}, t,
func(r *retry.R) {
for _, labelSelector := range []string{"app=static-server", "app=static-client"} {
podList, err := c.Ctx.KubernetesClient(t).CoreV1().Pods(c.Ctx.KubectlOptions(t).Namespace).List(context.Background(), metav1.ListOptions{
LabelSelector: labelSelector,
FieldSelector: `status.phase=Running`,
})
podList, err := ctx.KubernetesClient(t).CoreV1().
Pods(opts.Namespace).
List(context.Background(), metav1.ListOptions{
LabelSelector: labelSelector,
FieldSelector: `status.phase=Running`,
})
require.NoError(r, err)
require.Len(r, podList.Items, 1)
require.Len(r, podList.Items[0].Spec.Containers, 2)

@@ -136,7 +171,7 @@ func (c *ConnectHelper) DeployClientAndServer(t *testing.T) {
// and intentions. This helper is primarly used to ensure that the virtual-ips are persisted to consul properly.
func (c *ConnectHelper) CreateResolverRedirect(t *testing.T) {
logger.Log(t, "creating resolver redirect")
options := c.Ctx.KubectlOptions(t)
options := c.appCtx().KubectlOptions(t)
kustomizeDir := "../fixtures/cases/resolver-redirect-virtualip"
k8s.KubectlApplyK(t, options, kustomizeDir)

@@ -149,10 +184,12 @@ func (c *ConnectHelper) CreateResolverRedirect(t *testing.T) {
// server fails when no intentions are configured.
func (c *ConnectHelper) TestConnectionFailureWithoutIntention(t *testing.T) {
logger.Log(t, "checking that the connection is not successful because there's no intention")
ctx := c.appCtx()
opts := ctx.KubectlOptions(t)
if c.Cfg.EnableTransparentProxy {
k8s.CheckStaticServerConnectionFailing(t, c.Ctx.KubectlOptions(t), StaticClientName, "http://static-server")
k8s.CheckStaticServerConnectionFailing(t, opts, StaticClientName, "http://static-server")
} else {
k8s.CheckStaticServerConnectionFailing(t, c.Ctx.KubectlOptions(t), StaticClientName, "http://localhost:1234")
k8s.CheckStaticServerConnectionFailing(t, opts, StaticClientName, "http://localhost:1234")
}
}

@@ -177,11 +214,13 @@ func (c *ConnectHelper) CreateIntention(t *testing.T) {
// static-client pod once the intention is set.
func (c *ConnectHelper) TestConnectionSuccess(t *testing.T) {
logger.Log(t, "checking that connection is successful")
ctx := c.appCtx()
opts := ctx.KubectlOptions(t)
if c.Cfg.EnableTransparentProxy {
// todo: add an assertion that the traffic is going through the proxy
k8s.CheckStaticServerConnectionSuccessful(t, c.Ctx.KubectlOptions(t), StaticClientName, "http://static-server")
k8s.CheckStaticServerConnectionSuccessful(t, opts, StaticClientName, "http://static-server")
} else {
k8s.CheckStaticServerConnectionSuccessful(t, c.Ctx.KubectlOptions(t), StaticClientName, "http://localhost:1234")
k8s.CheckStaticServerConnectionSuccessful(t, opts, StaticClientName, "http://localhost:1234")
}
}

@@ -192,8 +231,11 @@ func (c *ConnectHelper) TestConnectionFailureWhenUnhealthy(t *testing.T) {
// Test that kubernetes readiness status is synced to Consul.
// Create a file called "unhealthy" at "/tmp/" so that the readiness probe
// of the static-server pod fails.
ctx := c.appCtx()
opts := ctx.KubectlOptions(t)

logger.Log(t, "testing k8s -> consul health checks sync by making the static-server unhealthy")
k8s.RunKubectl(t, c.Ctx.KubectlOptions(t), "exec", "deploy/"+StaticServerName, "--", "touch", "/tmp/unhealthy")
k8s.RunKubectl(t, opts, "exec", "deploy/"+StaticServerName, "--", "touch", "/tmp/unhealthy")

// The readiness probe should take a moment to be reflected in Consul,
// CheckStaticServerConnection will retry until Consul marks the service

@@ -205,20 +247,20 @@ func (c *ConnectHelper) TestConnectionFailureWhenUnhealthy(t *testing.T) {
// other tests.
logger.Log(t, "checking that connection is unsuccessful")
if c.Cfg.EnableTransparentProxy {
k8s.CheckStaticServerConnectionMultipleFailureMessages(t, c.Ctx.KubectlOptions(t), StaticClientName, false, []string{
k8s.CheckStaticServerConnectionMultipleFailureMessages(t, opts, StaticClientName, false, []string{
"curl: (56) Recv failure: Connection reset by peer",
"curl: (52) Empty reply from server",
"curl: (7) Failed to connect to static-server port 80: Connection refused",
}, "", "http://static-server")
} else {
k8s.CheckStaticServerConnectionMultipleFailureMessages(t, c.Ctx.KubectlOptions(t), StaticClientName, false, []string{
k8s.CheckStaticServerConnectionMultipleFailureMessages(t, opts, StaticClientName, false, []string{
"curl: (56) Recv failure: Connection reset by peer",
"curl: (52) Empty reply from server",
}, "", "http://localhost:1234")
}

// Return the static-server to a "healthy state".
k8s.RunKubectl(t, c.Ctx.KubectlOptions(t), "exec", "deploy/"+StaticServerName, "--", "rm", "/tmp/unhealthy")
k8s.RunKubectl(t, opts, "exec", "deploy/"+StaticServerName, "--", "rm", "/tmp/unhealthy")
}

// helmValues uses the Secure and AutoEncrypt fields to set values for the Helm

I don't think we want a bunch of these flags/settings as top-level flags, as I could see them having unintended consequences. For example, AppNamespace/SecondaryAppNamespace: I am sure that if you set this globally in CI and started installing static-client/static-server in different namespaces, a large chunk of the tests would fail. Now you need to modify all the tests to basically handle -n in the kubectl apply. Not really a bad thing to handle overall, but probably not a framework change you want in this PR. I'd make AppNamespace/SecondaryAppNamespace test-specific or OpenShift-specific if needed.

For CNINamespace, we made the design decision that the CNI was always installed alongside Consul. Other CNI plugins do it differently and install their plugins in their own namespace. I don't think OpenShift needs the CNI to be installed somewhere else to work? If you want it for setting SCC parameters then you should be able to use {{ release-name }} from Helm, maybe?
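As a rough illustration of what "handling -n in the kubectl apply" amounts to in this framework, a hedged sketch that targets an explicit namespace via terratest's NewKubectlOptions together with the DeployKustomize helper seen in this diff; the namespace value, import alias, and import paths are assumptions, not anything from this PR.

package example

import (
	"testing"

	terratestk8s "github.com/gruntwork-io/terratest/modules/k8s"
	"github.com/hashicorp/consul-k8s/acceptance/framework/config"
	"github.com/hashicorp/consul-k8s/acceptance/framework/k8s"
)

// deployClientIntoNamespace deploys the static-client fixture into an explicit
// namespace rather than the context's default one. Illustrative only.
func deployClientIntoNamespace(t *testing.T, cfg *config.TestConfig, namespace string) {
	// Current kube context and default kubeconfig path, but an explicit
	// namespace -- effectively the -n flag on the underlying kubectl apply.
	opts := terratestk8s.NewKubectlOptions("", "", namespace)

	k8s.DeployKustomize(t, opts, cfg.NoCleanupOnFailure, cfg.DebugDirectory, "../fixtures/cases/static-client-inject")
}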
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Right. But if someone wants to run Consul in a namespace with restricted PSA enforcement, then they can't run the CNI in that same namespace because the CNI requires privilege. Eventually, (we assume) restricted PSA enforcement is going to be the OpenShift default, so the goal here was to enable testing with that potential future configuration.
This is still relevant to vanilla K8s because you can enable restricted PSA enforcement there as well (although it likely will never be the default like on OpenShift). So I didn't want to make the app namespace settings OpenShift-specific, because it's useful (and easier) to be able to test this same configuration on vanilla K8s as well.
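For reference, the restriction in question is driven by the standard Pod Security Admission namespace labels. Below is a minimal sketch (not part of this PR) of a namespace object labeled for restricted enforcement, using the same client-go types the helper already imports; the function name and package placement are hypothetical.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedAppNamespace returns a namespace labeled for restricted Pod
// Security Admission enforcement: app pods can run there, but the privileged
// CNI plugin cannot.
func restrictedAppNamespace(name string) *corev1.Namespace {
	return &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			Labels: map[string]string{
				// Standard Pod Security Admission labels.
				"pod-security.kubernetes.io/enforce":         "restricted",
				"pod-security.kubernetes.io/enforce-version": "latest",
			},
		},
	}
}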
Another aspect of this choice is that I eventually (if I had infinite time) want to run all existing tests with PSA enforcement enabled, so I wanted a way that could apply to all tests, so that every test could deploy Consul and the CNI and apps in a compatible way. Then I can run the entire suite for coverage.
That is more of a dream at this point, because I still have to touch every test and it's unlikely I'd finish that work. But I haven't come up with a way to test with restricted PSA enforcement enabled that doesn't involve touching potentially every test case.
I replaced all these flags with -restricted-psa-enforcement-enabled (default false). When true:

- The namespaces passed via -kube-namespaces need to be configured with the restricted PSA enforcement label before tests are run.
- The CNI is installed into the kube-system namespace, which is privileged and already exists (so I removed the -cni-namespace flag).
- I added a UseAppNamespace setting that tests can optionally use (based on whether -restricted-psa-enforcement-enabled is set).

Let me know what you think.
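To make that concrete, here is a sketch of how a test might wire this up. The RestrictedPSAEnforcementEnabled field name is a guess at how the -restricted-psa-enforcement-enabled flag could surface in the config struct, the package and import layout are assumed, and the Consul installation step is omitted; the ConnectHelper fields and methods are the ones visible in this diff.

package example

import (
	"testing"

	"github.com/hashicorp/consul-k8s/acceptance/framework/config"
	"github.com/hashicorp/consul-k8s/acceptance/framework/connhelper"
	"github.com/hashicorp/consul-k8s/acceptance/framework/environment"
)

// connectAcrossNamespaces is an illustrative test body: Consul is deployed via
// ctx, and the apps are deployed via appCtx only when restricted PSA
// enforcement is enabled; otherwise AppCtx stays nil and Ctx is used.
// (Installing Consul through the helper is omitted here.)
func connectAcrossNamespaces(t *testing.T, cfg *config.TestConfig, ctx, appCtx environment.TestContext) {
	ch := connhelper.ConnectHelper{
		ReleaseName: "consul", // placeholder release name
		Ctx:         ctx,
		Cfg:         cfg,
	}

	// Hypothetical config field corresponding to -restricted-psa-enforcement-enabled.
	if cfg.RestrictedPSAEnforcementEnabled {
		ch.AppCtx = appCtx
	}

	ch.DeployClientAndServer(t)
	ch.CreateIntention(t)
	ch.TestConnectionSuccess(t)
}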