MGMT-19120: Use service net to connect to hosted API server #7090
base: master
Conversation
@jhernand: This pull request references MGMT-19120, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: the epic was expected to target the "4.19.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jhernand. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 63c3673 to 390c90d.
@eranco74, can you review this PR?
Force-pushed from 390c90d to 3e7178a.
Force-pushed from 4cd5768 to 92b8bac.
Codecov Report. Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #7090 +/- ##
==========================================
+ Coverage 67.52% 67.63% +0.10%
==========================================
Files 296 296
Lines 40088 40158 +70
==========================================
+ Hits 27071 27160 +89
+ Misses 10574 10548 -26
- Partials 2443 2450 +7
/retest-required
Force-pushed from 92b8bac to 7b9e4dc.
internal/spoke_k8s_client/factory.go
Outdated
}

// SetHubClient sets the client that will be used to call the API of the hub cluster. This is mandatory.
func (b *SpokeK8sClientFactoryBuilder) SetHubClient(value ctrlclient.Client) *SpokeK8sClientFactoryBuilder {
Why can't we provide the client while creating the object?
We have it here: https://github.com/openshift/assisted-service/pull/7090/files#diff-891ba2cfffd82a8ae4131c88beb092d2a88149f579c931e3a1f7ca77fbfc82a5L166 No?
Sorry, my fault, I missed
https://github.com/openshift/assisted-service/pull/7090/files#diff-c444f711e9191b53952edb65bfd8c644419fc7695c62611dc0fb304b4fb197d6R625
Though it seems this is a mandatory parameter and we will get an error from Build if it is not set, so why not provide it as a parameter to New? Same, actually, for the logger.
This is just a way to make the code cleaner, avoiding long lists of parameters. We could pass the logger, the client (and the transport wrapper, currently only used for tests) as parameters to the "New..." function, but over time that results in long lists of parameters like this:
api = NewManager(common.GetTestLog(), db, testing.GetDummyNotificationStream(ctrl), mockEventApi, nil, nil, nil, nil, &config, &leader.DummyElector{}, nil, nil, true, nil, nil, false)
It is already useful to avoid setting the transport wrapper parameter to nil.
But these are required parameters, and in this case you left them as optional, so I don't actually understand why it is good.
I believe it is good for several reasons:

1. It is consistent: all the parameters (required or optional) are provided in the same way.

2. It makes it clearer what each parameter means. Not in this case, but if you had two parameters that are strings, it is not the same to see this:

whatever, err := NewWhatever("foo", "bar")

as this:

whatever, err := NewWhatever().
    SetUserName("foo").
    SetPassword("bar").
    Build()

In the first case you have to dig deeper to find out the meaning of the parameters; in the second it is explicit.

3. It gives room for documenting each parameter separately: the documentation goes in the "Set..." method of the builder.

4. It simplifies building the object in multiple steps, if needed, for example:

builder := NewWhatever()
builder.SetUserName("foo")
if shouldUsePassword {
    builder.SetPassword("bar")
}
whatever, err := builder.Build()

5. It simplifies adding multiple values for the same parameter:

whatever, err := NewWhatever().
    SetUserName("foo").
    SetUserName("foo-alias").
    Build()

6. It allows adding new optional parameters without having to change the call sites.

I don't want to bore you with my opinions about this. If you find this unacceptable I will change it.
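To make the pattern concrete, here is a minimal, self-contained sketch of such a builder. The Client type and its parameters are hypothetical illustrations, not the actual factory code; note that Build checks the required parameter only at run time.

```go
package main

import (
	"errors"
	"fmt"
)

// Client is a hypothetical object with one required and one optional parameter.
type Client struct {
	UserName string // required
	Password string // optional
}

// ClientBuilder collects the parameters through fluent Set... methods.
type ClientBuilder struct {
	userName string
	password string
}

// NewClient creates a builder with no parameters set yet.
func NewClient() *ClientBuilder {
	return &ClientBuilder{}
}

// SetUserName sets the user name. This is mandatory.
func (b *ClientBuilder) SetUserName(value string) *ClientBuilder {
	b.userName = value
	return b
}

// SetPassword sets the password. This is optional.
func (b *ClientBuilder) SetPassword(value string) *ClientBuilder {
	b.password = value
	return b
}

// Build validates the required parameters at run time and creates the object.
func (b *ClientBuilder) Build() (*Client, error) {
	if b.userName == "" {
		return nil, errors.New("user name is mandatory")
	}
	return &Client{UserName: b.userName, Password: b.password}, nil
}

func main() {
	client, err := NewClient().
		SetUserName("foo").
		SetPassword("bar").
		Build()
	if err != nil {
		panic(err)
	}
	fmt.Println(client.UserName)

	// A forgotten required parameter still compiles, but Build reports it:
	_, err = NewClient().Build()
	fmt.Println(err)
}
```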
I don't know, maybe it's just me, but I believe that if a parameter is required it should be provided as part of the function call. Otherwise, if someone writes
whatever, err := NewWhatever().Build()
it will pass compilation but fail at run time, and I think it is better to find such errors at compilation.
Though it is my personal opinion.
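A possible middle ground, sketched here with hypothetical names, is to pass the required parameters to the constructor (so forgetting one fails compilation) while keeping the fluent setters for the optional ones:

```go
package main

import "fmt"

// Client is a hypothetical object with one required and one optional parameter.
type Client struct {
	userName string
	password string
}

// ClientBuilder keeps fluent setters only for the optional parameters.
type ClientBuilder struct {
	userName string
	password string
}

// NewClient takes the required parameter directly: omitting it is a
// compilation error instead of a run-time failure in Build.
func NewClient(userName string) *ClientBuilder {
	return &ClientBuilder{userName: userName}
}

// SetPassword sets the optional password.
func (b *ClientBuilder) SetPassword(value string) *ClientBuilder {
	b.password = value
	return b
}

// Build creates the object; no run-time check for userName is needed.
func (b *ClientBuilder) Build() *Client {
	return &Client{userName: b.userName, password: b.password}
}

func main() {
	client := NewClient("foo").SetPassword("bar").Build()
	fmt.Println(client.userName)
}
```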
Just to be sure: I like your proposition, I just don't think it should work that way for required parameters.
I understand your point of view, and still think that the benefits outweigh the drawbacks. As that is not the key point of this pull request, I am changing it to a plain list of parameters. We can have this discussion another time.
internal/spoke_k8s_client/factory.go
Outdated
// object reference. So to find the cluster deployment we can get all the instances inside the namespace of the
// secret and then select the first one that references it.
clusterDeploymentList := &hivev1.ClusterDeploymentList{}
err = f.hubClient.List(ctx, clusterDeploymentList, ctrlclient.InNamespace(kubeconfigSecret.Namespace))
Can't we list with a filter?
Not sure what you mean, can you elaborate? Note that the search criterion here is spec.clusterMetadata.adminKubeconfigSecretRef.Name == ..., and I think searching by that field isn't supported by the API.
I believe there can actually be only one cluster deployment per namespace. Don't we have an owner reference to the cluster deployment in the secret?
I'd say we don't need to rely on that here.
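The selection logic being discussed can be sketched with simplified stand-in types. The real code lists hivev1.ClusterDeployment objects through the controller-runtime client; the struct and helper below are illustrative only:

```go
package main

import "fmt"

// ClusterDeployment is a simplified stand-in for the Hive type: only the
// name and the referenced admin kubeconfig secret are kept.
type ClusterDeployment struct {
	Name                     string
	AdminKubeconfigSecretRef string
}

// findClusterDeploymentForSecret selects the first cluster deployment, from a
// list already scoped to the secret's namespace, that references the given
// kubeconfig secret. It returns nil if there is no such cluster deployment.
func findClusterDeploymentForSecret(items []ClusterDeployment, secretName string) *ClusterDeployment {
	for i := range items {
		if items[i].AdminKubeconfigSecretRef == secretName {
			return &items[i]
		}
	}
	return nil
}

func main() {
	items := []ClusterDeployment{
		{Name: "other", AdminKubeconfigSecretRef: "other-admin-kubeconfig"},
		{Name: "my-cluster", AdminKubeconfigSecretRef: "my-cluster-admin-kubeconfig"},
	}
	if match := findClusterDeploymentForSecret(items, "my-cluster-admin-kubeconfig"); match != nil {
		fmt.Println(match.Name)
	}
}
```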
internal/spoke_k8s_client/factory.go
Outdated
return
}

func (f *spokeK8sClientFactory) CreateFromSecret(ctx context.Context, secret *corev1.Secret) (result SpokeK8sClient, err error) {
I wonder why it is better to return the result this way:
(result SpokeK8sClient, err error)?
I believe 99% of the code doesn't do it this way, and I just wonder why it is better.
It is probably a matter of taste. I like to have the names of the return parameters: it helps understand what to expect. Not very important in this case, as the meaning is very clear. I can change it if you want.
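For reference, the two styles under discussion look like this (toy functions, not the factory code):

```go
package main

import "fmt"

// divideNamed uses named return parameters: the signature itself documents
// what comes back, and a bare "return" returns the current values.
func divideNamed(a, b int) (quotient int, err error) {
	if b == 0 {
		err = fmt.Errorf("division by zero")
		return
	}
	quotient = a / b
	return
}

// divideUnnamed uses the unnamed style that most of the code base follows.
func divideUnnamed(a, b int) (int, error) {
	if b == 0 {
		return 0, fmt.Errorf("division by zero")
	}
	return a / b, nil
}

func main() {
	q, _ := divideNamed(10, 2)
	fmt.Println(q)
	q, _ = divideUnnamed(10, 2)
	fmt.Println(q)
}
```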
I agree that it is a taste issue :) It's just that most of the code has another style, so why have different styles?
Fair enough, I will change it.
Done.
internal/spoke_k8s_client/factory.go
Outdated
if err != nil {
cf.log.WithError(err).Warnf("Getting kuberenetes config for cluster")
return nil, nil, err
func (f *spokeK8sClientFactory) kubeConfigFromSecret(secret *corev1.Secret) (result []byte, err error) {
Can we make it a common function? We have at least two more places that do the same.
Well, at least we have one less place now: I removed similar logic from the spoke client cache in a previous patch. I will try to find where else we are doing this.
I will do this in a different patch.
internal/spoke_k8s_client/factory.go
Outdated
// Try to find the cluster deployment. If we can't, for whatever the reason, explain it in the log and assume
// it isn't a hosted cluster.
clusterDeployment, err := f.findClusterDeploymentForKubeconfigSecret(ctx, kubeconfigSecret)
if err != nil || clusterDeployment == nil {
An error and not having a clusterDeployment seem to be different issues; maybe we should at least split the logging?
OK, will do.
Done.
Force-pushed from 7b9e4dc to cb06b53.
log: cf.log,
// findClusterDeploymentForKubeconfigSecret finds the cluster deployment that corresponds to the given kubeconfig
// secret. It returns nil if there is no such cluster deployment.
func (f *spokeK8sClientFactory) findClusterDeploymentForKubeconfigSecret(ctx context.Context,
Are we ever in a situation where the caller of this factory doesn't already have a reference to the cluster deployment?
Since (based on the naming) we're talking about "spoke" clusters it seems likely that this could be simplified by either the caller supplying the cluster deployment or by this logic living outside this factory (then we would have an option like "useHubServiceNetwork" or something when creating the client).
Yes, here we don't know what the cluster deployment is:
assisted-service/internal/controller/controllers/hypershiftagentserviceconfig_controller.go
Line 342 in 9a1b9ec
spokeClient, err := hr.SpokeClients.Get(kubeconfigSecret)
Okay, thanks.
Side note, though: can we delete the HASC CRD and controller yet?
@gamli75 that effort isn't happening now, right?
I'm not familiar with that effort. Maybe @CrystalChun?
I'm not familiar with it either, maybe @danielerez?
There are several situations where assisted-service needs to connect to the API server of a spoke cluster. To do so it uses the kubeconfig generated during the installation, which usually contains the external URL of the API server, and that means that the cluster where assisted-service runs needs to be configured with a proxy that allows that.

But for HyperShift clusters this can be avoided: assisted-service can instead connect via the service network, using the `kube-apiserver.my-cluster.svc` host name, as the API server runs as a pod in the same cluster. Doing that reduces the number of round trips and the potential proxy configuration issues.

In order to achieve that, this patch changes the spoke client factory so that it checks if the cluster is a HyperShift cluster, and then replaces the API server URL with `https://kube-apiserver.my-cluster.svc:6443`.

Related: https://issues.redhat.com/browse/MGMT-19120

Signed-off-by: Juan Hernandez <juan.hernandez@redhat.com>
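The URL replacement the commit message describes could be sketched like this; serviceNetworkAPIURL is a hypothetical helper written for illustration, not the actual patch:

```go
package main

import "fmt"

// serviceNetworkAPIURL builds the in-cluster URL of a hosted cluster's API
// server, so the client can talk to the kube-apiserver pod over the service
// network instead of going through the external URL (and possibly a proxy).
func serviceNetworkAPIURL(hostedClusterNamespace string) string {
	return fmt.Sprintf("https://kube-apiserver.%s.svc:6443", hostedClusterNamespace)
}

func main() {
	fmt.Println(serviceNetworkAPIURL("my-cluster"))
}
```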
Force-pushed from cb06b53 to 4d0cd06.
@jhernand: The following test failed:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
List all the issues related to this PR:
https://issues.redhat.com/browse/MGMT-19120
What environments does this code impact?
How was this code tested?
Checklist
(docs, README, etc.)
Reviewers Checklist