
dex + istio: make service type configurable #558

Closed

Conversation

@yanniszark (Contributor) commented Oct 22, 2019

Issue:
Related #566

Description of your changes:
This PR exposes the service type of the Istio IngressGateway and Dex services to make them configurable.
It also removes hardcoded nodePort values, since that was necessary for ClusterIP to work.
In addition, Kubernetes can then choose them at runtime, which makes the configs more portable.
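For illustration, the kind of change this implies for one ingressgateway port might look like the following (the 31380 value is the upstream Istio default discussed later in this thread; the exact lines in the PR may differ):

```yaml
# Before: a hardcoded nodePort pins the Service to a specific port.
ports:
- name: http2
  port: 80
  targetPort: 80
  nodePort: 31380

# After: omitting nodePort lets Kubernetes assign one at runtime,
# and the spec is also valid when the Service type is ClusterIP.
ports:
- name: http2
  port: 80
  targetPort: 80
```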

/cc @elikatsis @lluunn @kkasravi @krishnadurai @jlewi @ryandawsonuk

Checklist:

  • Unit tests have been rebuilt:
    1. cd manifests/tests
    2. make generate
    3. make test

This change is Reviewable

@jlewi (Contributor) commented Oct 22, 2019

@yanniszark Who should review this PR?

@ryandawsonuk commented:

/lgtm
/approve

@krishnadurai (Contributor) left a comment:


@yanniszark I recommend that this choice of service type be implemented as an overlay rather than as parameters.
Some KfDef configs and their instructions rely on these port values, and it might be hard to see the full impact of this change this close to the release.

@@ -309,4 +313,5 @@ spec:
name: seldon-core-operator
repos:
- name: manifests
uri: https://github.com/kubeflow/manifests/archive/master.tar.gz
# uri: https://github.com/kubeflow/manifests/archive/master.tar.gz
uri: file:///home/yannis/go/src/github.com/kubeflow/manifests
Reviewer (Contributor) commented on the diff:

nit: Replace with master

@yanniszark (Contributor, Author) replied:

Haha I keep forgetting this. Thanks for the catch 😄

@yanniszark (Contributor, Author) commented:

Some KfDef configs and their instructions rely on these port values, and it might be hard to see the full impact of this change this close to the release.

I thought of that but couldn't find any references to these ports.
The only reference I could find was in the Google IAP docs:

ingress.kubernetes.io/backends:               {"k8s-be-31380--5e1566252944dfdb":"HEALTHY","k8s-be-32133--5e1566252944dfdb":"HEALTHY"}

However, this is runtime information that is automatically discovered.
It would match whatever port was chosen.
/cc @lluunn @jlewi

@krishnadurai are you aware of any config depending on those values?

@jlewi (Contributor) commented Oct 22, 2019

@yanniszark Yes. The GCP IAP configs currently have an assumption that the node-port is 31380.

These changes to ISTIO feel like a risky change to me for 0.7.0. Was the intent to cherry-pick this onto 0.7.0?

@jlewi (Contributor) commented Oct 23, 2019

The PR description doesn't link to an issue, and I'm not seeing any issue in
https://github.com/orgs/kubeflow/projects/22

So my assumption is that this is not release-blocking. Is that correct?

@yanniszark (Contributor, Author) commented Oct 23, 2019

@jlewi yes, this is not release-blocking.

This PR is related to a recent user request; I opened #566 to track it.

Yes. The GCP IAP configs currently have an assumption that the node-port is 31380.

Could you point me to where this assumption is made?
A quick git grep in manifests and kubeflow doesn't reveal anything.

@jlewi (Contributor) commented Oct 23, 2019

@yanniszark I would hope we aren't depending on the node port. On GCP we need to map K8s services to GCP backend services and to do this we need to map services to node ports.
So I thought we might be depending on the node port somewhere. I thought we were doing it here

PROJECT=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/project-id)

but I could be wrong.

Either way, we should be able to fix it without too much trouble. I just don't want to try to do it before 0.7.0.

@yanniszark (Contributor, Author) commented:

@jlewi IAP doesn't directly use the nodePort from what I see.
The IngressGateway Service has a BackendConfig annotation, which connects the port to the right BackendConfig for IAP:

beta.cloud.google.com/backend-config: '{"ports": {"http2":"iap-backendconfig"}}'

As such, there should be no trouble making this change.
The E2E tests are also passing without any trouble.

Ideally, we'd like to have it for v0.7, as it would make the default installation more secure.
What would you suggest to achieve that without disrupting GCP (although I don't believe it will be disrupted)?

@jlewi (Contributor) commented Oct 24, 2019

@yanniszark That's great to hear. Nonetheless, I think for 0.7.0 we want to be extremely risk averse. Changes to ISTIO have a pretty wide blast radius. I pushed back pretty hard on the KFServing folks, who wanted to do even a patch bump for ISTIO in 0.7.0.

Our goal is to finalize 0.7.0 this week, so I think we want to be pretty aggressive in paring down any additional changes.

My suggestion is to target 0.7.1 or 0.7.X. If you wanted to you could start creating a
kfctl_.0.7.1-preview.yaml or kfctl_.0.7.1-unstable.yaml.

The caveat, though, is that we want to do this in a way that makes it very easy to audit that no changes impacting 0.7.0 are going in; i.e., basically don't modify any of the existing istio files.

@krishnadurai is working on getting a 1.3.1 ISTIO install ready, and this will be in a different directory (e.g. istio-1.3.1 vs. istio) so that the two can coexist easily.

So my suggestion would be to target these istio changes for the 1.3.1 istio install. We can then easily cherry-pick those changes onto the 0.7.0 branch and start doing a 0.7.1-unstable.yaml file.

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
@yanniszark yanniszark force-pushed the feature-dex-istio-service-types branch from c55c99a to bb87e13 Compare March 16, 2020 13:21
@k8s-ci-robot k8s-ci-robot removed the lgtm label Mar 16, 2020
@k8s-ci-robot (Contributor) commented:

New changes are detected. LGTM label has been removed.

@yanniszark (Contributor, Author) commented Mar 16, 2020

@krishnadurai @jlewi resurrecting this old PR.
As I mentioned before, it seems that on the GCP side those values aren't necessary; ports are referenced by name.
Hardcoding nodePorts is bad practice, since it makes these Services non-portable.

@krishnadurai on the istio_dex side, the dex plugin should alleviate any issues.
Besides, relying on hardcoded nodePorts is not a good practice.
cc @ryandawsonuk who had reviewed

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ryandawsonuk
To complete the pull request process, please assign yanniszark
You can assign the PR to them by writing /assign @yanniszark in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jlewi (Contributor) left a comment:


Reviewed 7 of 13 files at r1, 2 of 6 files at r2, 25 of 25 files at r3.
Reviewable status: all files reviewed, 8 unresolved discussions (waiting on @elikatsis, @kkasravi, @krishnadurai, @lluunn, and @yanniszark)


dex-auth/dex-crds/base/params.env, line 12 at r3 (raw file):

oidc_redirect_uris=['http://login.example.org:5555/callback/onprem-cluster']
application_secret=pUBnBOY80SnXgjibTYM9ZWNzY2xreNGQok
service_type=NodePort

Why is this a parameter as opposed to an overlay that applies a patch that sets the service type?

I think we are abusing vars a bit in our kustomization patterns.
See for example: kubernetes-sigs/kustomize#2052


dex-auth/dex-crds/base/params.yaml, line 6 at r3 (raw file):

- path: data/config.yaml
  kind: ConfigMap
- path: spec/type

Per my other comment are we abusing vars here?


dex-auth/dex-crds/base/service.yaml, line 6 at r3 (raw file):

  name: dex
spec:
  type: $(service_type)

Why are you making this a var? And setting the default var to NodePort?
If people wanted to override this couldn't they do this by defining an appropriate overlay?


istio/istio-install/base/istio-noauth.yaml, line 14047 at r3 (raw file):

    istio: ingressgateway
spec:
  type: $(service_type)

Why are you making this a var?


istio/istio-install/base/kustomization.yaml, line 45 at r3 (raw file):

  env: params.env
generatorOptions:
  disableNameSuffixHash: true

Why are you disabling the service suffix?


istio/istio-install/base/kustomization.yaml, line 47 at r3 (raw file):

  disableNameSuffixHash: true
vars:
- name: service_type

Per other comments why are we making service type a var as opposed to using an overlay applying a patch when needed?


istio/istio-install/base/params.env, line 1 at r3 (raw file):

service_type=NodePort

Per other comments why are we making this a params/vars?

@jlewi (Contributor) commented Mar 16, 2020

Thanks @yanniszark. LGTM to removing the hard-coded node ports; I agree that's a good thing.

I'm less convinced, though, about using vars to make the service type configurable. I think that is a misuse of the vars pattern.
Per kubernetes-sigs/kustomize#2052, there's a rationale that vars should go away altogether.

It seems like if people wanted to configure the ServiceType they could do this by defining an overlay and patch.
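For example, such an overlay might look like the following (directory and patch file names are illustrative, not necessarily what the repo would use):

```yaml
# istio/istio-install/overlays/cluster-ip/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- service-patch.yaml
```

```yaml
# istio/istio-install/overlays/cluster-ip/service-patch.yaml
# Strategic-merge patch: only the fields being changed are specified;
# everything else is inherited from the base Service.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: ClusterIP
```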

@yanniszark (Contributor, Author) commented:

@jlewi I'm not a huge fan of vars either.
The reasons I am doing it this way in this PR:

  1. I'm following the current line of making kustomizations in kubeflow.
  2. Vars allow the user to easily customize their configuration in a central place, because of the kfdef tooling we have built around them.

Especially for (2), there is currently no tooling for customizing using patches.
How do we propose a user makes changes to the kustomizations?
Always fork the manifests repo?
So I think this is an argument bigger than this PR.

If we have a new recommended way of doing things, then I'm all for doing it that way.
Otherwise, it becomes a question of whether we apply this PR the current way or wait until the new recommended way is settled.

By the way, I know you're working on this, and we'd love to follow an approach closer to kustomize principles.
What do you think we should do for this PR?

@jlewi (Contributor) commented Mar 16, 2020

@yanniszark KFDef supports overlays. So in a KFDef you can refer to the overlay that you need.

It's already an overlay:
istio/istio-install/base

So you can just have other overlays that add a patch to change it to ClusterIP:
istio/istio-install/cluster-ip

You then have two options

  1. In KFDef you could just refer to that overlay in the repoRef path
  2. You could use the overlays feature

I'm following the current line of making kustomizations in kubeflow.

I don't think vars were intended for this type of customization; the intent was always to use overlays, which is why KFDef supports combining overlays.

I think the primary use case for vars that wasn't covered by other patterns was setting things like clusterDomain in virtual services

host: jupyter-web-app-service.$(namespace).svc.$(clusterDomain)

This isn't well handled by overlays because we have N services that all need the same substitution. Defining N overlays would have been very cumbersome, so we fell back on using vars as a way to define them centrally.
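Concretely, the vars pattern being described wires a value from params.env through a generated ConfigMap so that $(clusterDomain) can be substituted in many places at once. A sketch (ConfigMap name and file names are illustrative):

```yaml
# kustomization.yaml (sketch of the vars pattern)
configMapGenerator:
- name: parameters
  env: params.env          # contains e.g. clusterDomain=cluster.local
vars:
- name: clusterDomain
  objref:                  # where the value comes from
    kind: ConfigMap
    name: parameters
    apiVersion: v1
  fieldref:
    fieldPath: data.clusterDomain
configurations:
- params.yaml              # declares which fields $(clusterDomain) may appear in
```

Every resource that references `$(clusterDomain)` then gets the same substitution, which is what makes one central definition attractive over N per-service overlays.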

The use of vars is a major blocker (see e.g. #1007).

So I would prefer not to create more instances of vars that will need to be removed later if there are alternate solutions.

@krishnadurai (Contributor) left a comment:


@yanniszark we could get rid of the hard-coded NodePorts, which would resolve #587, where there is a clash of NodePorts.

With regard to configuring services as LoadBalancer or NodePort types, let's use overlays, per the new standards @jlewi has explained that we are looking to implement.

@yanniszark (Contributor, Author) commented:

@krishnadurai @jlewi thanks for your input!
It's true that right now the user can specify multiple overlays because of kfctl's magic.

However, the multiple-overlays method doesn't work in vanilla kustomize for combining patches.
For example, let's say we have an overlay for the service type (ClusterIP) and an overlay for the service port, both importing the same base. You cannot combine these overlays in vanilla kustomize.
This is why we have opened this issue: kubernetes-sigs/kustomize#1251 (comment)

Another question is, how will the user select the ServiceType?
With vars, this is straightforward.
With overlays, should I make 3 different overlays (ClusterIP, NodePort, LoadBalancer)? That doesn't seem very scalable.

@jlewi I know you've been looking into this and I'm super excited about that, it would be great if you could share your thoughts, e.g., on today's community meeting.

@jlewi (Contributor) commented Mar 17, 2020

@yanniszark I have been working on a doc that I will share later this week.

kustomize does support combining patches. That's actually what kfctl is doing. Look at the overlays we are combining, e.g. "istio" and "application" in katib:
https://github.com/kubeflow/manifests/blob/master/katib/katib-controller/overlays/istio/katib-ui-virtual-service.yaml

This is a patch; i.e., if you look at the kustomization.yaml file that kfctl generates, it lists the virtual-service.yaml as a patch:
https://gist.github.com/jlewi/e7787ed92b58a35285d9d1c3a84e63e5

If it was an overlay the kustomization.yaml would refer to the directory containing the kustomization.yaml of the overlay in the resources section e.g.

resources:
  - overlays/istio

So the natural way in kustomize to do this would be

  • Predefine patches (e.g. one for ClusterIP, NodePort, LoadBalancer)
  • Combine those patches in various overlays.

e.g. suppose we had different overlays for GCP and DEX but both wanted to reuse the ClusterIP patch. Then we could just define those .yaml files for the patches once and refer to them in the kustomization.yaml as patches.

That should be compatible with what kfctl does today.

Define multiple overlays for ClusterIP, NodePort, LoadBalancer.

Then in KFDef a user would just list the overlays they want in the overlays section of the application. kfctl would then generate a kustomization file that lists the YAML files in patchesStrategicMerge.

@yanniszark (Contributor, Author) commented:

@yanniszark I have been working on a doc that I will share later this week.

That would be amazing!

kustomize does support combining patches. That's actually what kfctl is doing. If you look at the overlays we are combining e.g. "istio" and "application" in katib

kustomize supports combining patches in a single base.
In your example (https://gist.github.com/jlewi/e7787ed92b58a35285d9d1c3a84e63e5), the kustomization file is constructed by the kfctl "magic", by analyzing the kustomization tree and listing all patches on the same base.
I thought we wanted to move away from this magic kustomization generation, no?

If you defined a kustomization which listed two overlays with patches as resources and those overlays had the same base, it wouldn't work. This is the composition pattern that kustomize doesn't support (at least for the patches case).
For example, this:

resources:
- overlays/clusterip
- overlays/serviceport

would not work if they import the same base (assuming clusterip and serviceport overlays define patches on the common base).

@jlewi (Contributor) commented Mar 17, 2020

@yanniszark yes we are moving away from kfctl kustomize generation magic. My point is that kustomize supports the features we want so there's no reason to rely on kfctl to generate the kustomization.yaml files. We could instead just check them in.

The example you give

resources:
- overlays/clusterip
- overlays/serviceport

won't work, because we end up defining the same resources twice. But the following would work:

patchesStrategicMerge:
- overlays/clusterip/service.yaml
- overlays/serviceport/service.yaml

This is in fact what kfctl is doing. It has some complex logic to figure out whether an "overlay" is really a "patch" and list it in the patches section, or whether it's a "base" defining new resources (as in the application/istio overlays) and list it in bases or overlays.

So my proposal for making the service type configurable (#566):

  • Define three overlays for ClusterIP, NodePort, LoadBalancer
    • When not using kfctl, we will treat these as patches, not bases, to be compatible with builtin kustomize features
    • With kfctl, we will write the kustomization.yaml files however we need to make the magic work
  • Tell the user to list the appropriate overlay in the overlays section
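Under this proposal, a user might select the service type in their KFDef roughly like this (a sketch against the KfDef application schema; the overlay name is illustrative):

```yaml
applications:
- name: istio-install
  kustomizeConfig:
    repoRef:
      name: manifests
      path: istio/istio-install
    overlays:
    - cluster-ip            # picks up the ClusterIP patch overlay
```

kfctl would then fold that overlay's service patch into the generated kustomization.yaml's patchesStrategicMerge section.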

stale bot commented Jun 15, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in one week if no further activity occurs. Thank you for your contributions.

stale bot commented Jun 23, 2020

This issue has been closed due to inactivity.

@stale stale bot closed this Jun 23, 2020