
Installs kubeflow-gateway in ISTIO namespace #1211

Closed
shawnzhu wants to merge 2 commits

Conversation

shawnzhu
Member

@shawnzhu shawnzhu commented May 29, 2020

Which issue is resolved by this Pull Request:
Related to #1169

Description of your changes:
Installs kubeflow-gateway, grafana-vs, and the default ClusterRbacConfig into the istio-system namespace instead of the kubeflow namespace.

According to the article RBAC - istio v1.3:

The ClusterRbacConfig Custom Resource is a singleton where only one ClusterRbacConfig should be created globally in the mesh, and its namespace should be the same as the other Istio components, which is usually istio-system.
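
For reference, a minimal sketch of what that singleton looks like (the mode and inclusion values below are illustrative, not taken from this PR):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default              # singleton: the name must be "default"
  namespace: istio-system    # same namespace as the other Istio components
spec:
  mode: "ON_WITH_INCLUSION"
  inclusion:
    namespaces: ["kubeflow"] # illustrative: enable RBAC only for the kubeflow namespace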

Checklist:

  • Unit tests have been rebuilt:
    1. cd manifests/tests
    2. make generate-changed-only
    3. make test

@kubeflow-bot
Contributor

This change is Reviewable

@k8s-ci-robot
Contributor

Hi @shawnzhu. Thanks for your PR.

I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Bobgy
Contributor

Bobgy commented May 29, 2020

/assign @jlewi

@jlewi
Contributor

jlewi commented May 29, 2020

Thanks for doing this!
/ok-to-test

This looks good to me.

@yanniszark @krishnadurai @animeshsingh Any concerns/suggestions on how we should merge this to avoid wide disruptions?

My inclination is to just merge this, wait for it to be picked up by downstream systems/tests, surface any bugs and then rollback or forward fix as appropriate.

On GCP for example once its merged it will be picked up by the auto-deployments:
https://kf-ci-v1.endpoints.kubeflow-ci.cloud.goog/auto_deploy/

That could potentially surface integration issues not captured by the presubmits. We can then rollback or forward fix depending on which is the most appropriate.

Contributor

@krishnadurai krishnadurai left a comment

@shawnzhu thanks for taking on this effort.

@jlewi in a bid to push this forward, here are the places I know of that this change would affect:

  1. The Notebook controller creates a VirtualService referencing the gateway. This change is relatively harmless since the gateway is read from an ENV var, though I suggest that we change the default value here (a sketch of overriding it follows this list):

https://github.com/kubeflow/kubeflow/blob/9e8d095fd138f2ed6e37cd459d173dae9d895b51/components/notebook-controller/controllers/notebook_controller.go#L396-L398

  2. KFServing has a couple of configurations with gateway references:

https://github.com/kubeflow/manifests/blob/master/kfserving/kfserving-install/base/config-map.yaml#L25

and

https://github.com/kubeflow/manifests/blob/master/tests/tests/legacy_kustomizations/knative-install/test_data/expected/~g_v1_configmap_config-istio.yaml#L3
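
As referenced in item 1, the controller's default could also be overridden at deploy time instead of changing the code - a sketch as a kustomize strategic-merge patch (the ISTIO_GATEWAY variable name comes from the linked controller code; the deployment and container names are assumed):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: notebook-controller-deployment       # name assumed for illustration
  namespace: kubeflow
spec:
  template:
    spec:
      containers:
      - name: manager                            # container name assumed
        env:
        - name: ISTIO_GATEWAY                    # env var read by the linked controller code
          value: istio-system/kubeflow-gateway   # overrides the compiled-in default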

@shawnzhu
Member Author

shawnzhu commented May 29, 2020

@krishnadurai Thanks for the pointers.

I can take care of 2) while spending more time on 3) to figure out why it doesn't fail the tests.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign jlewi
You can assign the PR to them by writing /assign @jlewi in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@shawnzhu
Member Author

@krishnadurai per 2) from #1211 (review), I made 0982840 and 420c408

@jlewi
Contributor

jlewi commented Jun 1, 2020

@krishnadurai @shawnzhu @nrchakradhar Should we be creating our own gateway?

IIUC, the ISTIO gateway is what configures the set of envoy pods that act as an ingress gateway to the cluster. So there should be a 1:1 ratio of ingress gateways to gateway pods.

I believe the ISTIO config includes a set of pods comprising the ingress. Does it also include a gateway?

@animeshsingh
Contributor

Thanks @shawnzhu for this PR. The recent kfserving and knative kustomize v3 manifests PR was done by your fellow IBMer @adrian555 - it would be good for you folks to sync up on this.

The biggest consequence of any gateway-related PR falls on KFServing, given that Knative/Istio are part of its core. Since @yuzisun is currently the most impacted by any of these changes, he would be the best person to triage any Istio-related PRs from the KFServing side.

@jlewi
Contributor

jlewi commented Jun 2, 2020

It looks to me like when you install ISTIO, it will include a Gateway resource describing the load balancer (envoy proxies) operating at the edge of the mesh.

So I'm not sure we should be creating a "kubeflow-gateway" resource. Here's what I think we are aiming to do on GCP:

  1. On GCP we will install the recommended version/configs of istio

    • It turns out this is roughly the equivalent of running istioctl manifest generate to create the manifests
  2. We will customize those configs as necessary

    • The configs include pods, services, and gateway resources corresponding to the ingress gateway

    • This creates a Gateway resource named ingressgateway in istio-system namespace

      • So we won't create a new ingress gateway; we will just use it
  3. We will need to make the gateway programmable in our virtual services so we can use gateway ingressgateway instead of kubeflow-gateway

    • My suggestion would be to use a kpt setter for this.
    • This will make it easy to preserve the existing behavior while making it easy for folks to update all the virtual services

@shawnzhu my suggestion would be to use this PR to make the gateway in our virtual services settable with a kpt setter. We can then keep the default value of "kubeflow-gateway" so that we don't break anything.
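
For illustration, a sketch of what one of the affected virtual services would look like, with the gateway value left at its default so a setter can rewrite it later (the resource shown is only an example):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: centraldashboard        # example; every Kubeflow VirtualService would get the same treatment
  namespace: kubeflow
spec:
  gateways:
  - kubeflow/kubeflow-gateway   # default preserved; a kpt setter would rewrite this value
  hosts:
  - '*'
  http:
  - route:
    - destination:
        host: centraldashboard.kubeflow.svc.cluster.local
        port:
          number: 80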

/cc @yuzisun

@k8s-ci-robot k8s-ci-robot requested a review from yuzisun June 2, 2020 01:15
@krishnadurai
Contributor

krishnadurai commented Jun 2, 2020

It looks to me like when you install ISTIO, it will include a Gateway resource describing the load balancer (envoy proxies) operating at the edge of the mesh.

So I'm not sure we should be creating a "kubeflow-gateway" resource. Here's what I think we are aiming to do on GCP:

And

@krishnadurai @shawnzhu @nrchakradhar Should we be creating our own gateway?

IIUC, the ISTIO gateway is what configures the set of envoy pods that act as an ingress gateway to the cluster. So there should be a 1:1 ratio of ingress gateways to gateway pods.

I believe the ISTIO config includes a set of pods comprising the ingress. Does it also include a gateway?

Istio comes with a default IngressGateway (istio-ingressgateway) and pods which handle the requests - acting like a load balancer - though it does not come with a default Gateway resource in the default istio charts. Perhaps it does for GCP's install.
Gateway resources act as the configuration for IngressGateways.
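
In other words, a Gateway CR just selects the ingress pods by label - a minimal sketch:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubeflow-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # matches the labels on the istio-ingressgateway pods
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP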

If we need it to be an additional install for non-GCP setups, we could add the resource separately in KfDef specs.

How does this sound?
@jlewi

@shawnzhu
Member Author

shawnzhu commented Jun 3, 2020

Istio comes with a default IngressGateway (istio-ingressgateway) and pods which handle the requests - acting like a load balancer - though it does not come with a default Gateway resource in the default istio charts.

It took me a while to figure this out, and this is what I found:

Problem 1

  1. when using istioctl manifest generate, it does create a Gateway resource named istio-ingressgateway in the istio-system namespace, along with istio v1.4
  2. when installing istio by enabling the k8s cluster add-on istio from IBM Cloud, it does create a Gateway resource named ingressgateway in the istio-system namespace (along with istio v1.4), which matches GCP's.

However, installing istio v1.3.1 or v1.1.6 from this repo doesn't create any Gateway resource, which is inconsistent with istioctl manifest generate. The Gateway resource kubeflow-gateway is instead installed along with the clusterRbacConfig.

Original problem

According to #1169, the two risks (IMHO) are:

  1. when using kubeflow-gateway to route HTTPS traffic, the cert has to be in the istio-system namespace along with the deployment, which might work but violates namespace isolation.
  2. based on what's described in Selector of Gateway custom resource is not bound by namespace istio/istio#19970, the flag PILOT_SCOPE_GATEWAY_TO_NAMESPACE is NOT set to true by default. If it were set to true by default in istio v1.8, it would break a kubeflow installed with istio v1.8 (haven't seen this yet).

Proposal

  1. refactor this PR to make the gateway in our virtual services settable with a kpt setter while keeping the Gateway kubeflow-gateway installed (suggestion from @jlewi).
  2. In order to fix kubeflow istio gateway should be in istio-namespace #1169 for kubeflow deployments on GCP/IBM-cloud/minikube or any other provider, it would be better to be consistent with istioctl manifest generate, which means adding an additional install for the Gateway ingressgateway (idea from @krishnadurai) - a new PR would be ideal, so that different setups can choose between the Gateway ingressgateway and kubeflow-gateway.

@jlewi
Contributor

jlewi commented Jun 3, 2020

@shawnzhu Thanks for the deep research!

Your proposal seems good to me.

It might also be helpful in follow-on PRs to think about how we organize the ISTIO manifests. I think part of the confusion is that the directory
https://github.com/kubeflow/manifests/tree/master/istio
is mixing 3 types of resources

  1. Resources to install ISTIO
  2. Platform specific resources (e.g. GCP iap ingress)
  3. Reusable KF istio resources

It might be better to separate these and put platform specific resources under platform specific directories e.g. move all iap resources to gcp directory. Similarly, it looks like resources for configuring dex with ISTIO should maybe move to their own directories.

Not to further confuse things, but an alternative to using a kpt setter to make the gateway in all virtual services configurable would be to use a kpt/kustomize fn to transform all of the configs.

A kpt fn basically takes YAML in and emits YAML out. It would look for all the virtual services and then apply some logic to change them.

The main advantages of this are

  1. It doesn't require modifying the source manifests
  2. It is more flexible than what can be done using kustomize

On GCP we will likely end up defining a kpt/kustomize fn in order to apply the changes we need to the manifests generated by istioctl. Conceptually the process looks like the following

istioctl manifest generate | kpt fn run

kpt fns can be configured with a CR-like object. So we will define an appropriate CR in order to apply all the transformations we need on top of the istioctl-generated manifests, e.g.:
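
A hypothetical fn config, purely for illustration - the group, kind, and image here are made up:

apiVersion: kpt.example.com/v1alpha1                    # hypothetical group/version
kind: IstioTransformer                                  # hypothetical kind
metadata:
  name: kubeflow-istio-transforms
  annotations:
    config.kubernetes.io/function: |
      container:
        image: gcr.io/example/istio-transforms:latest   # hypothetical fn image
spec:
  gateway: istio-system/ingressgateway                  # value the fn would substitute into VirtualServices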

@jlewi
Contributor

jlewi commented Jun 3, 2020

kubeflow/kfctl#345 is an example of a kpt fn to transform all the images.

@yanniszark
Contributor

The kpt setter sounds like an interesting proposal. How would we handle VirtualServices that are created at runtime in that case? (e.g., VirtualServices created by the Notebook Controller).

In addition, I would like to add an argument for using our own Gateway CR (kubeflow-gateway). By Gateway CR, I mean only the Istio Gateway object, not the deployment actually handling the traffic (of which there is only one).
The Gateway CR lets you group VirtualServices and apply some configuration once to the Gateway, instead of having to apply it to each VirtualService.

As an example use-case, let's say a user wants to serve kubeflow under host kubeflow.example.org.
With a custom Gateway CR kubeflow-gateway: just change the Gateway to only accept traffic for host kubeflow.example.org.
With the shared default Gateway CR: change every one of Kubeflow's VirtualServices (manifests + those created at runtime) to only accept traffic for host kubeflow.example.org.
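
A sketch of the first option - one edit on the Gateway covers everything attached to it:

# in the kubeflow-gateway Gateway resource:
spec:
  servers:
  - hosts:
    - kubeflow.example.org   # was '*'; all attached VirtualServices now serve only this host
    port:
      name: http
      number: 80
      protocol: HTTP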

@jlewi
Contributor

jlewi commented Jun 4, 2020

The kpt setter sounds like an interesting proposal. How would we handle VirtualServices that are created at runtime in that case? (e.g., VirtualServices created by the Notebook Controller).

Any application dynamically creating virtual services would need to be configurable (e.g. command line arguments) to allow the VS to be customized.

In addition, I would like to add an argument for using our own Gateway CR (kubeflow-gateway)

My understanding is that a Gateway CR is controlling the config applied to the envoy pods providing the ingress gateway.

I don't think you can have two Gateways mapping to the same set of pods. I think this would likely produce problems because you have two different controllers battling for the same set of pods.

My suspicion is that the "kubeflow-gateway" currently has no effect because its selector doesn't match any pods, since the pods are in a different namespace.
https://istio.io/docs/reference/config/networking/gateway/#Gateway

@shawnzhu
Member Author

shawnzhu commented Jun 7, 2020

This is my fun experience with the kpt setter:

Following the official guide Multi-user, auth-enabled Kubeflow with kfctl_istio_dex, using kfctl_istio_dex.yaml:

$ kfctl build -f kfctl_istio_dex.v1.0.2.yaml
$ kpt cfg grep "kind=VirtualService" kustomize/ | kpt cfg count
VirtualService: 14
$ echo "kind: Kptfile" > kustomize/Kptfile
$ kpt cfg create-setter kustomize/ istio-gateway-auth "kubeflow/kubeflow-gateway" --type array --field "spec.gateways" --set-by "@shawnzhu"
# manually apply workaround for bug GoogleContainerTools/kpt#698
$ kpt cfg list-setters kustomize/
         NAME                         VALUE                   SET BY              DESCRIPTION             COUNT  
  istio-gateway-auth   [kubeflow/kubeflow-gateway]   @shawnzhu                                                2      

Now, replacing kubeflow/kubeflow-gateway with istio-system/istio-ingressgateway:

$ kpt cfg set kustomize/ istio-gateway-auth "istio-system/istio-ingressgateway" --description "switch to existing istio gateway" --set-by "@shawnzhu"
$ kpt cfg grep "kind=VirtualService" kustomize/ \
   | kpt cfg grep "metadata.name=dex" \
   | kpt cfg tree --field "spec.gateways"
.
├── dex/overlays/github
│   └── [virtual-service.yaml]  VirtualService dex
│       └── spec.gateways: ["istio-system/istio-ingressgateway"]
└── dex/overlays/istio
    └── [virtual-service.yaml]  VirtualService dex
        └── spec.gateways: ["istio-system/istio-ingressgateway"]

I repeated the above steps for the other 10 virtual services pointing to kubeflow-gateway, and it worked well after manually working around kptdev/kpt#698.

So 1) of the proposal in #1211 (comment) can be achieved via a kpt setter alone, without code changes to this repo. (I also tried the kpt fn approach; it's really powerful but requires coding a function - this could be a kfctl feature given a well-defined use case.)

Any application dynamically creating virtual services would need to be configurable (e.g. command line arguments) to allow the VS to be customized.

That would be a separate enhancement issue to address in order to fix #1169.

@jlewi
Contributor

jlewi commented Jun 8, 2020

@shawnzhu That's great. So do you want to check in the results of create-setter so that users don't have to create the setters themselves?

@shawnzhu
Member Author

shawnzhu commented Jun 8, 2020

So do you want to check in the results of create-setter so that users don't have to create the setters themselves?

Yes for those inline field references like # {"$ref": "#/definitions/..."}. But I'm curious how to generate a Kptfile with pre-defined setters under the kustomize directory after running kfctl build -f ... (assuming this directory is used as a single package instead of creating several Kptfiles under each component).
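
For concreteness, this is the shape of an inline field reference as create-setter writes it (setter name assumed):

# in a VirtualService, after create-setter has run:
spec:
  gateways: ["kubeflow/kubeflow-gateway"] # {"$ref": "#/definitions/io.k8s.cli.setters.kubeflow-gateway"}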

@jlewi
Contributor

jlewi commented Jun 8, 2020

But I'm curious how to generate a Kptfile with pre-defined setters under the kustomize directory after running kfctl build -f ... (assuming this directory is used as a single package instead of creating several Kptfiles under each component).

I don't think we have a solution yet for integration. One solution would be to use stacks.
https://github.com/kubeflow/manifests/tree/master/stacks

The stacks are meant to be top-level kustomize packages that combine together several different kustomize packages. So one option would be to put the KptFile there.
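
A sketch of what such a top-level Kptfile might contain, using the old-style openAPI setter schema (names assumed; an array setter like this is exactly the case hit by the kpt#698 bug mentioned above):

apiVersion: kpt.dev/v1alpha1
kind: Kptfile
metadata:
  name: kubeflow-stack             # name assumed
openAPI:
  definitions:
    io.k8s.cli.setters.kubeflow-gateway:
      x-k8s-cli:
        setter:
          name: kubeflow-gateway
          listValues:
          - kubeflow/kubeflow-gateway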

@yanniszark
Contributor

@jlewi just a few thoughts on setter vs function.
In my mind:

  • Setters are for things that are more package-specific. For example, a replica setter for a specific Deployment inside the package.
  • Functions are for things that are more general (and also provide generation, of course), for example changing the namespace in all resources, or changing the Gateway in all VirtualServices.

The big differentiator for the manifest developer, as I understand it, is that functions are reusable, while setters are not.
For example, if another kustomization/overlay is added with an additional VirtualService, yet another setter must be created for that package. The end user has to track all the different setters from all packages and set them correctly. In contrast, a function works the same way everywhere, doesn't depend on the manifest developer, and is reusable across packages.

This is why I think it's a good fit for a function. I'm not saying we should throw away what we have, but what do you think is the best way eventually? Do you agree with this reasoning?

@jlewi
Contributor

jlewi commented Jun 9, 2020

@yanniszark I think what you highlight is accurate. Some additional things to consider

  • kpt functions run inside docker images, which creates some friction if you want to run them as part of a CI/CD system running in pods
    • There are ways to work around this, but it's friction

@jlewi
Contributor

jlewi commented Jun 15, 2020

@shawnzhu ping?

@shawnzhu
Member Author

shawnzhu commented Jun 15, 2020

@jlewi I've discussed the current situation with @Tomcli and @adrian555, especially the possible breaking change to kfserving, and here's what I learned:

  1. kfserving will function with the code change in this PR, where its ingressGateway has been changed to kubeflow/kubeflow-gateway - see
    "ingressGateway" : "kubeflow-gateway.kubeflow",
    but it would need a separate kpt setter besides the one from Installs kubeflow-gateway in ISTIO namespace #1211 (comment). No idea if a kpt setter works with a multi-line config like ☝️ (see the sketch of that config after this list)
  2. I can try to deliver a Kptfile via the path of https://github.com/kubeflow/manifests/tree/master/stacks, but the whole solution via kpt depends on a future fix of "create-setter ignores array type in Kptfile" kptdev/kpt#698, or it will require manually updating the field references anyway.
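
For context, the shape of that KFServing config (a sketch - resource name and namespace assumed from the linked config-map): the gateway reference lives inside a JSON string value, which is what makes a plain kpt setter awkward here:

apiVersion: v1
kind: ConfigMap
metadata:
  name: inferenceservice-config   # name assumed from the linked config-map
  namespace: kfserving-system     # namespace assumed
data:
  ingress: |-
    {
      "ingressGateway": "kubeflow-gateway.kubeflow",
      "ingressService": "istio-ingressgateway.istio-system.svc.cluster.local"
    }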

So due to the complexity of delivering the proposed changes in this PR via kpt, I'm wondering if it's acceptable to the community to merge this PR as a breaking change? That would make it much simpler to enable HTTPS, by just editing kubeflow-gateway in the istio-system namespace without duplicating certs among namespaces by copying k8s secrets.

The kpt setter path is still a nice thing to have for future use cases like switching to the istio-ingressgateway from a managed istio or istioctl install.

@jlewi
Contributor

jlewi commented Jun 15, 2020

I'm wondering if it's acceptable to the community to merge this PR as a breaking change?

It looks like this PR is moving the "kubeflow-gateway" into the ISTIO namespace per the PR comment.

Per my comments above, I don't think we want to create a "kubeflow-gateway" at all; we should just be using the gateway that ISTIO creates for its edge load balancers. So why bother moving "kubeflow-gateway" to the "istio-namespace"? What's the point of upgrading applications to use "kubeflow-gateway" in "istio-namespace"? Why not just update applications to use the default gateway in the "istio-namespace"?

So instead of moving "kubeflow-gateway" to the "ISTIO" namespace, how about using this PR to enable platforms to start using the ISTIO gateway? Specifically:

  1. Revert changes to kubeflow-gateway
  2. Add setters where possible (e.g. virtual services to set the gateway)
    • So it won't do KFServing

So then platforms that want to use the ISTIO gateway, as opposed to the incorrect "kubeflow-gateway", can do so and take advantage of the kpt setters to set things correctly.

We can then figure out an appropriate solution for KFServing; e.g. create an overlay with the appropriate config that can be used when people want to use the ISTIO gateway.

@shawnzhu
Member Author

shawnzhu commented Jun 16, 2020

So why bother moving "kubeflow-gateway" to the "istio-namespace"?

Take my experience from last week securing kubeflow with HTTPS: it required copying the cert and keys from another namespace into the kubeflow namespace, where kubeflow-gateway is. It would make sense to keep the gateway in istio-system when istio is the ingress provider; this is based on the assumption that we keep a specific gateway for kubeflow.

Why not just update applications to use the default gateway in the "istio-namespace"

This is my proposal 2, which I planned to do in another PR. But if there's no need to keep a kubeflow-gateway for kubeflow (I guess this was the original purpose of #1211), I can just go ahead and use ONE default gateway in the "istio-namespace" in this PR.

To be specific, it will include:

  1. rename kubeflow-gateway to istio-ingressgateway (consistent with the result of istioctl manifest generate) and keep it in the "istio-namespace"
  2. update existing virtualservices and kfserving to point to this default gateway
  3. add kpt setters (field references and a Kptfile) for users who need to update the gateway name.

We can then figure out an appropriate solution for KFServing; e.g. create an overlay with the appropriate config that can be used when people want to use the ISTIO gateway.

I can work with @adrian555 to see if it's possible to figure out a kpt setter friendly configmap for kfserving.

@shawnzhu shawnzhu force-pushed the issue-1169 branch 2 times, most recently from 3309e84 to efc09c0 on June 16, 2020 01:24
@shawnzhu
Member Author

commented in #1239 (comment) about failed test cases

@jlewi
Contributor

jlewi commented Jun 16, 2020

Thanks for the explanation @shawnzhu

Regarding your plan

  1. rename kubeflow-gateway to istio-ingressgateway (consistent with the result of istioctl manifest generate) and keep it in the "istio-namespace"
  2. update existing virtualservices and kfserving to point to this default gateway
  3. add kpt setters (field references and a Kptfile) for users who need to update the gateway name.

Can we split this into two PRs? Do 2 & 3 in one PR (this one) and do 1 in a separate PR.

Here's why

  • Adding kpt setters to our existing virtual services should be a non-breaking change
  • Doing this is a prerequisite to making it easier for platform OWNERs to adapt the ISTIO resources to however they configure ISTIO

rename kubeflow-gateway to istio-ingressgateway (consistent with the result of istioctl manifest generate) and keep it in the "istio-namespace"

I think ISTIO configuration is probably platform specific. So should we leave it up to the platform owners to properly configure ISTIO, and then use the kpt setters to point at whatever gateway they have configured?

@shawnzhu
Member Author

shawnzhu commented Jun 18, 2020

I've added kpt setters with field references and a Kptfile under stacks/generic for a single setter kubeflow-gateway; here's where I am:

  • When using kfctl build -f {kfdef_config.yaml}, it requires copying the Kptfile into the kustomize folder; then it can switch the gateway like:
$ kpt cfg list-setters kustomize/
        NAME                    VALUE              SET BY   DESCRIPTION   COUNT  
  kubeflow-gateway   [kubeflow/kubeflow-gateway]                          12
$ kpt cfg set kustomize/ kubeflow-gateway "istio-system/istio-ingressgateway" --description "switch to existing istio gateway"
$ kpt cfg list-setters kustomize/
        NAME                    VALUE              SET BY   DESCRIPTION   COUNT  
  kubeflow-gateway   [istio-system/istio-ingressgateway]     switch to existing istio gateway         12
  • When using kustomize under the generic stack with a Kptfile, it doesn't work, since all field references are gone from the generated YAML:
$ cd stacks/generic
$ kustomize build . --load_restrictor LoadRestrictionsNone -o generic.yaml
$ kpt cfg list-setters .
        NAME                    VALUE              SET BY   DESCRIPTION   COUNT  
  kubeflow-gateway   [kubeflow/kubeflow-gateway]                          0

Comments?

@jlewi
Contributor

jlewi commented Jun 18, 2020

@shawnzhu Can we avoid using a Kptfile?

What about using the old-style setters as a way of avoiding kpt files?

metadata:
  name: name-vm # {"type":"string","x-kustomize":{"setBy":"kpt","partialSetters":[{"name":"name","value":"name"}]}}

Should we reconsider just writing a kpt function like
https://github.com/kubeflow/kfctl/blob/master/kustomize-fns/image-prefix/function.go

Since it's just go code, we could make it invokable from kfctl, e.g.

kfctl apply -f istio_transforms.yaml

WDYT?

kubeflow/kfctl#332 is tracking the fact that the way kfctl works doesn't preserve comments. I don't think there's any easy fix.

@shawnzhu
Member Author

shawnzhu commented Jun 19, 2020

What about using the old-style setters as a way of avoiding kpt files?

I've tried the old-style setters (actually used by kustomize cfg / kubectl-krm cfg), but both kpt cfg list-setters and kubectl-krm cfg list-setters require an OpenAPI file (either a Kptfile or a Krmfile) - I guess because they share the same code base:

$ kubectl-krm cfg list-setters kustomize/
Error: open kustomize/Krmfile: no such file or directory
Usage:
  kubectl-krm cfg list-setters DIR [NAME] [flags]
...
$ kpt cfg list-setters kustomize/
Error: open kustomize/Kptfile: no such file or directory

So for the kpt setter path, documentation (as an optional practice) is the better option (basically: create the OpenAPI file, create the setters, and set the values).

Should we reconsider just writing a kpt function

It's definitely more attractive/flexible compared to the other options; I will educate myself first.

@animeshsingh
Contributor

Can we hold this and bring it back after the KF 1.1 RC is out? Something like this would impact quite a few things, specifically KFServing, and if it's not necessary, best to create a small design doc, bring it to a community meeting, and then move forward. Definitely not recommended before 1.1.

@jlewi
Contributor

jlewi commented Jun 19, 2020

@animeshsingh I think you make a good point.

@shawnzhu
Member Author

Can we hold this and bring it back after the KF 1.1 RC is out?

Yes.

and if it's not necessary, best to create a small design doc, bring it to a community meeting, and then move forward.

I will try!

@stale

stale bot commented Sep 18, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in one week if no further activity occurs. Thank you for your contributions.

@shawnzhu shawnzhu closed this Sep 18, 2020