Determine why we can't have 2 third party resources #17
Comments
Also, this fix looks to be scheduled for inclusion in v1.4, although it's labelled cherry-pick. Not sure what that specifically means.
Actually, I take my "Not sure what that specifically means" comment back. According to this comment, it looks like a cherry pick has at least been prepared for the 1.3 tree.
I've built k8s from the source at the HEAD of master today and started a cluster from it. Using that build on both the server side and the client side (kubernetes/kubernetes#24392 is a client issue), the issue described here is resolved.

That said, there is still an issue where deleting a 3PR kind and re-adding it in a different API group will expose the fact that the original wasn't really fully deleted, which requires you to further qualify your queries. e.g.:

So, as long as everything in master makes it into k8s 1.4, I think that gets us past the problem described in the OP, provided we're willing to specify that Steward requires k8s 1.4+. In the meantime, @arschles's plan to use ConfigMaps instead will work great. Having the handle we now do on this, I see no reason not to close this issue.
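The "e.g." above was lost from this copy of the thread. The kind of query qualification meant is along these lines, using a hypothetical group-qualified resource name (illustrative only, not the actual commands from the original comment):

```sh
# Ambiguous after re-adding the kind under a different API group:
kubectl get serviceplanclaims

# Qualifying with the full API group disambiguates which kind you mean:
kubectl get serviceplanclaims.steward.deis.io
```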
Good news! Steward requiring 1.4+ is fine by me from a timeline perspective. What can we do to facilitate dev/test on kubernetes HEAD in the interim? It would behoove us to move to 3PR sooner rather than later. Can we use nightlies + nanokube/microkube/kube-aws/kops/omgwtfbbq, and drop our ConfigMap workaround?
Strike micro-kube from the list. I ended active development on that side project after minikube started to mature. I can look into options for what you have suggested, but I'm not yet sure which, if any, of the options you mentioned offer the flexibility to start a cluster from anything other than an officially released, semantically versioned build of k8s. We could
This is very much up for discussion. The longer we wait for a potentially fixed build to trickle into the zero-to-kubernetes tools, the longer it will take us to validate the 3PR approach. Given the recent state of 3PR, I'm worried that there are other undiscovered bugs lying under the surface, and finding those sooner rather than later will serve us well.
@slack @krancour I completely agree that we should mitigate the risk of waiting for 3PRs to be fully functional. In my opinion, however, the major risk here lies in building, running, and proving the control loop that handles and acts on service plan claims. See DATA_STRUCTURES.md for the latest on service plan claims.

Current Development Status

Before I elaborate on my preferred risk-mitigation strategy, let me first highlight some of our development status. These statuses will be relevant to my risk-mitigation argument. See below.
Mitigating Risk

PR #72 is the major risk-mitigating factor for third party resources. In that patch, the control loop is completely implemented, and consumers can interact with steward in semantically the same way as originally intended. The instrument by which they interact is different, though: instead of consumers using a 3PR, they use a ConfigMap.

This difference in machinery is also addressed in #72, however. The control loop is written in a storage-pluggable fashion, so we can quickly and easily swap in a 3PR-based implementation. The result is that we will have proven everything in the system except the use of the 3PR itself.

Next Steps

I believe that putting our resources into fixing 3PRs upstream would significantly complicate and slow development of steward. Further, given the state of our competition in the CNCF and Kubernetes proper, I think we should continue to make aggressive progress toward release rather than toward using a third party resource for service plan claims. However, if fixing third party resources upstream has strategic value that I'm not aware of, then my previous assertion may be invalid.

In summary, we have steward using a ConfigMap-based implementation today, with a clear path to swapping in 3PRs later.
Point well taken, and I appreciate the idea behind releasing sooner rather than later. 💃 Let's focus on rounding out implementations of the three Steward modes! We will need to make it clear in the Steward roadmap that 3PR is where we want to end up. We can decide later whether that should be a requirement for a stable release. It would be great if we could avoid an annoying migration right out of the gate. Lastly, we'll need to keep our ear to the ground regarding 3PRs so we know when they are "good enough" for us to switch our implementation.
If you create two 3rd party resources in a row:
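The original manifests were not preserved in this copy of the issue. A plausible reconstruction, using the ThirdPartyResource format from this era of Kubernetes with hypothetical names, would be:

```yaml
# Illustrative only: names, description, and versions are assumptions.
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: service-plan-claim.steward.deis.io
description: "A claim against a service plan"
versions:
  - name: v1
---
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: service-catalog-entry.steward.deis.io
description: "An entry in the service catalog"
versions:
  - name: v1
```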
... and then inspect them:
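The inspection command was elided here; it would have been something like the following, and both resources appear in its listing:

```sh
kubectl get thirdpartyresources
```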
They both appear to be present. But, when you try to get them, the second one created appears not to be there:
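The failing commands were also elided. Assuming two hypothetical kinds named service-plan-claim.steward.deis.io and service-catalog-entry.steward.deis.io (illustrative, not the originals), the behavior described reads roughly like:

```sh
kubectl get serviceplanclaims       # succeeds
kubectl get servicecatalogentries   # the server reports an unknown resource type
```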
I'm not yet sure if this is a bug in my manifests, in kubernetes, or something else, but it is important to solve before beginning the state machine work laid out in #18, since that work will require us to define and use ServicePlanClaim resources.
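For context, an instance of a 3PR-backed ServicePlanClaim would be a namespaced object along these lines. This is a sketch with an assumed API group/version and illustrative spec fields, not the actual schema (see DATA_STRUCTURES.md for that):

```yaml
apiVersion: steward.deis.io/v1
kind: ServicePlanClaim
metadata:
  name: my-claim
  namespace: default
# fields below are illustrative, not the real claim schema
spec:
  serviceID: mysql
  planID: standard
```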