
[release-1.x] Patching a namespaced Secret triggers an error due to unsupported media type #893

Closed
JBWatenbergScality opened this issue Oct 18, 2022 · 27 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@JBWatenbergScality

Describe the bug

When trying to use the patchNamespacedSecret method on the release-1.x branch, the following error is raised:

Error: None of the given media types are supported: application/json-patch+json, application/merge-patch+json, application/strategic-merge-patch+json, application/apply-patch+yaml
    at ObjectSerializer.getPreferredMediaType (ObjectSerializer.js:1760:1)
    at CoreV1ApiRequestFactory.patchNamespacedSecret (CoreV1Api.js:8653:1)
    at ObservableCoreV1Api.patchNamespacedSecret (ObservableAPI.js:9211:1)
    at ObjectCoreV1Api.patchNamespacedSecret (ObjectParamAPI.js:2661:1)
   //...

Client Version
release-1.x

Server Version
1.24.3

To Reproduce
Sample code triggering the error:

import { CoreV1Api } from "@kubernetes/client-node";

// `config` is the client configuration built elsewhere (e.g. from a loaded KubeConfig)
const coreV1 = new CoreV1Api(config);
coreV1.patchNamespacedSecret({
  namespace: "default",
  name: "my-secret",
  body: { "tls.crt": "newValue" },
});

Also, somewhat related to this issue: I think we are missing a way to instruct patchNamespacedSecret which patch strategy to use. In the example above I would expect it to use application/merge-patch+json, but right now there is no way to provide the patch strategy.
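For concreteness, a JSON merge patch for a Secret would look roughly like the sketch below (values are placeholders, not taken from the report above); the point is that the server only treats the body as a merge patch when the request carries Content-Type: application/merge-patch+json:

    // Hedged sketch: a minimal merge-patch body for a Secret.
    // Secret values live under `data` (base64-encoded) or `stringData` (plain text).
    const mergePatchBody = {
      stringData: {
        "tls.crt": "newValue",
      },
    };
    // For the server to interpret this as a merge patch, the request must be sent with:
    //   Content-Type: application/merge-patch+json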

@brendandburns
Contributor

cc @davidgamero

@davidgamero
Contributor

@JBWatenbergScality thanks for letting us know. Did you have this working on the 0.x version prior?
I think you are correct that it would require a custom Content-Type header; let me check if we have a supported pattern for this.

@davidgamero
Contributor

From a quick look through, it seems like some of the media types are missing in this list, but you may be able to supply one through the options parameter or by using the CustomObjectsApi.

@vpeltola

I had a similar problem with patching cronjobs and I was able to add headers via the options argument and get it working. For example:

    // The `undefined` arguments are the optional positional parameters of the
    // 0.x method signature; the final options argument carries the headers.
    const result = await batchV1Api.patchNamespacedCronJob(
      name, ns, body, undefined, undefined, undefined, undefined, undefined,
      {
        headers: {
          'Content-Type': 'application/merge-patch+json'
        }
      });
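As a small aside, the 0.x client also exports PatchUtils constants for these media types, so the header value does not need to be a string literal. A variation on the snippet above, with the same assumptions about the surrounding variables (batchV1Api, name, ns, body):

    import { PatchUtils } from '@kubernetes/client-node';

    // Same call as above, using the exported constant for the merge-patch media type.
    const result = await batchV1Api.patchNamespacedCronJob(
      name, ns, body, undefined, undefined, undefined, undefined, undefined,
      { headers: { 'Content-Type': PatchUtils.PATCH_FORMAT_JSON_MERGE_PATCH } }
    );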

@clintonmedbery
Contributor

I am currently working on fixing these outdated examples and am getting the same error, notably on readNamespacedDeployment and createNamespace. I might go ahead and push up what I've got so we have some easy-to-recreate examples.

@davidgamero
Contributor

davidgamero commented May 4, 2023

I'm taking a look at patchNamespacedPod and ran into an issue where supplying a new middleware wipes the configuration instead of merging it, resulting in an empty base server URL. I've been trying out something like this:

const headerPatchMiddleware = new PromiseMiddlewareWrapper({
  pre: async (requestContext: RequestContext) => {
    requestContext.setHeaderParam("Content-type", PatchUtils.PATCH_FORMAT_JSON_PATCH);
    return requestContext;
  },
  post: async (responseContext: ResponseContext) => responseContext,
});

and then passing that in the options.middleware array.
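Put together, the pattern looks roughly like the sketch below. This is a sketch only: it assumes PromiseMiddlewareWrapper, RequestContext, ResponseContext, and PatchUtils are importable from the package, and that the object-param API accepts an options object carrying a middleware array, as described above; the pod name, namespace, and patch body are hypothetical.

    import {
      CoreV1Api,
      KubeConfig,
      PatchUtils,
      PromiseMiddlewareWrapper,
      RequestContext,
      ResponseContext,
    } from '@kubernetes/client-node';

    const kc = new KubeConfig();
    kc.loadFromDefault();
    const coreV1 = kc.makeApiClient(CoreV1Api);

    // Middleware that forces the JSON-patch media type onto the outgoing request.
    const headerPatchMiddleware = new PromiseMiddlewareWrapper({
      pre: async (requestContext: RequestContext) => {
        requestContext.setHeaderParam('Content-type', PatchUtils.PATCH_FORMAT_JSON_PATCH);
        return requestContext;
      },
      post: async (responseContext: ResponseContext) => responseContext,
    });

    // Hypothetical call shape: a JSON-patch body plus the middleware passed via options.
    await coreV1.patchNamespacedPod(
      {
        name: 'my-pod',
        namespace: 'default',
        body: [{ op: 'replace', path: '/metadata/labels/app', value: 'patched' }],
      },
      { middleware: [headerPatchMiddleware] },
    );

Note the caveat above: at the time of writing, supplying a middleware this way wiped the rest of the configuration (empty base server URL) instead of merging into it.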

@clintonmedbery
Contributor

@davidgamero is there a reason we are still generating the client with TYPESCRIPT="${GEN_ROOT}/openapi/typescript.sh" instead of TYPESCRIPT="${GEN_ROOT}/openapi/typescript-fetch.sh"? Would this have anything to do with this issue? We have an item in FETCH_MIGRATION to Switch generate-client script to use typescript-fetch but I don't see where we have done that.

@davidgamero
Contributor

@clintonmedbery good point, that item should be updated for clarity. The generator we are using is the new typescript generator that is still in development, with node-fetch selected as the framework config option.
It's an attempt to merge the various typescript client generators, which could hopefully one day give us an easy upgrade path to native fetch and potentially support other frameworks.
We should probably update that in the main thread for clarity, since right now it's only in the migration doc.

I see two related issues here: we don't have a clear pattern or example for providing a custom patch strategy (this should be doable with the current requestContext middleware), and some of the generated code is missing media types. Both appear to come from a disconnect between how the typescript generator creates the allowlist of media types and the swagger spec.

@dleehr

dleehr commented Aug 23, 2023

I was able to get patch operations working in 1.0.0-rc3 with a minor edit to the generated ObjectSerializer.js. The code that needs to change is in https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/main/resources/typescript/model/ObjectSerializer.mustache, so I've opened a PR on that project to support the additional media types: OpenAPITools/openapi-generator#16386

Edit: I did try to fix this with middleware, but debugging the operation proved that it was failing before it even got to my middleware. The issue is that ObjectSerializer doesn't support any of the candidate media types for patching.
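To illustrate the shape of the problem (a rough sketch, not the actual generated code or the contents of that PR): the generated ObjectSerializer selects a media type from a fixed allowlist and throws the error above when none of an operation's candidate types appear in it, so the fix amounts to extending that allowlist with the patch media types. Conceptually:

    // Rough illustration only; names and priority values are hypothetical.
    const supportedMediaTypes: { [mediaType: string]: number } = {
      'application/json': Infinity,
      'application/json-patch+json': 1,
      'application/merge-patch+json': 1,
      'application/strategic-merge-patch+json': 1,
      'application/apply-patch+yaml': 1,
    };

    // getPreferredMediaType-style selection: keep the supported candidates and
    // pick the highest-priority one, or fail the way the original error shows.
    function getPreferredMediaType(candidates: string[]): string {
      const supported = candidates.filter((m) => m in supportedMediaTypes);
      if (supported.length === 0) {
        throw new Error(`None of the given media types are supported: ${candidates.join(', ')}`);
      }
      return supported.sort((a, b) => supportedMediaTypes[b] - supportedMediaTypes[a])[0];
    }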

macjohnny pushed a commit to OpenAPITools/openapi-generator that referenced this issue Aug 24, 2023
* Add supportedMediaTypes needed for kubernetes client

kubernetes-client/javascript#893

* Add generated files
@mstruebing
Member

@dleehr would your middleware solution work now if we used the updated code from the openapi-generator that was merged recently?

@dleehr

dleehr commented Aug 24, 2023

@mstruebing The middleware change is actually unnecessary. The client is already determining what the Content-Type should be for the request (an enhancement over 0.x), but the generated ObjectSerializer was blocking that. Updating the generated ObjectSerializer code was the only fix needed for 1.0.0-rc3. So I think regenerating with openapi-generator will fix it here.

I hand-patched node_modules/@kubernetes/client-node/dist/gen/models/ObjectSerializer.js after installing 1.0.0-rc3, and now I'm able to successfully call patchNamespacedSecret, patchNamespacedServiceAccount, etc.

@davidgamero
Contributor

I believe in the past we supplied a custom ObjectSerializer, but I agree that it's cleaner to just contribute to the generator instead.

Looking forward to the next rc since this should unblock a ton of routes!

@guomeng306

@dleehr
Thanks so much for your great work.
The fix for the error

Error: None of the given media types are supported: application/json-patch+json, application/merge-patch+json, application/strategic-merge-patch+json, application/apply-patch+yaml

on patchNamespacedSecret has been merged to master.

Is there an ETA for the availability of 1.0.0-rc4 with this fix?

Thanks a lot.

@guomeng306

By the way, is there any workaround for this error on patchNamespacedSecret?

@davidgamero
Contributor

@brendandburns it'd be great if we could get a new rc when you have time, please?

@davidgamero
Contributor

@guomeng306 unfortunately there isn't a workaround in the current rc, AFAIK, without editing the generated ObjectSerializer to include the needed media types like @dleehr did. I'm happy to point you to those changes, but we should have a new rc for you soon with the fix!

@guomeng306

@davidgamero understood, thanks so much. By the way, is there an ETA for the availability of 1.0.0-rc4 with this fix? E.g., is it possible in the next two weeks? :)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2024
@jeromy-cannon

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2024
@jeromy-cannon

Is there a workaround for this yet? I ran into this issue with v0.20.0.

@davidgamero
Contributor

@jeromy-cannon this issue is from the 1.x releases, which included a fix in 1.0.0-rc4.

If you are experiencing this on a v0.x.x release, we may want to open a separate issue to track it, as the 0.x and 1.x working branches are separate.

IIRC, one 0.x approach to this issue was the KubernetesObjectApi, which doesn't yet have all the same functionality on the 1.x branch but should allow handling patch media types on 0.x releases (a sketch of that approach follows at the end of this comment).

Also, it would help for repro if you could include the exact media type and the method call you are expecting to use. Is it application/merge-patch+json, as mentioned at the start of the thread?
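A hedged sketch of that 0.x KubernetesObjectApi approach, assuming its patch method takes (spec, pretty, dryRun, fieldManager, force, options) and that the headers option controls the patch media type; the Secret name and values are hypothetical:

    import * as k8s from '@kubernetes/client-node';

    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();
    const objectClient = k8s.KubernetesObjectApi.makeApiClient(kc);

    // Patch payload addressed by apiVersion/kind/metadata, as KubernetesObjectApi expects.
    const secretPatch = {
      apiVersion: 'v1',
      kind: 'Secret',
      metadata: { name: 'my-secret', namespace: 'default' },
      stringData: { 'tls.crt': 'newValue' },
    };

    // Assumed 0.x signature: patch(spec, pretty?, dryRun?, fieldManager?, force?, options?).
    await objectClient.patch(secretPatch, undefined, undefined, undefined, undefined, {
      headers: { 'Content-Type': k8s.PatchUtils.PATCH_FORMAT_JSON_MERGE_PATCH },
    });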

@jeromy-cannon

jeromy-cannon commented Feb 21, 2024

@davidgamero, thank you for the response. Great news on the workaround fix in 1.0.0-rc4. As a workaround I just did a get/delete/(update the payload from the get)/insert sequence. Thank you for letting me know about the KubernetesObjectApi, I might need to use that strategy in the future.
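For anyone else on 0.x landing here, a variation on that workaround using read + replace instead of delete + re-create looks roughly like this (a sketch only, error handling omitted; the Secret name and values are placeholders):

    import * as k8s from '@kubernetes/client-node';

    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();
    const coreV1 = kc.makeApiClient(k8s.CoreV1Api);

    // Read the current Secret, update the payload locally, then replace it.
    // The body returned by the read already carries the resourceVersion that
    // replaceNamespacedSecret uses for its optimistic-concurrency check.
    const { body: secret } = await coreV1.readNamespacedSecret('my-secret', 'default');
    secret.stringData = { ...(secret.stringData ?? {}), 'tls.crt': 'newValue' };
    await coreV1.replaceNamespacedSecret('my-secret', 'default', secret);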

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2024
@rossanthony

rossanthony commented May 22, 2024

I've run into this same issue and resolved it for now by using 1.0.0-rc4 - but I'm curious if anyone knows what the status of this release candidate is? How close is it to being cut as the official v1?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jul 21, 2024