
Conversation

@markdroth
Contributor

@markdroth markdroth commented Mar 14, 2025

Commit Message: xDS: ext_proc: add GRPC body send mode
Additional Description: Adds a new body send mode for gRPC traffic. Also adds a safe way for the ext_proc server to return OK status without losing data in FULL_DUPLEX_STREAMED and GRPC modes. See grpc/proposal#484 for context.
Risk Level: Low
Testing: N/A
Docs Changes: Included in PR
Release Notes: N/A
Platform Specific Features: N/A

Signed-off-by: Mark D. Roth <roth@google.com>
@repokitteh-read-only

As a reminder, PRs marked as draft will not be automatically assigned reviewers,
or be handled by maintainer-oncall triage.

Please mark your PR as ready when you want it to be reviewed!

🐱

Caused by: #38753 was opened by markdroth.

see: more, trace.

@repokitteh-read-only

CC @envoyproxy/api-shepherds: Your approval is needed for changes made to (api/envoy/|docs/root/api-docs/).
envoyproxy/api-shepherds assignee is @abeyad
CC @envoyproxy/api-watchers: FYI only for changes made to (api/envoy/|docs/root/api-docs/).

🐱

Caused by: #38753 was opened by markdroth.

see: more, trace.

@github-actions

This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

@github-actions github-actions bot added the "stale" label (stalebot believes this issue/PR has not been touched recently) on Apr 13, 2025
@markdroth markdroth added the "no stalebot" label (disables stalebot from closing an issue) and removed the "stale" label on Apr 17, 2025
@stevenzzzz
Contributor

stevenzzzz commented Sep 10, 2025

Could you clarify what gap this PR is trying to cover?
IIUC, when HTTP body chunks are accepted by Envoy, they are sent as-is to the ext_proc server. If the downstream is a gRPC server, gRPC messages following gRPC framing are sent in the HTTP body; by the time they arrive at Envoy, the HTTP data chunks may no longer align with message boundaries. But if Envoy sends them as-is (STREAMED, BUFFERED), the ext_proc server should be able to understand and unpack the data following gRPC framing.

I don't quite follow why the unpacking and repacking at Envoy is necessary.
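For concreteness, here is a minimal standalone C++ sketch of the gRPC framing being discussed (not Envoy code; the function name is invented for illustration). Each gRPC message is prefixed with a 5-byte header, one compressed-flag byte plus a 4-byte big-endian length, so an HTTP/2 DATA chunk boundary need not coincide with a message boundary, and a consumer of raw chunks has to buffer until a complete frame has arrived:

// Minimal sketch (not Envoy code) of gRPC message framing: each message is
// preceded by a 5-byte header -- 1 compressed-flag byte and a 4-byte
// big-endian length. An HTTP/2 DATA chunk may end mid-frame, so a consumer
// of raw chunks must buffer until the full message has arrived.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Extracts complete gRPC messages from `buffer`, leaving any trailing
// partial frame in place for the next chunk to complete.
std::vector<std::string> ExtractGrpcMessages(std::string& buffer) {
  std::vector<std::string> messages;
  while (buffer.size() >= 5) {
    const auto* p = reinterpret_cast<const uint8_t*>(buffer.data());
    const uint32_t len = (uint32_t(p[1]) << 24) | (uint32_t(p[2]) << 16) |
                         (uint32_t(p[3]) << 8) | uint32_t(p[4]);
    if (buffer.size() < 5 + len) break;  // partial message: wait for more data
    messages.push_back(buffer.substr(5, len));
    buffer.erase(0, 5 + len);
  }
  return messages;
}

int main() {
  // Simulate one 8-byte gRPC message arriving split across two HTTP/2 DATA
  // chunks whose boundaries do not align with the gRPC frame boundary.
  std::string stream;
  stream.append({'\0', '\0', '\0', '\0', '\x08'});  // frame header: uncompressed, length 8
  stream.append("abc");                              // first chunk ends mid-message
  printf("after chunk 1: %zu complete message(s)\n",
         ExtractGrpcMessages(stream).size());        // prints 0
  stream.append("defgh");                            // second chunk completes the message
  printf("after chunk 2: %zu complete message(s)\n",
         ExtractGrpcMessages(stream).size());        // prints 1
}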

@stevenzzzz
Contributor

@adisuissa @yanjunxiang-google FYI

@stevenzzzz
Contributor

/assign @yanjunxiang-google

@markdroth
Contributor Author

@stevenzzzz For an explanation of why the GRPC processing mode makes sense, see my pending gRFC for ext_proc support in gRPC:

https://github.com/grpc/proposal/blob/634cfec6f18bf12d09948faa94b69b2b21d9c259/A93-xds-ext-proc.md#payload-handling

@yanavlasov
Contributor

yanavlasov commented Oct 20, 2025

Consider the following xDS filter chain:

  1. gRPC deframing filter
  2. ext_proc filter
  3. gRPC re-framing filter

If gRPC treats (1) and (3) as no-ops, then the above xDS configuration will work the same on both gRPC and on Envoy: both data planes will use the deframed gRPC messages in the ext_proc communication. So this configuration works fine.

However, now consider a filter chain that has only the ext_proc filter -- i.e., it does not have (1) or (3). In gRPC, because (1) and (3) are no-ops, the ext_proc filter will still use deframed gRPC messages in the ext_proc communication. However, because (1) and (3) are not no-ops in Envoy, Envoy would send raw HTTP/2 DATA frames in the ext_proc communication. This means that this configuration will result in different behavior in gRPC than in Envoy, which defeats the entire purpose of xDS as a common data plane API.

There are downsides to encapsulating deframing in the ext_proc filter.

  1. In Envoy we plan to support other protocols in ext_proc payloads. This will require us to keep adding deframers for these other protocols to the ext_proc filter, which is not great from the ext_proc maintenance perspective.
  2. In Envoy we plan to apply policies and business logic to the deframed payloads before they reach ext_proc (or there may be no ext_proc at all). For example, we plan to apply annotations to MCP tools, RBAC, etc. after the MCP message has been deframed. This behavior needs to be composable, and we normally achieve this by building a filter chain with the required behaviors.

Supporting requirement 2 while also having ext_proc deframe would require duplicating deframing in two filters, which we should not do.

The way we would like to make this work in Envoy is with the following filter chain:

  1. gRPC, MCP, etc. deframing filter
  2. Protocol-specific filters. For example, an MCP annotation filter.
  3. ext_proc with content-type configuration. If the expected content-type frame is missing, it will fail the request.
  4. other filters.
  5. router

We do not have specific details at this point, since we are still designing it, but we are likely going to make the deframed protocol available via ambient request properties such as filter state, which other filters, including ext_proc, can use.

It is a bit of an inconvenience to have to add deframing filters in order to use the business logic for deframed protocols, but I do not think we can get around it if we want to make this business logic composable. ext_proc is just another element of this business logic and does not need to be special by having its own deframer. This does put some burden on the operator to ensure the right filter is present, but it is also something that will obviously be broken if the required filter is not included in the filter chain, and we have other examples where two filters have to work together to implement a specific behavior.

@markdroth
Contributor Author

markdroth commented Oct 21, 2025

@yanavlasov Thanks, that's helpful. I understand the structure you're describing. However, we need to do this in a way that does not break the cross-data-plane nature of xDS.

From an xDS API perspective, there is a requirement that there not be an xDS configuration that results in different behavior in different data planes. It's fine for a particular data plane not to support a given feature, in which case it can NACK the xDS resource that attempts to configure that feature. But it's not okay for two data planes to accept the same config and then behave fundamentally differently. All xDS API changes must comply with that requirement.

As I said earlier, having a separate gRPC deframing filter violates that requirement. Both data planes (Envoy and gRPC) would behave the same when the ext_proc filter is in between the gRPC deframing filter and the gRPC re-framing filter: in gRPC, which always does gRPC deframing, the deframing and re-framing filters would simply be no-ops. However, if the ext_proc filter is not between the gRPC deframing filter and the gRPC re-framing filter, then the behavior will be different for Envoy than for gRPC: Envoy would send raw HTTP/2 DATA frames to the ext_proc server, whereas gRPC would continue to send deframed gRPC messages.

(I considered the idea of having gRPC NACK if the ext_proc filter is not in between the gRPC deframing and reframing filters. However, I think that is actually a non-trivial thing to detect in the general case. It's possible for the gRPC deframing or reframing filters to be disabled on a per-route basis. And it may also be very hard to tell all the possible combinations in which filters may be executed if some of the filters are configured via the composite filter -- especially once we implement #40885.)

I don't see any feasible way of structuring the xDS configuration that meets the aforementioned requirement without making the deframing part of the ext_proc filter's configuration.

That having been said, I think there are ways that we could get the composability you seek without moving the gRPC deframing out of the ext_proc filter config. For example, consider the following approach:

  • Write a common library for splitting up a raw HTTP/2 DATA stream into individual framed gRPC messages. Each body chunk produced by this library would be a single framed gRPC message, including the gRPC frame header, so any subsequent filter can freely interpret it as a raw HTTP/2 DATA frame, just like it does today. However, the library would also set some filter state that tells subsequent filters that the body chunk contains exactly one gRPC message. Filters that need to see individual deframed gRPC messages can therefore just skip the 5-byte gRPC frame header and use only the gRPC message.
  • This library can be used in any filter. If the library is used in more than one filter, the gRPC message splitting still happens only once: when the second filter calls the library to split up the gRPC messages, the library will see the filter state that says that they've already been split up, so it will just pass them through as-is.
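A rough, hypothetical C++ sketch of that library idea (the names GrpcMessageSplitter, FilterState, and kGrpcChunksAreMessages are invented for illustration and are not existing Envoy APIs): the first filter to use the splitter does the actual splitting and records that fact in shared per-stream state, so a second filter's splitter just passes the one-message chunks through.

// Hypothetical sketch of the shared splitting library described above. The
// names (GrpcMessageSplitter, FilterState, kGrpcChunksAreMessages) are
// invented for illustration and are not existing Envoy APIs.
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Stand-in for per-stream state shared by all filters on the stream.
using FilterState = std::map<std::string, bool>;
constexpr char kGrpcChunksAreMessages[] = "grpc.chunks_are_messages";

class GrpcMessageSplitter {
 public:
  explicit GrpcMessageSplitter(FilterState& state)
      : passthrough_(state[kGrpcChunksAreMessages]) {
    // The first filter to create a splitter claims the splitting work; any
    // filter after it sees the flag and simply passes chunks through.
    state[kGrpcChunksAreMessages] = true;
  }

  // Consumes a body chunk and returns zero or more chunks, each containing
  // exactly one framed gRPC message (5-byte header included). A trailing
  // partial frame is buffered until the next chunk completes it.
  std::vector<std::string> OnBodyChunk(const std::string& chunk) {
    if (passthrough_) return {chunk};
    buffer_ += chunk;
    std::vector<std::string> out;
    while (buffer_.size() >= 5) {
      const auto* p = reinterpret_cast<const uint8_t*>(buffer_.data());
      const uint32_t len = (uint32_t(p[1]) << 24) | (uint32_t(p[2]) << 16) |
                           (uint32_t(p[3]) << 8) | uint32_t(p[4]);
      if (buffer_.size() < 5 + len) break;        // wait for the rest of the frame
      out.push_back(buffer_.substr(0, 5 + len));  // keep the frame header
      buffer_.erase(0, 5 + len);
    }
    return out;
  }

 private:
  bool passthrough_;
  std::string buffer_;
};

int main() {
  FilterState state;
  GrpcMessageSplitter first(state), second(state);  // two filters, one stream
  std::string chunk({'\0', '\0', '\0', '\0', '\x03'});
  chunk += "abc";  // one complete 3-byte message in a single chunk
  const auto msgs = first.OnBodyChunk(chunk);       // first filter splits
  const auto same = second.OnBodyChunk(msgs[0]);    // second filter passes through
  printf("split: %zu message(s); passthrough kept %zu byte(s)\n",
         msgs.size(), same[0].size());
}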

The same approach could be used for JSON-RPC or any other protocol. We can have a common xDS representation of how this deframing works for a given filter:

enum BodyChunkMode {
  // Raw HTTP payload contents.
  RAW_HTTP = 0;

  // Deframed gRPC messages.
  GRPC = 1; 

  // JSON-RPC messages.
  JSON_RPC = 2;
}

Each individual filter that needs to see individual gRPC or JSON-RPC messages would have a config field of that type. So for the ext_proc filter, the RBAC filter, and any other filter that needs to see individual gRPC or JSON-RPC messages, the filter's config can directly set that field to GRPC or JSON_RPC, and the right thing will happen.

Benefits of this approach:

  • Provides the composition ability that you described.
  • Eliminates the need to add a new body send mode to ext_proc. For gRPC, we can use the existing FULL_DUPLEX_STREAMED mode.
  • Configuring BodyChunkMode to GRPC will directly be part of the ext_proc filter's config, which means that gRPC can NACK an ext_proc filter config that does not configure that value.
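To illustrate the last point, here is a hypothetical C++ sketch of the config-time check a data plane that supports only deframed gRPC messages could run; the ExtProcConfig struct and ValidateExtProcConfig function are invented for this example, and rejecting the config corresponds to NACKing the xDS resource that carries it.

// Hypothetical sketch of the config-time check a data plane that supports only
// deframed gRPC messages could run. ExtProcConfig and ValidateExtProcConfig are
// invented for this example; rejecting the config corresponds to NACKing the
// xDS resource that carries it.
#include <cstdio>
#include <optional>
#include <string>

enum class BodyChunkMode { RAW_HTTP = 0, GRPC = 1, JSON_RPC = 2 };

struct ExtProcConfig {
  BodyChunkMode body_chunk_mode = BodyChunkMode::RAW_HTTP;
};

// Returns an error message if the config asks for a body mode this data plane
// cannot honor; std::nullopt means the config is accepted.
std::optional<std::string> ValidateExtProcConfig(const ExtProcConfig& config) {
  if (config.body_chunk_mode != BodyChunkMode::GRPC) {
    return "ext_proc body_chunk_mode must be GRPC on this data plane";
  }
  return std::nullopt;
}

int main() {
  ExtProcConfig config;  // defaults to RAW_HTTP, which this data plane rejects
  if (auto err = ValidateExtProcConfig(config)) {
    printf("NACK: %s\n", err->c_str());
  }
}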

Thoughts?

@yanavlasov
Contributor

I do not see how the approach you are suggesting is practical given the ecosystem of Envoy filters. This puts the burden on developers to add these libraries into all extensions that would need them. How would this work for WASM extensions, dynamic Rust modules or Lua? Would we provide libraries for these environments too? It is inferior to a generic approach and is hardly a composable solution.

Going back to the original issue of putting the payload content type into the body mode: such an approach would also prevent us from implementing transcoding in ext_proc, since it is a requirement that the server and client use the same body send mode. It prevents the server from indicating that the type of content it is returning to the client has changed -- for example, from MCP to REST.

@yanavlasov
Contributor

The problem you are describing with respect to correctness of the overall filter chain behavior is not new. And this is not really an xDS problem; xDS does not and cannot provide constraints on the filter chain config.

The failure you are describing is inherent in the approach to use composability to achieve the desired business logic. It has its advantages and disadvantages. Missing, reordered or misconfigured filters, while individually having the right config, can still produce invalid behavior when combined. This came up before and was flagged in the most recent threat modelling and we will try to get resources to harden this area. Envoy already has some solutions to this problem, which are used in production, and they can be used to achieve desired safety in the gRPC/CSM interoperability case too.

I understand that you would like to have configuration safety guaranteed by the API itself, and that it is the ideal approach. But it is impractical in Envoy, given its architecture. We can build this for ext_proc and gRPC deframing, but then the same principle would require us to put a JWT parser and validator into RBAC so that it does not need to depend on the JWT filter to enforce policies based on JWT claims. There are many more examples like this, and once you extend it to dependencies that are proprietary or built using different toolchains (WASM or Rust), it just becomes a science project.

@yanjunxiang-google
Contributor

yanjunxiang-google commented Nov 6, 2025

@markdroth Currently, the ext_proc filter has grown into a very big extension with a complicated state machine and many processing modes. So there is an initiative to simplify it by moving non-core parts, like HTTP server support and observability mode support, out of it to keep it simple and easy to maintain. With that in mind, if the goal is "the new GRPC body send mode is to handle the buffering and deframing in the data plane instead of making every ext_proc server handle it itself", can we instead have a new gRPC buffering/framing extension to achieve this? This new filter could be configured right in front of the ext_proc filter. That way, when the traffic reaches the ext_proc filter, it is already framed, and the ext_proc filter does not need to do anything specific.

@markdroth
Contributor Author

I think the requirements here are as follows:

  1. The ext_proc request must contain an indication of what type of body data is being sent (e.g., raw HTTP/2 DATA frames or deframed gRPC messages), so that the ext_proc server knows how to handle the body data.
  2. Data planes must be able to NACK an xDS resource that configures them to send a type of body data that the data plane does not support (e.g., if the data plane supports sending only deframed gRPC messages, then it needs to be able to NACK a resource that configures it to send raw HTTP/2 DATA frames).

If we simply add a separate gRPC deframing filter and have the ext_proc filter be completely unaware of what type of data it's being asked to send, that would fail to meet those requirements:

  1. If the ext_proc filter is unaware of what data it's sending, then it cannot include an indication of what type of data it's sending in the ext_proc request.
  2. If the ext_proc filter config does not directly indicate what type of data it's sending and we instead rely on it being after a gRPC deframing filter to indicate that it's going to send deframed gRPC messages, then it's not feasible for the data plane to determine when it's being configured to send a type of data it does not support. (If the filter chain were always completely flat, this would be feasible, but given the existence of things like the composite filter, ECDS, and per-route filter config overrides, it's almost impossible to determine whether the ext_proc filter will be after the gRPC deframing filter at config validation time.)

I think that the only way to meet those requirements is to have something directly in the ext_proc filter config itself that indicates that it's being configured to send deframed gRPC messages.

I understand the desire for Envoy's implementation to avoid duplication and achieve better composability, but I think we need to find a way to do that without breaking the above requirements. I think my suggestion above (#38753 (comment)) provides a reasonable way to do this. It does mean that there would be a little bit of work needed for each filter that needs to deal with the deframed messages, but I think it should be possible to keep that to a minimum by putting the common functionality into a separate library that can be used in multiple filters.

I am open to other suggestions that meet the requirements above. But I don't think that having a separate gRPC deframing filter and having the ext_proc filter be completely unaware of what type of data it's sending actually meets those requirements.

@yanjunxiang-google
Contributor

yanjunxiang-google commented Nov 6, 2025

Hi @markdroth, for "1) The ext_proc request must contain an indication of what type of body data is being sent (e.g., raw HTTP/2 DATA frames or deframed gRPC messages), so that the ext_proc server knows how to handle the body data": can this be detected by the ext_proc server by looking at the request headers, like the Content-Type header?

For "2) NACK an xDS resource that configures them to send a type of body data that the data plane does not support": if the Envoy ext_proc filter sends whatever data it received, then NACKing a configuration is unnecessary.

@yanjunxiang-google
Contributor

@markdroth, one question: if the Envoy ext_proc filter is configured with this gRPC body mode, what happens if this Envoy receives an HTTP request? Does it encode this gRPC mode configuration and send it to the ext_proc server?

@markdroth
Contributor Author

For "1) The ext_proc request must contain an indication of what type of body data is being sent (e.g., raw HTTP/2 DATA frames or deframed gRPC messages), so that the ext_proc server knows how to handle the body data", can this be detected by the ext_proc server from looking into the request headers, like the Content-Type header?

No, it can't. As an example, the content-type will be application/grpc for all gRPC traffic, but the ext_proc filter in Envoy can be configured to send either the raw HTTP/2 DATA frames or the deframed gRPC messages to the ext_proc server.

The thing that we need to communicate to the ext_proc server is a function of the ext_proc filter configuration, not a function of the traffic seen by the ext_proc filter.

For "2 NACK an xDS resource that configures them to send a type of body data that the data plane does not support", if Envoy ext_proc filter sends whatever data it received, this NACK a configuration is unnecessary.

That's incorrect. If we had a separate gRPC deframing filter, then the behavior of the ext_proc filter would depend on whether it was after the gRPC deframing filter. Therefore, a data plane that does not support sending HTTP/2 DATA frames would need to NACK if the ext_proc filter was not after the gRPC deframing filter. And as I indicated above, that is not feasible to figure out, since there are so many layers of indirection possible in constructing the filter chain (e.g., composite filter, ECDS, and per-route config overrides).

if Envoy ext_proc filter is configured with this gRPC body mode, what happen if this Envoy receives an HTTP request? Does it encode this gRPC mode configuration and send to the ext_proc server?

If Envoy attempts to do gRPC deframing on traffic that isn't actually gRPC traffic, I would expect that deframing to fail in most cases, because the length of the body will not match the length specified by the gRPC frame header. Envoy would probably want to fail the request at that point, because it's received non-gRPC traffic in a place where it was configured to expect gRPC traffic.

That having been said, I think this is a separate question from how we express the ext_proc configuration, because regardless of whether we configure the gRPC deframing as part of the ext_proc filter or in a separate gRPC deframing filter, we'll have to address this case either way.

@yanjunxiang-google
Contributor

"application/grpc" that's the outer gRPC message Envoy built to send the message to the ext_proc server. The original client request headers, including "Content-Type" header, are encoded in request_headers here:

HttpHeaders request_headers = 2;
.

@markdroth
Contributor Author

All gRPC traffic has content-type application/grpc. You're right that that's true for the gRPC request that Envoy sends to the ext_proc server, but it's also true for any gRPC request that Envoy receives from its downstream client. In this context, we're talking only about the latter.

What I'm saying is that whenever Envoy receives a gRPC request from a downstream client, that request will have content-type set to application/grpc. Envoy can be configured to send the body of that request to the ext_proc server as either HTTP/2 DATA frames or as deframed gRPC messages. The choice of which behavior Envoy uses is determined by Envoy's configuration, not by the content of the request it received from the downstream client.

@yanjunxiang-google
Contributor

yanjunxiang-google commented Nov 6, 2025

  1. So, we agree that the ext_proc server can retrieve the client request Content-Type header (from HttpHeaders request_headers = 2;) to detect whether this client request is a deframed gRPC message or not.

  2. The other requirement you want is to have Envoy send one callout message per gRPC message (since the original raw HTTP/2 DATA frame may contain either a small piece of a message or multiple messages, you want Envoy to be able to buffer or split). My suggestion is to do this outside of the Envoy ext_proc filter. As you mentioned in A93: xDS ExtProc Support grpc/proposal#484, in the gRPC data plane "the framing/deframing is handled in the transport layer, and filters see only individual gRPC messages". So my idea is to do the framing/deframing in a separate Envoy extension.

@markdroth
Contributor Author

  1. So, we agreed upon that the ext_proc server can retrieve the client request Content-Type headers (
    HttpHeaders request_headers = 2;

    ) to detect whether this client request is a deframed gRPC message or not.

No. The content-type of the downstream request says whether it's a gRPC request. But that does not tell the ext_proc server whether the body chunks that we send are chunks of the HTTP/2 DATA frames or deframed gRPC messages.

When Envoy receives a gRPC request and needs to send it to the ext_proc server, there are two possible ways it can do this:

  1. Envoy is configured to send HTTP/2 DATA frames to the ext_proc server. This means that the ext_proc server needs to handle the buffering and deframing of the gRPC messages itself, and that it needs to add the gRPC framing to the body chunks that it sends back to Envoy, since Envoy will interpret those as HTTP/2 DATA frame chunks.
  2. Envoy is configured to send deframed gRPC messages. This means that Envoy will handle the buffering and deframing of gRPC messages, and each body chunk it sends to the ext_proc server will contain exactly one deframed gRPC message. The ext_proc server can therefore process each body chunk as a deframed message, and it can send back individual deframed messages, which Envoy will then re-frame.

Both of these behaviors should be possible, and the downstream gRPC request is exactly the same in both cases, so we cannot determine the behavior based on any property of the downstream request.
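To make the difference concrete, here is a small standalone C++ sketch (assumed for illustration, not taken from this PR): in case 1, because Envoy forwards the returned bytes as raw DATA, the ext_proc server must add the 5-byte gRPC frame header to every message it sends back, whereas in case 2 it would return the bare message and let the data plane re-frame it.

// Minimal sketch (assumed, not from this PR) of the extra work an ext_proc
// server has to do in case 1: when body chunks are raw HTTP/2 DATA bytes,
// any message the server sends back must carry its own 5-byte gRPC frame
// header, because Envoy will forward the bytes as-is. In case 2 the server
// returns the bare message and the data plane re-frames it.
#include <cstdint>
#include <cstdio>
#include <string>

std::string AddGrpcFrameHeader(const std::string& message, bool compressed = false) {
  std::string framed;
  framed.push_back(compressed ? '\x01' : '\x00');  // compressed-flag byte
  const uint32_t len = static_cast<uint32_t>(message.size());
  framed.push_back(static_cast<char>((len >> 24) & 0xff));  // 4-byte big-endian length
  framed.push_back(static_cast<char>((len >> 16) & 0xff));
  framed.push_back(static_cast<char>((len >> 8) & 0xff));
  framed.push_back(static_cast<char>(len & 0xff));
  return framed + message;
}

int main() {
  const std::string body = AddGrpcFrameHeader("serialized-proto-bytes");
  printf("framed body is %zu bytes (5-byte header + %zu-byte message)\n",
         body.size(), body.size() - 5);
}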

  1. The other requirement you want is to have Envoy send one callout message for one gRPC message(as the original HTTP2 raw data frame may either contain a small piece of it, or multiple of it, so you want Envoy to be able to buffer, or split). My suggestion is to do this outside of Envoy ext-proc filter. As you mentioned in A93: xDS ExtProc Support grpc/proposal#484, in gRPC data plane, "the framing/deframing is handled in the transport layer, and filters see only individual gRPC messages". So, my idea is to do framing/deframing in a separate Envoy extension.

I understand your idea. What I've tried to explain here (several times now) is that that approach violates the two requirements that I articulated above.

@yanjunxiang-google
Contributor

yanjunxiang-google commented Nov 7, 2025

Hi @markdroth, I think we are getting closer. My question is why Envoy has to support sending both options, i.e., either deframed gRPC messages or raw HTTP/2 DATA frames. Can Envoy just support sending deframed gRPC messages, so there is no need to encode an extra flag to tell the ext_proc server? And on the ext_proc server side, if the request Content-Type is gRPC, it would only expect deframed gRPC messages, so it would not need to buffer/deframe. Is that what the gRPC data plane is doing today?

@markdroth
Contributor Author

I think ext_proc service owners should be able to choose how the data is sent to them. There may be cases where they want to see HTTP/2 DATA frames instead of deframed gRPC messages. We can't make that choice for them.

Even if you don't think that's important, though, there's a backward compatibility issue here: any existing ext_proc server that is handling gRPC traffic is seeing HTTP/2 DATA frames and handling the gRPC deframing itself, and we can't break that by unconditionally changing the behavior to send deframed gRPC messages instead. We need to introduce the capability to handle deframed gRPC messages in a way that allows existing users to migrate to it without breakage.

@markdroth
Contributor Author

Thanks, Yan!

Did you want to consider the approach I described in #38753 (comment)? I am happy to go with something like that if you think it would provide a better story for composability in Envoy.

If not, I'm certainly happy to see this get merged. :) Thanks!

@yanjunxiang-google
Contributor

"If Envoy attempts to do gRPC deframing on traffic that isn't actually gRPC traffic, I would expect that deframing to fail in most cases, because the length of the body will not match the length specified by the gRPC frame header. Envoy would probably want to fail the request at that point, because it's received non-gRPC traffic in a place where it was configured to expect gRPC traffic." ----- With that, I would assume Envoy should choose whether perform gRPC deframing based on the traffic itself not based on this new GRPC processing mode configuration. It's does not make sense to me if Envoy ext_proc filter, configured with gRPC mode, to drop all non-gRPC traffic. Then what this new mode configuration can help us? For ext_proc server migration perspective, for existing servers which already support deframing from raw HTTP2 data frames, receiving the already deframed gRPC message will only make things easier for them.

@markdroth
Contributor Author

@yanjunxiang-google

With that, I would assume Envoy should choose whether to perform gRPC deframing based on the traffic itself, not based on this new GRPC processing mode configuration. It does not make sense to me for the Envoy ext_proc filter, configured with gRPC mode, to drop all non-gRPC traffic. Then how does this new mode configuration help us? From the ext_proc server migration perspective, for existing servers which already support deframing from raw HTTP/2 DATA frames, receiving already-deframed gRPC messages will only make things easier for them.

At this point, I'm feeling like you're just not actually reading what I've been saying, because I've already addressed these questions several times. What you are proposing here simply will not work. Let me try to explain this one more time.

It's true that non-gRPC traffic won't work if we are configured to do gRPC deframing. However, for gRPC traffic, the ext_proc server can handle the traffic either as HTTP/2 DATA frames or as deframed gRPC messages. There are reasons that the ext_proc server may want one or the other, which means that it's something that needs to be explicitly configured, not something we can assume based on the type of traffic.

Yes, it will make things easier for the ext_proc server to receive deframed gRPC messages, but their code still has to be written in a way that can handle the change. If an ext_proc server is currently written to support only HTTP/2 DATA frames, then it will expect to be doing the gRPC deframing itself. If we just suddenly start sending it deframed gRPC messages, it will break.

Note that if an Envoy instance is handling both gRPC and non-gRPC traffic, the ext_proc filter can be configured to do gRPC deframing only for the gRPC methods by using per-route filter config overrides. There is no conflict between doing this for gRPC traffic and not doing it for non-gRPC traffic.

@tonya11en
Member

Hey folks, it seems like this PR has stalled out even though there is already senior maintainer approval. Are we waiting on anything specific?

@phlax
Member

phlax commented Dec 8, 2025

@leonm1 could you please review again as you have a pending change request

@yanavlasov i think this is waiting on you to land it

Contributor

@leonm1 leonm1 left a comment

Sorry to hold this up with the "request changes" bit. I did not realize that was blocking.

@phlax
Member

phlax commented Dec 11, 2025

@yanavlasov could you take another look please - there was some discussion after your approval, so i think it needs a final review and/or for you to land

@mathetake
Member

Kindly pinging @yanavlasov for a final call on merging -- or are we still waiting on anything specific?

@yanavlasov yanavlasov merged commit 7b3a632 into envoyproxy:main Dec 16, 2025
24 of 25 checks passed
@markdroth markdroth deleted the ext_proc_grpc branch December 16, 2025 20:26
@coolg92003
Contributor

Hi @markdroth @yanavlasov @mathetake @yanjunxiang-google,
Please help with the issue below, where one extra space is added:
[screenshot]
and
[screenshot]

thanks
Cliff
