
Cloudflow operator crashes when it handles a very large stream of k8s events. #607

Closed
RayRoestenburg opened this issue Jul 27, 2020 · 1 comment


@RayRoestenburg
Contributor

Describe the bug
The cloudflow-operator (version 2.0.5) crashes with the following error:
2020-07-24 12:20:23,057 ERROR [Materializer] - [app-event] Upstream failed.
akka.http.scaladsl.model.EntityStreamSizeException: EntityStreamSizeException: incoming entity size (while streaming) exceeded size limit (67108864 bytes)! This may have been a parser limit (set via akka.http.[server|client].parsing.max-content-length), a decoder limit (set via akka.http.routing.decode-max-size), or a custom limit set with withSizeLimit.

This happens even though the limit is already set to 64 MB.
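
For context, the limit named in the log is Akka HTTP's `max-content-length` (here the client-side setting, `akka.http.client.parsing.max-content-length`), and it is enforced while the response entity is streamed, not just against a declared Content-Length. A minimal sketch of where the failure surfaces; the endpoint URI and the sink are assumptions, not the operator's actual code:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.scaladsl.Sink

object EntityLimitSketch extends App {
  implicit val system: ActorSystem = ActorSystem("sketch")
  import system.dispatcher

  // Placeholder endpoint; the operator talks to the Kubernetes API server.
  Http()
    .singleRequest(HttpRequest(uri = "https://kubernetes.default.svc/api/v1/events"))
    .flatMap { response =>
      // The EntityStreamSizeException is raised while this stream is being
      // consumed: once more than max-content-length bytes (64 MiB in the
      // log above) have flowed through, Akka HTTP fails the stream, even
      // though the data arrives in small chunks.
      response.entity.dataBytes.runWith(Sink.ignore)
    }
}
```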

To Reproduce
Hard to reproduce, since it requires a serious number of events in a Kubernetes cluster.

Expected behavior
The cloudflow-operator should retrieve large result sets in chunks, which avoids entity size issues as long as no single entity exceeds the limit:
https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks
Also see doriordan/skuber#258 for a known issue in skuber. We'll need to use the limit and continue options in skuber, or open a PR on skuber so it does this automatically; a sketch of the chunked list protocol follows.
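
Per the Kubernetes docs linked above, the protocol is: pass `limit` on the first list request, then feed the returned `metadata.continue` token back into subsequent requests until the server stops returning one. A sketch of that loop, where `EventPage` and `listPage` are hypothetical stand-ins for whatever skuber ends up exposing:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical shapes: one page of results plus the server's continue token.
final case class EventPage(items: List[String], continueToken: Option[String])

// Hypothetical single-page call, e.g. GET /api/v1/events?limit=500&continue=...
def listPage(limit: Int, continueToken: Option[String])(
    implicit ec: ExecutionContext): Future[EventPage] = ???

// Drain all pages by chasing metadata.continue until it is absent. Each
// response stays small, so no single entity comes near the size limit.
def listAll(limit: Int = 500, continueToken: Option[String] = None)(
    implicit ec: ExecutionContext): Future[List[String]] =
  listPage(limit, continueToken).flatMap { page =>
    page.continueToken match {
      case token @ Some(_) => listAll(limit, token).map(page.items ::: _)
      case None            => Future.successful(page.items)
    }
  }
```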

Additional context

@DarthKrab reported this via Gitter: https://gitter.im/lightbend/cloudflow?at=5f1ae0852779966801fb73ee

@RayRoestenburg
Contributor Author

Fixed by not setting a limit on entity size: #908
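
For reference, one way to lift the limit in Akka HTTP; whether #908 does it via this call or via `akka.http.client.parsing.max-content-length = infinite` in configuration is an assumption here:

```scala
import akka.http.scaladsl.model.HttpResponse

// A sketch: disable the size check on one response entity so the stream is
// no longer failed at 64 MiB. This trades the crash for an unbounded entity,
// so the entity still has to be consumed incrementally as a stream.
def withoutEntityLimit(response: HttpResponse): HttpResponse =
  response.withEntity(response.entity.withoutSizeLimit)
```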
