kopf 0.23.2 fails when a field in the resource being watched exceeds a certain size limit
In our case it is a Secret with a value size (in base64) of about 43KB.
Kubernetes itself accepts such a Secret just fine, and the Kubernetes docs state that the maximum Secret size is 2MB.
Expected Behavior
Everything works.
Actual Behavior
kopf crashes and stops with the following error (for the full trace, see the repro below):
kopf.reactor.running [ERROR ] Root task 'watcher of secrets' is failed: ValueError('Line is too long')
It seems this has to do with the value of DEFAULT_LIMIT in the aiohttp streams:
https://github.com/aio-libs/aiohttp/blob/6a5ab96bd9cb404b4abfd5160fe8f34a29d941e5/aiohttp/streams.py#L20
Raising it to an appropriate value solves the issue. It would be good to have the possibility to pass a custom limit value to aiohttp.streams.StreamReader:
https://github.com/aio-libs/aiohttp/blob/6a5ab96bd9cb404b4abfd5160fe8f34a29d941e5/aiohttp/streams.py#L109
or, if letting users configure it in kopf is not an option, raise it by default to 2**20 so 2MB passes (translating to 2**21 for self._high_water in the StreamReader).
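For reference, the limit can be reproduced entirely outside of Kubernetes. Below is a minimal, self-contained sketch (the local test server, port, and payload size are made up for illustration): it serves a single ~1MB line and reads it with the same "async for line in response.content" pattern that kopf's watch_objs uses, which fails with the same ValueError.

import asyncio

import aiohttp
from aiohttp import web


async def handler(request):
    # One very long line (~1MB), similar to a single watch-event JSON
    # carrying an oversized Secret.
    return web.Response(body=b"x" * 2**20 + b"\n")


async def main():
    # Start a tiny local server that returns the long line.
    app = web.Application()
    app.router.add_get("/", handler)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 8080)
    await site.start()

    async with aiohttp.ClientSession() as session:
        async with session.get("http://127.0.0.1:8080/") as response:
            try:
                # Same line-by-line pattern as kopf.clients.watching.watch_objs().
                async for line in response.content:
                    print(len(line))
            except ValueError as e:
                print("Reproduced:", e)  # ValueError: Line is too long

    await runner.cleanup()


asyncio.run(main())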
Specifications
Platform: Ubuntu 18.04
Kubernetes version: 1.15
Python version: 3.7.3
Python packages installed: relevant ones are kopf 0.23.2 and aiohttp 3.6.2
Steps to Reproduce the Problem
(using the kopf namespace as an example)
$ cat gen-big-secret.py
import base64
import random
import string

import yaml

SIZE = 2**15  # <-- WHILE BIG ENOUGH, NOT EVEN CLOSE TO 2MB AFTER BASE64 ENCODE

secret = {
    "kind": "Secret",
    "apiVersion": "v1",
    "type": "Opaque",
    "metadata": {
        "name": "too-big-secret",
        "namespace": "kopf",
    },
    "data": {
        "value": base64.b64encode(
            "".join(random.choice(string.ascii_letters) for i in range(SIZE)).encode()
        ).decode(),
    },
}

with open("too-big-secret.yaml", "w") as f:
    yaml.dump(secret, f)
$ python3 gen-big-secret.py
$ ls -lh too-big-secret.yaml
-rw-rw-r-- 1 ubuntu ubuntu 43K Dec 16 15:27 too-big-secret.yaml
$ kubectl apply -f too-big-secret.yaml
Observe failure in the controller:
SSL error in data received
protocol: <asyncio.sslproto.SSLProtocol object at 0x7fab443a1860>
transport: <_SelectorSocketTransport fd=8 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
ssldata, appdata = self._sslpipe.feed_ssldata(data)
File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
self._sslobj.unwrap()
File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
return self._sslobj.shutdown()
ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2609)
[2019-12-16 15:41:45,205] kopf.reactor.running [ERROR ] Root task 'watcher of secrets' is failed: ValueError('Line is too long')
Traceback (most recent call last):
File "/home/ubuntu/kopf-debug/.tox/secrets/bin/kopf", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/cli.py", line 31, in wrapper
return fn(*args, **kwargs)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/cli.py", line 78, in run
vault=__controls.vault,
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/reactor/running.py", line 113, in run
vault=vault,
File "/usr/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/reactor/running.py", line 155, in operator
await run_tasks(operator_tasks, ignored=existing_tasks)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/reactor/running.py", line 341, in run_tasks
await _reraise(root_done | root_cancelled | hung_done | hung_cancelled)
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/reactor/running.py", line 402, in _reraise
task.result() # can raise the regular (non-cancellation) exceptions.
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/reactor/running.py", line 418, in _root_task_checker
await coro
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 101, in watcher
async for event in watching.infinite_watch(resource=resource, namespace=namespace):
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/clients/watching.py", line 64, in infinite_watch
async for event in streaming_watch(resource=resource, namespace=namespace):
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/clients/watching.py", line 94, in streaming_watch
async for event in stream:
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/clients/auth.py", line 80, in wrapper
async for item in fn(*args, **kwargs, session=session):
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/kopf/clients/watching.py", line 161, in watch_objs
async for line in response.content:
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/aiohttp/streams.py", line 39, in __anext__
rv = await self.read_func()
File "/home/ubuntu/kopf-debug/.tox/secrets/lib/python3.7/site-packages/aiohttp/streams.py", line 322, in readline
raise ValueError('Line is too long')
ValueError: Line is too long
Note that we weren't even actually watching this very Secret (on.create(.., labels=...)), but since kopf does label filtering client-side instead of server-side, this still happens.
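For context, a handler of roughly this shape was registered (the handler name and label key/value below are illustrative, not our real ones); it does not match the oversized Secret, yet the watcher still crashes because the event is read from the stream before any filtering happens:

import kopf

# Illustrative handler: it filters on labels, so the oversized Secret above
# is never handled by it -- but kopf still reads every Secret event from the
# watch stream and only then filters client-side.
@kopf.on.create('', 'v1', 'secrets', labels={'managed-by': 'our-operator'})
def on_secret_created(body, logger, **kwargs):
    logger.info(f"Secret created: {body['metadata']['name']}")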
Hm. After a quick dive into aiohttp internals: there is no way to configure that limit externally. Specifically, for the StreamReader, the docs say:
User should never instantiate streams manually but use existing aiohttp.web.Request.content and aiohttp.ClientResponse.content properties for accessing raw BODY data.
Neither the client session, nor the client connector, nor the client request/response expose this limit or a stream factory for configuration that could be passed through to the stream reader (or I could not find it). Even internally, they do not pass limit= and simply use the default.
So, it seems, the only way to work around it is to write our own per-line async iterator over the size-limited chunks of the response (response.content.read(), response.content.iter_chunked(), or any of those methods of response.content), while maintaining an in-memory buffer of unbounded or huge size.
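A minimal sketch of such an iterator, assuming it would be fed with response.content from the watch request (the function name and chunk size are placeholders, and the buffer is allowed to grow without bound):

async def iter_lines(content, chunk_size=65536):
    # Re-split the watch stream into lines ourselves, so that a single huge
    # line no longer trips aiohttp's readline() limit. The buffer is kept
    # in memory and can grow as large as the largest single event.
    buffer = b""
    async for chunk in content.iter_chunked(chunk_size):
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            if line:
                yield line
    if buffer:
        yield buffer

In watch_objs, "async for line in response.content" would then become "async for line in iter_lines(response.content)".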