Description
I recently ran into a recursion issue in the downstream project graphql-python/graphql-core (string input graphs with depths of roughly >200 fail).
It seems to be a problem inherent to the base implementation's recursive design. Token limiting does not help either, because such deep graphs fail at roughly the same depth in the JS implementation.
There is a simple fix: the "stack free" (not literally stack free) generator pattern. You save the current state in a generator and push nodes and new function calls onto two explicit stacks.
Here is an example (Python, sorry, not ported yet):
from functools import partial

def _foo(level=0, *, get_result):
    # Generator version of a recursive function: instead of calling itself,
    # it yields a partial that the driver turns into a new generator.
    if level > 100000:
        yield level
        return
    print("1", level)
    yield partial(_foo, level=level + 1)  # the "recursive call", handled by the driver
    print("2", get_result())              # result of the nested call, popped from the result stack
    yield level

def foo():
    result_stack = []
    fn_stack = [_foo(get_result=result_stack.pop)]
    while fn_stack:
        cur_el = fn_stack[-1]
        try:
            next_el = next(cur_el)
            if isinstance(next_el, partial):
                # a new "call": push a fresh generator onto the function stack
                fn_stack.append(next_el(get_result=result_stack.pop))
            else:
                # a result: push it onto the result stack
                result_stack.append(next_el)
        except StopIteration:
            fn_stack.pop()
    return result_stack.pop()

print("final", foo())
Links:
- graphql-python/graphql-core#216 ("construction fails for depth around > 200 / resource exhaustion?"), the issue reported downstream
- https://github.com/devkral/graphene-protector, the project in which I use the generator pattern
Activity
devkral commented on Mar 20, 2024
I first misread the code. You seem to be affected in the same way.
One way to prevent the stack issue would be to transform many of the calls into generators.
yaacovCR commented on Mar 29, 2024
I think a max-depth guard as suggested in the linked issue sounds like a reasonable first step.
This project is open source, so a PR to introduce “stack-free” execution should be welcome. One issue to consider is that the reference implementation should try to match the specification, which has a recursive design, but that is not necessarily a complete blocker; it might depend on how transformative the PR would be. It could also depend on the performance profile of the change. If it comes with a significant improvement, for example, I imagine it would be easier to advocate for its inclusion.
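For concreteness, the max-depth guard mentioned above could be a small iterative pre-check that rejects overly nested input before any recursive machinery runs. This is a hypothetical sketch in Python (to match the example in the issue body); the names check_depth and MAX_DEPTH and the limit of 200 are my assumptions, not part of graphql-core or graphql-js.

MAX_DEPTH = 200  # assumed value, would need tuning against real-world queries

def check_depth(root):
    # Iterative walk, so the guard itself cannot overflow the call stack.
    stack = [(root, 0)]
    while stack:
        node, level = stack.pop()
        if level > MAX_DEPTH:
            raise ValueError(f"input nested deeper than {MAX_DEPTH} levels")
        if isinstance(node, dict):
            children = node.values()
        elif isinstance(node, (list, tuple)):
            children = node
        else:
            continue
        stack.extend((child, level + 1) for child in children)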
devkral commented on Apr 12, 2024
What do you think about the idea of using a Rust parser for more performance and to easily synchronize the fix between both (and probably more) projects?
Rust can be compiled to WebAssembly, so there is no compatibility issue here (and for Python there are also bindings).
devkral commented on Apr 12, 2024
I lack knowledge of Rust and can only learn it slowly (not much time).