Calls from Python are ~160 µs #24
Comments
There's been zero attention to performance, but SBCL is certainly doing a lot of work behind the scenes to make everything work smoothly. Performance-wise, there's hash-table overhead for every handle created, plus a fair bit of overhead for context switching in the Lisp callable machinery.
I do not deny that call overhead may be worth improving, but I would prefer to have a use case that is genuinely and seriously hampered by the current performance characteristics, along with a reasonable expectation of what the performance ought to be, before branding the current iteration as not good enough.
One way to mitigate call overhead is to move loops into Lisp where possible. One could also treat the Lisp/Python boundary as akin to system calls in terms of overhead. Agree with @stylewarning that the adjective 'too high' makes no sense in context. Can you measure this overhead against a plain Python FFI call into a C function? There's also, I believe, the additional overhead of creating a new thread on each Lisp call invocation, which is probably unnecessary.
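To act on the suggestion above, here is one way to measure plain-Python FFI overhead as a baseline: time raw `ctypes` calls into a C library function and compute the per-call cost. This is a sketch, not part of the project; it assumes a POSIX system (where `CDLL(None)` exposes libc symbols) and uses `strlen` purely as a cheap stand-in for "an empty C function".

```python
import ctypes
import time

# POSIX only: CDLL(None) gives access to symbols already loaded into
# the process, including libc's strlen.
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

N = 100_000
s = b"123123"

start = time.perf_counter()
for _ in range(N):
    libc.strlen(s)
per_call_us = (time.perf_counter() - start) / N * 1e6

print(f"ctypes strlen: {per_call_us:.3f} us per call")
```

On typical hardware this lands in the fraction-of-a-microsecond to low-microsecond range, which gives a concrete floor to compare the ~160 µs Lisp-call figure against.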
The benchmarking suite reveals the following timings:
Pasting some additional (albeit less relevant) benchmarks here from an internal repo:
After testing, we found that the interaction between Python and SBCL is too expensive. Is there a solution for this now?
```python
from ctypes import byref
import time

# `etl` is the ctypes handle to the SBCL-built shared library,
# loaded elsewhere.
expr = byref(etl.expr_type())
s = "123123".encode('utf-8')
stime = time.time()
for _ in range(10000):
    etl.etl_process(s, expr)  # empty SBCL function implementation
print(time.time() - stime)  # --> 1.56 s
```
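The measured 1.56 s is consistent with the ~160 µs per-call figure in the issue title: 10,000 boundary crossings at 160 µs each is 1.6 s. A back-of-envelope sketch of why the earlier "move loops into Lisp" suggestion helps (the batched entry point is hypothetical, not an existing API):

```python
N = 10_000
per_call_overhead_s = 160e-6  # ~160 us per Python->Lisp call, per the title

# Looping in Python: one boundary crossing per item.
looped_s = N * per_call_overhead_s

# Hypothetical batched call: the loop lives in Lisp, one crossing total.
batched_s = 1 * per_call_overhead_s

print(f"looped: {looped_s:.2f} s, batched: {batched_s:.5f} s")
# looped: 1.60 s -- matches the ~1.56 s measured above
```

If a batched entry point were exposed on the Lisp side, the per-item work would still run, but the fixed crossing cost would be paid once instead of N times.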