
Calls from Python are ~160 µs #24

Open
CL98K opened this issue Mar 19, 2022 · 5 comments

CL98K commented Mar 19, 2022

Testing shows that calls between Python and SBCL are very expensive. Is there a solution for this yet?

```python
import time
from ctypes import byref

# `etl` is the reporter's ctypes binding to the compiled SBCL library.
expr = byref(etl.expr_type())
s = "123123".encode('utf-8')

stime = time.time()
for _ in range(10000):
    etl.etl_process(s, expr)  # SBCL empty function implementation

print(time.time() - stime)  # --> 1.56 s, i.e. ~156 µs per call
```


karlosz commented Mar 19, 2022

There's been zero attention to performance, but certainly SBCL is doing a lot of work behind the scenes to make everything work smoothly.

Performance-wise, there is a hash-table lookup for every handle created, and beyond that a fair bit of overhead for context switching in the Lisp callable machinery.

stylewarning (Member) commented

I do not deny that call overhead may be worth improving, but I would prefer to see a use case that is genuinely and seriously hampered by the current performance characteristics, along with a reasonable expectation of what the performance ought to be, before branding the current iteration as not good enough.

stylewarning changed the title from "Python Call overhead is too high" to "Calls from Python are ~160 µs" on Mar 20, 2022

karlosz commented Mar 20, 2022

One way to mitigate call overhead is to move loops into Lisp where possible. One could also treat the Lisp/Python boundary as akin to system calls in terms of overhead.

I agree with @stylewarning that the adjective "too high" makes no sense without context. Can you measure this overhead against plain Python FFI into a C function?

I also believe there is additional overhead from creating a new thread on each Lisp call invocation, which is probably unnecessary.
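A minimal sketch of the baseline measurement suggested above: timing 20,000 plain ctypes calls from Python into a C function. Since the reporter's `etl` library is not public, libm's `sqrt` stands in here as the C target; the absolute numbers will vary by machine, but the structure mirrors the benchmark in the original report.

```python
import ctypes
import ctypes.util
import time

# Load the system math library as a stand-in C target.
# (find_library("m") resolves on Linux/macOS; Windows differs.)
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

start = time.perf_counter()
for _ in range(20_000):
    libm.sqrt(2.0)
elapsed = time.perf_counter() - start
print(f"20000 ctypes calls into C: {elapsed:.6f} s")
```

Comparing this figure against the Python-to-Lisp timings isolates how much of the cost is ctypes itself versus the Lisp callable machinery.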


karlosz commented May 3, 2022

The benchmarking suite reveals the following timings:

```
                          Timing pure python overhead
--------------------------------------------------------------------------------
timing void -> void for 20000 repetitions:                   0.0012679100036621094
timing void -> False for 20000 repetitions:                  0.001280069351196289


                          Timing python -> C overhead
--------------------------------------------------------------------------------
timing void -> void for 20000 repetitions:                   0.004410982131958008
timing void -> void* for 20000 repetitions:                  0.007209062576293945


                    Timing python -> C -> Lisp FFI overhead
--------------------------------------------------------------------------------
timing void -> void for 20000 repetitions:                   0.2764558792114258
timing void -> nil for 20000 repetitions:                    0.47957706451416016
timing string -> lisp_obj for 20000 repetitions:             0.46479296684265137
timing lisp_obj -> lisp_obj for 20000 repetitions:           0.5968728065490723
```


kartik-s commented Aug 21, 2023

Pasting some additional (albeit less relevant) benchmarks here from an internal repo:

```
$ ../run-sbcl.sh --script fcb-windows.lisp
control (inline, no call)
elapsed time: 0.119883 seconds

regular C call (C -> C)
elapsed time: 0.124107 seconds

alien callback (C on Lisp thread -> Lisp)
elapsed time: 0.317698 seconds

foreign callback (C on foreign thread -> Lisp)
elapsed time: 393.213626 seconds
```
