
Adding songs to the queue is slow #57

Open
Phrogz opened this issue Feb 1, 2016 · 7 comments


Phrogz commented Feb 1, 2016

On my live music server:

@mpd.add( song )         # takes about 40ms
`mpc add "#{song.file}"` # takes about  5ms

This may seem trivial, but when dynamically generating the queue with 100 songs this is the difference between waiting 4 seconds versus half a second.

I wonder if this is the @mutex.synchronize slowing me down; I'll be looking at ways to speed up the batch adding of many songs.
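The arithmetic behind that claim, as a quick back-of-envelope check (numbers taken from the timings above):

```ruby
# Projected time to build a 100-song queue at the two observed latencies.
per_song_mpd = 0.040 # seconds per @mpd.add call, as measured above
per_song_mpc = 0.005 # seconds per `mpc add` shell-out
songs = 100
puts format('ruby-mpd: %.1fs  mpc: %.1fs',
            songs * per_song_mpd, songs * per_song_mpc)
# → ruby-mpd: 4.0s  mpc: 0.5s
```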


Phrogz commented Feb 3, 2016

I've fixed this in my fork by implementing command_list. I can now add songs at about 3-4ms per song in batch mode.

I'll try to find time to create a pull request for command_list.
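For anyone unfamiliar with the protocol feature: a rough sketch (not the actual code from the fork) of what an MPD command list looks like on the wire. Commands sent between `command_list_begin` and `command_list_end` get one combined reply, so the client pays one round trip per batch instead of one per song:

```ruby
# Sketch of batching "add" commands via the MPD command-list protocol.
def add_songs_batched(socket, files)
  socket.puts 'command_list_begin'
  files.each { |f| socket.puts %(add "#{f.gsub('"') { '\"' }}") }
  socket.puts 'command_list_end'
  socket.gets # a single "OK" (or "ACK ...") covers the whole batch
end
```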

archseer (Owner) commented:

Hmm, would you mind getting me a profile of that method call? http://tmm1.net/ruby21-profiling/

Would be good to see where we're spending most of the execution time.


Phrogz commented Feb 21, 2016

Stackprof did not work for me: tmm1/stackprof#63

So, instead I manually profiled the send_command method and its three child methods, convert_command, handle_server_response, and parse_response. I did this by wrapping each method in:

t1 = Time.now
# code from the method
last_statement.tap{ $prof[:methodname] << (Time.now-t1) }

and creating a global $prof hash with pre-allocated arrays. The full results can be seen here:
https://docs.google.com/spreadsheets/d/1JNUweNAmfEEqbj7wqTXB6A3hxgYwBlAhw1rmuHosodA/edit?usp=sharing

The summary is:

function_call               avg ms   median ms
----------------------------------------------
send_command                45.863      40.006
  convert_command            0.047       0.044
  handle_server_response    45.777      39.907
  parse_response             0.004       0.004

In other words, ~all (99.8%) of the time is spent waiting on gets from the socket. This profiling code was run in a live environment, where (as normally happens) another process is polling the MPD server status every 500ms.
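The wrapper pattern above, fleshed out into a self-contained harness (the `timed` helper and method names here are illustrative, not the exact code I ran):

```ruby
# $prof accumulates per-call durations keyed by method name.
$prof = Hash.new { |h, k| h[k] = [] }

# Wrap any block and record its wall-clock duration, returning its value.
def timed(name)
  t1 = Time.now
  yield.tap { $prof[name] << (Time.now - t1) }
end

# Summarize recorded timings as avg/median in milliseconds.
def report
  $prof.map do |name, times|
    sorted = times.sort
    median = sorted[sorted.size / 2]
    avg = times.sum / times.size
    format('%-24s avg %8.3fms  median %8.3fms', name, avg * 1000, median * 1000)
  end
end

# Usage inside the instrumented method, e.g.:
#   timed(:handle_server_response) { socket.gets }
```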


Phrogz commented Feb 21, 2016

Using a different profiling mode with stackprof I got results from it. Not sure how much to trust them, though, since the results are so different for intervals of 1000µs (1ms), 500µs, 100µs, 50µs, and 5µs:

==================================
  Mode: wall(1000)
  Samples: 131 (97.13% miss rate)
  GC: 1 (0.76%)
==================================
     TOTAL    (pct)     SAMPLES    (pct)     FRAME
       107  (81.7%)         107  (81.7%)     MPD#handle_server_response
        12   (9.2%)          12   (9.2%)     MPD::Parser#quotable_param
       127  (96.9%)           3   (2.3%)     block in MPD#send_command
        15  (11.5%)           3   (2.3%)     block in MPD::Parser#convert_command
         2   (1.5%)           2   (1.5%)     MPD#socket
        16  (12.2%)           1   (0.8%)     MPD::Parser#convert_command
       130  (99.2%)           1   (0.8%)     MPD::Plugins::Queue#add
         1   (0.8%)           1   (0.8%)     Set#include?
       130  (99.2%)           0   (0.0%)     block in RubyLex#each_top_level_statement
       130  (99.2%)           0   (0.0%)     block (2 levels) in IRB::Irb#eval_input


==================================
  Mode: wall(500)
  Samples: 120 (98.68% miss rate)
  GC: 0 (0.00%)
==================================
     TOTAL    (pct)     SAMPLES    (pct)     FRAME
       108  (90.0%)         108  (90.0%)     MPD#handle_server_response
         6   (5.0%)           6   (5.0%)     MPD::Parser#quotable_param
       120 (100.0%)           2   (1.7%)     block in MPD#send_command
         8   (6.7%)           2   (1.7%)     MPD::Parser#convert_command
         1   (0.8%)           1   (0.8%)     MPD#socket
         1   (0.8%)           1   (0.8%)     MPD::Parser#parse_response
       120 (100.0%)           0   (0.0%)     Object#irb_binding
       120 (100.0%)           0   (0.0%)     IRB::WorkSpace#evaluate
       120 (100.0%)           0   (0.0%)     IRB::Context#evaluate
       120 (100.0%)           0   (0.0%)     block (2 levels) in IRB::Irb#eval_input


==================================
  Mode: wall(100)
  Samples: 193 (99.58% miss rate)
  GC: 1 (0.52%)
==================================
     TOTAL    (pct)     SAMPLES    (pct)     FRAME
       113  (58.5%)         113  (58.5%)     MPD#handle_server_response
        37  (19.2%)          37  (19.2%)     MPD::Parser#quotable_param
        54  (28.0%)          12   (6.2%)     MPD::Parser#convert_command
       185  (95.9%)          12   (6.2%)     block in MPD#send_command
        42  (21.8%)           5   (2.6%)     block in MPD::Parser#convert_command
         5   (2.6%)           5   (2.6%)     MPD#socket
       192  (99.5%)           4   (2.1%)     MPD::Plugins::Queue#add
         2   (1.0%)           2   (1.0%)     Set#include?
         4   (2.1%)           2   (1.0%)     MPD::Parser#parse_response
       192  (99.5%)           0   (0.0%)     #<Module:0x000000021b7298>.start


==================================
  Mode: wall(50)
  Samples: 306 (99.66% miss rate)
  GC: 4 (1.31%)
==================================
     TOTAL    (pct)     SAMPLES    (pct)     FRAME
       128  (41.8%)         125  (40.8%)     MPD#handle_server_response
        72  (23.5%)          72  (23.5%)     MPD::Parser#quotable_param
       287  (93.8%)          46  (15.0%)     block in MPD#send_command
       103  (33.7%)          19   (6.2%)     MPD::Parser#convert_command
        19   (6.2%)          19   (6.2%)     MPD#socket
        84  (27.5%)          12   (3.9%)     block in MPD::Parser#convert_command
         3   (1.0%)           3   (1.0%)     Set#include?
         6   (2.0%)           3   (1.0%)     MPD::Parser#parse_response
       302  (98.7%)           3   (1.0%)     MPD::Plugins::Queue#add
       302  (98.7%)           0   (0.0%)     block in RubyLex#each_top_level_statement


==================================
  Mode: wall(5)
  Samples: 28041 (97.16% miss rate)
  GC: 59 (0.21%)
==================================
     TOTAL    (pct)     SAMPLES    (pct)     FRAME
      9020  (32.2%)        9020  (32.2%)     MPD#socket
      8348  (29.8%)        8271  (29.5%)     MPD#handle_server_response
      3354  (12.0%)        3354  (12.0%)     MPD::Parser#quotable_param
      2329   (8.3%)        2329   (8.3%)     Set#include?
      4984  (17.8%)        1630   (5.8%)     block in MPD::Parser#convert_command
      1453   (5.2%)        1453   (5.2%)     MPD#connected?
      3089  (11.0%)         760   (2.7%)     MPD::Parser#parse_response
     21574  (76.9%)         472   (1.7%)     block in MPD#send_command
      5387  (19.2%)         403   (1.4%)     MPD::Parser#convert_command
     27975  (99.8%)         120   (0.4%)     MPD::Plugins::Queue#add


Phrogz commented Feb 21, 2016

Following up from discussion over on the stackprof issue: the "97% miss rate" indicates that the profiler is not able to see most of the code, because it's being blocked by socket.gets. So, the stackprof results are not great in this case.

The only thing it did point out was that handle_server_response is calling MPD#socket for each line of response. Though this is just an ||=, it does have the overhead of a method call. In b01adf49 I've patched the method to cache the socket at the beginning of the call.
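A minimal sketch of the shape of that patch (not the verbatim diff; the class and response handling are simplified stand-ins):

```ruby
require 'stringio'

class Client
  def initialize(io)
    @io = io
  end

  # Before the patch, each loop iteration went through this memoized
  # accessor; the ||= is cheap, but the method dispatch is not free.
  def socket
    @socket ||= @io # stands in for the lazy connect in the real client
  end

  def handle_server_response
    sock = socket # look it up once, then use the local inside the loop
    msg = +''
    while (line = sock.gets)
      break if line == "OK\n"
      raise line if line.start_with?('ACK')
      msg << line
    end
    msg
  end
end

client = Client.new(StringIO.new("file: a.mp3\nTime: 215\nOK\n"))
client.handle_server_response # => "file: a.mp3\nTime: 215\n"
```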


archseer commented Mar 5, 2016

I think this is most likely round-trip latency, then, since we wait for a response to each command. I suspect what mpc does is simply ignore the response from the server entirely.
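Speculative sketch of that difference (not mpc's actual source): one blocking round trip per command, versus writing everything first and draining the replies afterwards:

```ruby
# What ruby-mpd effectively does: one round trip per song.
def add_each(socket, files)
  files.each do |f|
    socket.puts %(add "#{f}")
    socket.gets # blocks for a full round trip before the next song
  end
end

# A pipelined variant: send all commands, then read all replies.
def add_pipelined(socket, files)
  files.each { |f| socket.puts %(add "#{f}") } # no waiting in between
  files.map { socket.gets }                    # drain replies at the end
end
```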


archseer commented Mar 5, 2016

You could also try using a UNIX socket; I think those should be faster than TCP for local connections.
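A sketch of that suggestion. The UNIX socket path here is an assumption — it is whatever `bind_to_address` points at in your mpd.conf; 6600 is MPD's default TCP port:

```ruby
require 'socket'

# Prefer MPD's UNIX socket when it exists, falling back to TCP.
def mpd_socket(unix_path: '/run/mpd/socket', host: 'localhost', port: 6600)
  if File.socket?(unix_path)
    UNIXSocket.new(unix_path) # skips the TCP loopback stack entirely
  else
    TCPSocket.new(host, port)
  end
end

# sock = mpd_socket
# sock.gets # MPD greets with a version line such as "OK MPD 0.19.0"
```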
