High CPU Usage with Audio playback #185
Yeah, I made sure to build it with. Here is the full benchmark that I got on the EC2 instance:
PR #179 does seem to improve this significantly, with per-stream CPU usage at 1.6% and peaks of 1.9%. That's much better, but still 0.3% per stream behind Lavalink, though that is probably good enough for most applications.
Thanks for running the benchmarks locally. That's pretty disappointing (but I'm glad #179 has a big effect). I can think of some remaining places that are less than optimal (e.g., event-thread wakeups regardless of handler registration). The only other thing that comes to mind is maybe the async->sync adapter adding more memcpys in the Http case. What I can recommend today is setting softclip to false for a marginal improvement -- it's enabled by default to prevent folks from blowing out someone's eardrums by naïvely playing too many audio sources at once. There are one or two things in symphonia that trip us up here: because it works in planar audio, we're doing a lot of copying on WAV to achieve interleaved (WAV) -> planar (mixing) -> interleaved (libopus). I don't think those can be fixed without rearchitecting.
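To illustrate the copy chain described above, here is a minimal sketch in plain Rust. The function names and buffer layout are illustrative, not songbird's actual internals: a WAV decoder yields interleaved samples, the mixer wants planar buffers, and the Opus encoder wants interleaved frames again, so each 20 ms frame takes at least two full copies.

```rust
/// Split interleaved samples [L, R, L, R, ...] into per-channel (planar) buffers.
fn deinterleave(interleaved: &[f32], channels: usize) -> Vec<Vec<f32>> {
    let frames = interleaved.len() / channels;
    let mut planar = vec![Vec::with_capacity(frames); channels];
    for frame in interleaved.chunks_exact(channels) {
        for (ch, &sample) in frame.iter().enumerate() {
            planar[ch].push(sample);
        }
    }
    planar
}

/// Merge planar buffers back into one interleaved buffer.
fn interleave(planar: &[Vec<f32>]) -> Vec<f32> {
    let channels = planar.len();
    let frames = planar[0].len();
    let mut out = Vec::with_capacity(frames * channels);
    for i in 0..frames {
        for ch in 0..channels {
            out.push(planar[ch][i]);
        }
    }
    out
}

fn main() {
    // One tiny stereo buffer: L, R, L, R.
    let wav = vec![0.1_f32, -0.1, 0.2, -0.2];
    let planar = deinterleave(&wav, 2); // copy #1: interleaved -> planar (for mixing)
    let opus_in = interleave(&planar);  // copy #2: planar -> interleaved (for libopus)
    assert_eq!(wav, opus_in);           // the round trip is lossless, just costly
}
```

The cost is not the arithmetic but the extra traversals and allocations per frame, which is why it multiplies with stream count.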
Alright, thanks for the advice.
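For anyone landing here with the same problem, disabling the soft clipper looks roughly like this. This is a sketch only: it assumes a `use_softclip` builder on songbird's `Config` (present on the `next` branch) and serenity's songbird registration helper; check the `Config` rustdoc for the exact method names on your version.

```rust
// Sketch, not verified against a specific release: build a songbird Config
// with the soft clipper disabled for a small per-stream CPU saving.
use songbird::Config;

fn songbird_config() -> Config {
    Config::default()
        // Skip the per-sample soft clip pass; safe if you control gain yourself.
        .use_softclip(false)
}

// With serenity, this would then be registered on the client builder, e.g.:
// client_builder.register_songbird_from_config(songbird_config())
```

As the maintainer notes above, softclipping defaults to on because it protects listeners when many sources play at once; only disable it if you manage volume elsewhere.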
Songbird version: `next` branch and 0.3.2
Rust version (`rustc -V`): 1.69
Serenity/Twilight version: serenity 0.11.5
Description:
I'm getting really high (relatively) CPU usage when playing back audio using either the `next` branch or songbird 0.3.2 (I haven't tested any other versions). It seems to use about 4-10% extra CPU per concurrent stream, which is a bit of a problem because at that rate you may only get 10-20 concurrent streams. I created a flamegraph with `cargo flamegraph` for both the `next` branch and 0.3.2. It seems like a lot of the time is spent on Opus encoding (I'm using a WAV file as a source), but more generally a lot of minor inefficiencies seem to be adding up to a larger performance problem. For a quick comparison with the same load, Lavalink peaks at around 3.3%.
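To make the headroom concrete, here is the back-of-the-envelope arithmetic those numbers imply. It assumes the `top` percentages are a share of total machine capacity, which is my reading rather than something stated in the report:

```rust
// Rough capacity estimate: if the machine has 100% of CPU to spend and each
// stream costs 4-10%, concurrency caps out quickly.
fn max_streams(total_cpu_pct: f64, per_stream_pct: f64) -> u32 {
    (total_cpu_pct / per_stream_pct).floor() as u32
}

fn main() {
    println!("worst case: {}", max_streams(100.0, 10.0)); // 10 streams
    println!("best case:  {}", max_streams(100.0, 4.0));  // 25 streams
}
```

That range brackets the "10-20 concurrent streams" figure above; at Lavalink's ~3.3% peak, the same math gives roughly 30 streams.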
I have attached the flamegraphs below. It does seem like the `next` branch version is somewhat better at CPU utilization, but only by about 2%. I'm measuring CPU usage with the `top` command, if that's important. I'm using an AWS EC2 instance with 8 GB of memory and four Intel cores.

Steps to reproduce:
NEXT:

![flamegraph-next-branch](https://private-user-images.githubusercontent.com/41460735/239649113-984c10ad-b958-4e0f-816a-5b10f7857a85.svg?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0Mjc2MDksIm5iZiI6MTczOTQyNzMwOSwicGF0aCI6Ii80MTQ2MDczNS8yMzk2NDkxMTMtOTg0YzEwYWQtYjk1OC00ZTBmLTgxNmEtNWIxMGY3ODU3YTg1LnN2Zz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjEzVDA2MTUwOVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTE2NzE4YWFhNjg1MTMxNzhkNGE0NDcxNDBiM2Q1MWVhYmQ4OWU4NTA5ZmM2NDEwMmI4OTFmZTlhZmNkZDk5NzAmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.upjrJTij6bbq0VbIHGpqVtWj7w4_0G0DomV6LaasQTE)

0.3.2:

![flamegraph-0 3 2-songbird](https://private-user-images.githubusercontent.com/41460735/239647030-4831ce01-b26c-4496-9372-b6d8198fb538.svg?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk0Mjc2MDksIm5iZiI6MTczOTQyNzMwOSwicGF0aCI6Ii80MTQ2MDczNS8yMzk2NDcwMzAtNDgzMWNlMDEtYjI2Yy00NDk2LTkzNzItYjZkODE5OGZiNTM4LnN2Zz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAyMTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMjEzVDA2MTUwOVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWU0MGU5NGEyY2VlNmRjMTliMGZiYzU2N2RhODNkNTY4MDk3NzdkNDI2ZmJhOTU4MGY5NTY4OTkxMjg4ODQ2NDQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.rmLmmKF5y9kerq68nQRu21hmHYQtKmOsc3BQ0dMDXxs)