Support large ensembles (> 100 connected clients) #339
Comments
How many clients do you have to support? Have you tried it out already? |
I would like to support 120. And I am trying to think not just about the server capacity, but about the client user interface as well. So far the largest group I've been in is 30. The server had 8 CPUs, but my impression is that one of them was at 100% and the others were mostly idle. The user interface would have been improved if it were a grid instead of a single line. |
This is the only meaningful solution. The Opus encoding requires the most processing time. This could easily be calculated on different CPU cores.
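A minimal standalone sketch of that idea (hypothetical names, not the actual Jamulus code): since each connected client's encode step is independent of the others, a plain OpenMP loop can spread the per-client work over whatever cores are free.

```cpp
// Toy illustration only: "ClientBuffer" and "EncodeAndSend" are hypothetical
// stand-ins for the real per-client mix buffer and the Opus encode + send step.
#include <omp.h>
#include <cstdio>
#include <vector>

struct ClientBuffer { std::vector<short> samples; };

static void EncodeAndSend ( const ClientBuffer& buf )
{
    (void) buf; // real code would call opus_encode() here and send the packet
}

int main()
{
    // e.g. 120 connected clients, one small audio block each
    std::vector<ClientBuffer> clients ( 120, ClientBuffer { std::vector<short> ( 128 ) } );

    // every iteration is independent, so it can run on whichever core is free
    #pragma omp parallel for schedule ( dynamic )
    for ( int i = 0; i < static_cast<int> ( clients.size() ); i++ )
    {
        EncodeAndSend ( clients[i] );
    }

    std::printf ( "encoded %zu client streams on up to %d threads\n",
                  clients.size(), omp_get_max_threads() );
    return 0;
}
```

Compiled with `-fopenmp`, the loop iterations get distributed across the available cores; the open question, as discussed further down, is how much scheduling overhead that adds per audio block.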
The biggest challenge in that case is supporting all 120 musicians in setting up their Jamulus clients correctly. I guess most of them will use ASIO4All together with the laptop's built-in sound card. This will give them bad latencies and they will not have much fun playing together.
Are you sure every one of the 120 musicians will take the time to adjust all 120 faders? I don't think so. If you have that many musicians, each of them will have to adjust their input volume so that they all have about the same level. Then there is no need to touch any fader. |
If you make it, you should apply for here: https://www.guinnessworldrecords.com/business-marketing-solutions/record-event-formats/online-records ;-) |
Indeed, one thing I liked about the 'multiple server' approach is that musicians would only see faders for the others in their section, a section leader would control the section's mix that got sent upstream to the conductor's server, and the conductor would see only one fader per section. Nobody would ever need to deal with 120 faders on a single screen. |
That is not a good solution in my opinion. You'll add additional latency and you will also have problems with synchronization. You need a single point where all the audio streams are mixed together; otherwise you would have to compensate for the different delays. A single server is the way to go here.
There exists already a similar feature request: #202 |
What if the up- and downstream between servers were uncompressed audio and didn't go through Opus? Gigabit ethernet in the hosting centre should be able to handle that, I think. |
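As a rough sanity check on that bandwidth claim (assuming 48 kHz stereo 16-bit PCM): one uncompressed stream is 48,000 × 2 channels × 2 bytes ≈ 1.5 Mbit/s, so even 120 such streams add up to roughly 185 Mbit/s per direction, which would indeed fit comfortably within a gigabit link.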
I am sure that with the OMP implementation and 8 CPU cores in your server you will be able to serve 120 clients. |
Yes you are right about the multi-processing on the server. |
No, that is not possible. Each client has its own mix. And I think that is useful, since in a real orchestra you hear the instruments which are close to you much louder than the others which are far away from you. In your personal mix you can configure the same. |
Just for your information, during the World Jam on Saturday I bought a dedicated Google VM with 8 cores and 16 GB RAM plus a 200 GB SSD. We were able to reach about 35 people in the room all jamming together; the server was running at 110% CPU utilisation across all 8 cores, but the server didn't crash as it had done on the 4-core version I was running the previous week at 28 people. |
That is very interesting. The main Jamulus processing routine is a single CPU core implementation. There is no multi-threading implemented yet. Now the question is why you are seeing 8 cores busy when the Jamulus server runs. Maybe the Google servers do something smart with the running applications? Maybe they split the work themselves somehow and distribute it across all available processors. But if this is the case, how do they do it? It could also be that the CPU monitoring tool simply shows incorrect data... Anyway, having 35 clients connected to the Jamulus server is very impressive :-). |
Another input: This user reports with 35 connected clients a CPU usage of only 17%: https://sourceforge.net/p/llcon/discussion/musicianslounge/thread/4702d9fae1/#86b5 |
I did not know that you have a GitHub account and read the Issues. Thanks for your screen shot. I have modified it by making all the IP addresses invisible (for privacy reasons).
Please report here when you have done this test. |
Oh thank you! That was a NEWBIE in action. Much appreciated. I'll let you know how it goes. Looking to get all Windows users off of ASIO4All. Nothing but problems with ASIO4All for the novice users. I conduct a 70-piece community college orchestra and a 25-piece big band. We (and every other music educator in the world) are wondering how to rehearse our groups when school begins again in August. Thank you for your hard work. |
I suppose this is an 8-core CPU. The 13% usage for Jamulus represents 100% of a single core. On a Linux server, under a similar load, 'top' would report 100% usage (out of a total possible 800%). |
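Put differently: on an 8-core machine, one fully busy thread shows up as about 100% / 8 ≈ 12.5% in a whole-machine view, which matches the 13% reading mentioned above.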
It's good but what about this:
Yes, this would be a possible solution. But I am more a fan of "keep it simple and stupid" and prefer little incremental changes to support a new use case. The "slim fader" would be much easier to implement and would be a straightforward change in the Jamulus software. |
Let's see what JimMooy reports when he has finished his second test. @JimMooy Maybe next time you should also make a screen shot of the individual CPU cores' load, like this: |
Yes, and for small screens or large groups it will be a very welcome improvement. Do you think we could have the musician's initial (the first letter of their name) instead of the instrument number? |
Are you referring to my screen shot? The number in my screen shot is actually the name. I just used a number as an example but you could use any number or letter there. |
I just added the code to the Git master. If you have the possibility to compile the code and want to test it, you can do it now. |
I think you should keep "Slim Channel" as your rap name :) It looks great on Mac and Linux. Could be slimmed down a bit in the future by not showing the full-length name. And... I see the icons vary quite a bit in width. |
Good work! I would name it "Compact Channel View" or something similar. Also, for this use case it's probably valuable to force the channel widths to the minimum (i.e. in this case, all forced to a width like "Eli", "Vik" or "V" in the screenshot above), and if the name is longer, show the details via a hover tooltip. |
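A small hypothetical Qt sketch of that suggestion (not the actual Jamulus fader code, and the width value is just an assumption): force a narrow fixed width, elide the visible name, and keep the full name reachable through a tooltip.

```cpp
#include <QApplication>
#include <QLabel>

int main ( int argc, char** argv )
{
    QApplication app ( argc, argv );

    const QString fullName   = "Violoncello 2"; // example of a long musician name
    const int     stripWidth = 24;              // assumed minimum channel width in pixels

    QLabel label;
    label.setFixedWidth ( stripWidth );
    label.setToolTip ( fullName ); // full name stays available on hover
    label.setText ( label.fontMetrics().elidedText ( fullName, Qt::ElideRight, stripWidth ) );
    label.show();

    return app.exec();
}
```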
I was thinking of something like a pseudo-device "File" to select in the Settings menu; you would open a wav/mp3/whatever is easier to implement from there, and it would be streamed in a loop once connected. |
I do not think it makes sense to implement this. When I do my multi-threading tests, I am running multiple Jamulus instances under Windows which works just fine for me. |
As written in #375, if I use |
Confirmed, the audio is good now, for me. |
OK, sounds fair. Do you mind detailing how you test with multiple clients, so we can help with testing and reproduce it? Do you use the same audio source for all clients? Does each client use a different profile, or just a different --clientname? |
Yes, it is the same audio source. That was ok for my tests because I was only interested in the CPU usage when I was working on the OMP implementation.
Well, basically I just started the Jamulus client multiple times. That works with my ASIO driver, fortunately. |
Some observations with the multithreaded server, on two different laptops. With just one client connected: with 4 cores there are 5 processes, with 8 cores there are 9 processes. On the server with 4 cores, I had 10 people on the server. CPU was around 295%. I suspect that was 3x95%, but I wasn't watching closely. |
Thanks for the info. Next step is to find out how to reduce the OMP overhead to get the CPU load much lower. Let's see if that is possible... |
Hi Volker, I have a chain of private servers for choral use on AWS and GCP and am wondering... |
Maybe the solution is to identify a higher loop/fork point earlier in the process, and let every thread process its own timer (or maybe use the OpenMP task directive somehow?). That way the fork/join thread creation/destruction is only processed once per server session. |
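A rough standalone sketch of that "fork once per session" idea using the OpenMP task directive (hypothetical names, not a patch against the real server code): the thread team is created a single time, and for every audio block one task per channel is handed out from a single driving thread.

```cpp
#include <omp.h>
#include <cstdio>

static void ProcessChannel ( int ch )
{
    (void) ch; // real code would decode, mix and encode this channel here
}

static void RunServerLoop ( const int numChannels, const int numBlocks )
{
    #pragma omp parallel       // thread team is created only once, here
    {
        #pragma omp single     // one thread drives the (simulated) timer loop
        for ( int block = 0; block < numBlocks; block++ )
        {
            for ( int ch = 0; ch < numChannels; ch++ )
            {
                #pragma omp task firstprivate ( ch )
                ProcessChannel ( ch );
            }
            #pragma omp taskwait // all channels of this block must finish
        }
    }
}

int main()
{
    RunServerLoop ( 120, 10 ); // e.g. 120 channels, 10 audio blocks
    std::printf ( "done\n" );
    return 0;
}
```

Whether task creation per block is actually cheaper than the current fork/join per timer callback would still need to be measured.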
See my above comment: #339 (comment) |
I am looking for a simple solution. Are you experienced with OpenMP Task directives? |
Not really, but I'll keep looking for a workable solution on thread reusability and run some tests. |
Please note: please try to stick to what I said above: "I am looking for a simple solution." Have you tried out the current OMP implementation with multiple CPU cores and a lot of connected clients? I know that the OMP overhead is significant, but I would also like to know how well the CPU load is now spread over multiple cores. |
Thanks @corrados. I was looking at the code these past days and thinking about the CPU/IO load increase when a high number of clients is connected, and I wonder if anyone has run a profile of the app on that test case to verify where the critical points are. BTW, should we move to a specific thread on server performance to discuss everyone's findings? |
Just create a new one if you like. |
I like it as an extremely compressed UI for specific use cases (like large ensembles), but as the controls' labels no longer make the actions explicit, it would be good if you could add hover tooltips. |
These hover tooltips are already implemented.
I would want to avoid adding a new skin for that. If you only have a few musicians connected, then you can use the Normal skin. |
To make a cross reference: brynalf successfully served 100 clients in his local area network with his 32 logical processor PC using the latest Git master code: #455 (comment) |
There is a new experimental server mode in development to support large ensembles, see: #599. |
With the latest changes to the multithreading code it is now possible to support >100 clients. So the initial request of this issue is solved.
This has been worked on here: https://github.com/corrados/jamulus/tree/feature_singlemixserver. Of course we still have outstanding issues in that area, but these should be discussed in Issue #455. So I'll close this issue now. Please continue the discussion about this topic in Issue #455. |
For anyone trying to start a Jamulus server on macOS with more than 10 participants, this is the command you need to run in your terminal:
|
I would like to open a discussion about improving the Jamulus user experience for large ensembles. My understanding is that the current Jamulus server will use only a single CPU core, and that it generates a personal mix for each connected client.
One potential solution could be a server mode in which a single mix is generated; the server would then have less work to do and could therefore handle more connected clients (see the sketch after the third proposal below). I imagine the client who occupies the first slot on the server would be in control of the mix for all participants.
A second potential solution would be the ability for a server (with mixer controls on the server UI) to also act as a client to another server. In this case all the violins could join server A, all the cellos could join server B, and servers A and B could join server Z. The conductor would connect his client to server Z and have a mixer control for each section. In this solution, larger ensembles would simply require more servers. Delay would be mitigated by having multiple servers at the same hosting centre, or even on the same multi-core VM, so that the ping time among all the servers is effectively zero.
A third potential solution would be to have the server use multiple threads to generate mixes in parallel.
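For the first proposal, here is a minimal standalone sketch of what a "single mix" mode could look like (simplified to mono float blocks of equal size, hypothetical names, not actual Jamulus code): all inputs are summed into one shared buffer, which would then be encoded once and sent unchanged to every connected client.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

using Block = std::vector<float>;

// sum all client inputs into one shared mix (equal block sizes assumed)
static Block MakeSingleMix ( const std::vector<Block>& inputs, const std::size_t blockSize )
{
    Block mix ( blockSize, 0.0f );

    for ( const Block& in : inputs )
    {
        for ( std::size_t i = 0; i < blockSize; i++ )
        {
            mix[i] += in[i]; // gains would be set by whoever controls the shared mix
        }
    }

    return mix; // encode this one buffer once, then send it to all clients
}

int main()
{
    // e.g. 120 connected clients, 64-sample blocks
    const std::vector<Block> inputs ( 120, Block ( 64, 0.01f ) );
    const Block mix = MakeSingleMix ( inputs, 64 );

    std::printf ( "mixed %zu inputs, first sample = %f\n", inputs.size(), mix[0] );
    return 0;
}
```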
I would appreciate hearing what people think of these approaches, and I would like to hear about any other approaches that people can think of.