
Add parallel=False flag to Group() #272

Merged: 3 commits into supriya-project:main on Nov 10, 2022

Conversation

@gitmarek (Contributor) commented Oct 27, 2022

...to allow allocation of parallel groups on supernova.

Ref.: #gitmarek@cef6ef8#r87738788

Locally, all tests passed, so it looks like I didn't break anything. Before squashing, I'm going to test it manually and wait for supernova to be deployed on GHA so I can try writing some unit tests too...

@josiah-wolf-oberholtzer (Contributor)

Honestly, if you have time, tinker with what's going on in #271 and see if you can make supernova run inside GHA. I won't be able to devote much time to that until next week.

@josiah-wolf-oberholtzer (Contributor)

Testing specifics aside, this looks like the implementation I was thinking of.

If we can't get supernova to run inside GHA in a reasonable timeframe, you can always write tests decorated with pytest.mark.skipif(os.environ.get("CI") == "true") (or something similar) to skip them inside GHA.
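A minimal sketch of that skip, assuming GitHub Actions' standard CI=true environment variable; the test name is hypothetical:

import os

import pytest


# GHA sets CI=true in the environment, so this skips there but still runs locally
@pytest.mark.skipif(os.environ.get("CI") == "true", reason="supernova not available in CI")
def test_parallel_group_allocation():  # hypothetical test name
    ...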

@gitmarek (Contributor, Author)

Tested the code with my local setup. It seems to work fine.

First I ran scsynth (-u sets the UDP port, -l the maximum number of logins):

scsynth -l 8 -u 12345

and then ran the following Python code:

import supriya

# connect to the externally booted server on UDP port 12345
s = supriya.Server().connect(ip_address="127.0.0.1", port=12345)

s.add_group()                # ordinary group (/g_new)
s.add_group(parallel=True)   # parallel group (/p_new)

Then, from sclang, I queried the server (see ref: #gitmarek@cef6ef8#r87840094):

// sclang: 
s = Server(\test, NetAddr("127.0.0.1", 12345), ServerOptions())
s.initTree;
s.queryAllNodes;

and got the following from scsynth in the terminal:

NODE TREE Group 0
   1 group
      1001 group
      1000 group
   2 group
   3 group
   4 group
   5 group
   6 group
   7 group
   8 group

Then I ran supernova:

supernova -u 12345

ran the same Python code and the same sclang query, and got:

NODE TREE Group 0
   0 group
      1 group
         1001 parallel group
         1000 group

Note that scsynth renders the group created via /p_new as an ordinary group (it treats /p_new as a synonym for /g_new), while supernova reports a true parallel group, which is exactly the distinction this flag exposes.

@gitmarek (Contributor, Author)

Started testing it in another branch: gitmarek#5. I'll rebase once #274 or something similar is merged.

@josiah-wolf-oberholtzer (Contributor)

I merged #274 in (thanks!). Go ahead and rebase whenever.

@gitmarek mentioned this pull request Oct 31, 2022
Add parallel group flag to Providers

Remove lint
@gitmarek (Contributor, Author) commented Nov 8, 2022

The same choice between commands.GroupNewRequest and commands.ParallelGroupNewRequest is now handled in three different places. Perhaps it would be better to add parallel: bool to commands.GroupNewRequest itself? That looks practical to me, but it would make this particular request a special case.

Also, I see two ways to test whether the flag works.

  1. Allocate a parallel group and check what OSC message Supriya sent to the server.
  2. Query the server with RequestId.GROUP_DUMP_TREE and read the output from the server's stdout. This way, we can be sure supernova allocates the correct group type, but a test like that would probably require a lot of maintenance.

@josiah-wolf-oberholtzer (Contributor) commented Nov 8, 2022

The pattern I've followed to date is that the low-level "command" classes like GroupNewRequest map one-to-one with the published commands in Server Command Reference. Let's keep it like that and just use the ParallelGroupNewRequest as its own command, switching internally. There's more to type in the "backend", but very little of it is user-facing.

For testing, use the OSC-sniffing technique with server.osc_protocol.capture() as transcript:. There are some examples of this in the realtime tests: https://github.com/josiah-wolf-oberholtzer/supriya/blob/main/tests/realtime/test_Group.py#L1373-L1411. It's enough to know that we sent the /p_new message to the Server. That's our contract. If scsynth or supernova doesn't fulfill that contract on their end, that's not our responsibility. Don't worry about implementing a usable GROUP_DUMP_TREE at the moment.
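For reference, a minimal sketch of that capture technique, assuming a server fixture like the ones the existing realtime tests use; the filtered() call and the message attributes below are assumptions about the test helpers' API, not confirmed signatures:

def test_add_parallel_group(server):
    # record OSC traffic while allocating a parallel group
    with server.osc_protocol.capture() as transcript:
        server.add_group(parallel=True)
    # assumption: filtered() can restrict the transcript to messages sent
    # to the server; the exact signature may differ
    sent = [message for _, message in transcript.filtered(received=False, status=False)]
    # the contract: a /p_new message went out, not /g_new
    assert any(message.address == "/p_new" for message in sent)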

@josiah-wolf-oberholtzer (Contributor) commented Nov 8, 2022

I also want to say that getting GROUP_DUMP_TREE to work is compelling. Testing large node trees with GROUP_QUERY_TREE can overflow the 8192-byte packet limit, making it unusable, and I never finished implementing the /s_info command for querying a single synth's SynthDef and values.

But it's going to be tricky for a couple of reasons:

  • The "sync" Server doesn't continue reading lines from its process output after booting, so we'd have to decide on a mechanism for that; the AsyncServer does read continuously, which makes it a simpler proposition there.
  • There's no "sentinel" value to indicate when the /g_dumpTree output finishes, so some kind of timeout mechanism would be necessary to avoid blocking forever while waiting for input (a rough sketch follows below).
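One possible shape for that timeout mechanism, as a rough sketch; the function name and the queue-plus-thread structure are illustrative, not supriya's actual API:

import queue
import threading


def read_dump_tree(process_stdout, timeout=0.25):
    # Hypothetical helper: /g_dumpTree output has no sentinel line, so we
    # treat `timeout` seconds of silence as end-of-output.
    lines_queue = queue.Queue()

    def reader():
        for line in process_stdout:
            lines_queue.put(line)

    threading.Thread(target=reader, daemon=True).start()
    lines = []
    while True:
        try:
            lines.append(lines_queue.get(timeout=timeout))
        except queue.Empty:
            break  # no new output within `timeout` seconds: assume the dump finished
    return lines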

The diff under review:

if node.parallel:
    request_method = supriya.commands.ParallelGroupNewRequest
else:
    request_method = supriya.commands.GroupNewRequest
Review comment (Contributor):

I like the three-line construction you use elsewhere:

request_method = supriya.commands.GroupNewRequest
if self._parallel:
    request_method = supriya.commands.ParallelGroupNewRequest

@josiah-wolf-oberholtzer (Contributor) left a review:

LGTM!

Let me know if you're satisfied and I'll merge and cut a new release.

@gitmarek (Contributor, Author)

👍

@josiah-wolf-oberholtzer merged commit 87222ec into supriya-project:main Nov 10, 2022
@josiah-wolf-oberholtzer (Contributor)

@gitmarek I'm holding off on releasing until I get OSX builds working again in GHA.

@gitmarek deleted the pargroup branch November 17, 2022 12:34