Add an /n_status Message and equivalent messages #13

Open
wants to merge 2 commits into main

Conversation

@muellmusik commented Jan 12, 2021

Summary

Add an /n_status message to the OSC command interface, to allow clients to check whether a node currently exists and whether it is running or paused.

Motivation

Given the complications of asynchronous communication, it can be useful to check whether a node already or still exists. (A recent case arose in the VST interface). Currently there is no straightforward way to do this. One can send an /n_query, but if the node does not exist this causes a FAILURE message to be posted (in this case the node not existing is not a failure, just information we need). This seems like a basic thing that we should be able to query. It would also be useful in multi-client situations, where another client may have created a node.

The existing NodeWatcher approach does not suffice here, since there is no way to reliably register a node that may not exist. Enabling this would require such a message.

As we're doing this we may as well include info on whether the node is running or paused. Anything else?

Additional context moved here by request:

The specific context this arose from was using the VST plugin extensions with NodeProxies. It's complicated, but you can see the discussion here:

https://www.listarc.bham.ac.uk/lists/sc-users/msg69122.html

tldr: NodeProxies defer node creation to a scheduled event on their internal clock (I assume for quantisation purposes, but there may be other reasons). This makes the usual approach of wrapping code in a bundle with /sync messages impossible.

A NodeWatcher would seem to be the obvious go-to for that problem, but since nodes are not watched by default, and server state is not necessarily tracked, it's not currently possible to use NodeWatcher with a node whose state cannot be inferred. This proposal would also solve that more general problem.

Multi-client situations, which could involve varying and/or unknown sets of clients, would also have uses for being able to query the existence of a node. A direct-to-server query would be faster, more reliable and less cumbersome than client-to-client queries. This functionality could help with some discovery applications, and with querying the state of individual nodes after a crash, when language-side state is lost.

Finally, AFAIK there is no way to query a node's run state, which this RFC would also support, which perhaps adds further weight to the proposal. I am fairly sure there are other applications which I've not thought of.

Specification

/n_status - Check whether a node exists, and query its run status.

N * int node ID

The server sends an /n_status.reply message for each queried node to registered clients.

/n_status.reply has the following arguments:

int node ID
int status flag: 1 if running, 0 if paused, -1 if the node was not found

In addition we will add a corresponding message to the Node class, and optionally a checkNodeStatus method to the Server class. (Thoughts on this welcome!) Other places in the class lib?
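
For illustration, here is a minimal sclang sketch of the intended round trip, written against the proposed messages (so it obviously won't work on a current server); the reply handling follows the argument list above, and the node ID is arbitrary:

(
var nodeID = 1000;

// Listen once for the proposed reply: [ "/n_status.reply", nodeID, statusFlag ]
OSCFunc({ |msg|
    var id = msg[1], status = msg[2];
    switch(status,
        1, { "node % is running".format(id).postln },
        0, { "node % is paused".format(id).postln },
        -1, { "node % does not exist".format(id).postln }
    );
}, '/n_status.reply', s.addr).oneShot;

// Ask about one node (the message takes N node IDs)
s.sendMsg("/n_status", nodeID);
)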

Drawbacks

Adds another command to the interface.

Unresolved Questions

Whether to add a method to Server. My instinct is no, as it's a bit of an odd fit stylistically, and Server is already huge.

@Sciss commented Jan 12, 2021

An entirely alternative approach would be, of course, to reply to /n_query with /n_info even when the node does not exist, instead of /fail, simply indicating that status by setting the node's parent group ID to -1, i.e.

[ "/n_query", 1234 ]

->

[ "/n_info", 1234, -1, -1, -1, -1]

I don't know if anyone is relying on /n_query returning /fail.

@Sciss commented Jan 12, 2021

Also, regarding your post on sc-dev, I do think it would be useful to clarify the behaviour and still send /fail in reply to /n_query even when error posting is disabled with /error, -1; that would then also solve the task of identifying whether a node exists.

@scztt commented Jan 12, 2021

@muellmusik Could we include some more specific use-cases in this RFC, to make sure they're appropriate and covered by any eventual decision? It's common for APIs, especially async ones or ones sensitive to timing / race conditions, to have a model where you assume existence / lack of existence when calling, and are notified if your assumptions are incorrect by an error response. It sounds like there are cases in mind which would not be covered by this pattern, but all the cases I can think of seem like they could be handled by simply providing better error responses for existing APIs?

@muellmusik (Author)

An entirely alternative approach would be, of course, to reply to /n_query with /n_info even when the node does not exist, instead of /fail, simply indicating that status by setting the node's parent group ID to -1

Yes, this basic idea occurred to us as well. It is possible this would break something, though, and recent practice (/s_query) seems to err on the side of avoiding that where possible, even if breakage is unlikely.

@muellmusik (Author)

Hey @scztt!

It's common for APIs, especially async ones or ones sensitive to timing / race conditions, to have a model where you assume existence / lack of existence when calling, and are notified if your assumptions are incorrect by an error response.

Which APIs are you thinking of? What's the rationale for not allowing a check where there's ambiguity?

@muellmusik (Author)

Could we include some more specific use-cases in this RFC, to make sure they're appropriate and covered by any eventual decision?

The specific context this arose from was using the VST plugin extensions with NodeProxies. It's complicated, but you can see the discussion here:

https://www.listarc.bham.ac.uk/lists/sc-users/msg69122.html

tldr: NodeProxies defer node creation to a scheduled event on their internal clock (I assume for quantisation purposes, but there may be other reasons). This makes the usual approach of wrapping code in a bundle with /sync messages impossible.

A NodeWatcher would seem to be the obvious go-to for that problem, but since nodes are not watched by default, and server state is not necessarily tracked, it's not currently possible to use NodeWatcher with a node whose state cannot be inferred. This proposal would also solve that more general problem.

Multi-client situations, which could involve varying and/or unknown sets of clients, would also have uses for being able to query the existence of a node. A direct-to-server query would be faster, more reliable and less cumbersome than client-to-client queries. This functionality could help with some discovery applications, and with querying the state of individual nodes after a crash, when language-side state is lost.

Finally, AFAIK there is no way to query a node's run state, which this RFC would also support, which perhaps adds further weight to the proposal. I am fairly sure there are other applications which I've not thought of.

@muellmusik (Author)

It sounds like there are cases in mind which would not be covered by this pattern, but all the cases I can think of seem like they could be handled by simply providing better error responses for existing APIs?

I agree that ideally this could be handled by an /n_query with slightly different semantics, but changing the reply and adding values could cause breakage, including in alternative clients. Using the existing /fail really sucks because you have to parse the error string to confirm it's the node you were asking about, so I don't think that's a great approach even if the error-posting issue could be solved.

/g_queryTree is another bodgy option, but is inefficient and can fail, and to be honest I'd prefer an approach with clear semantics. It seems like pretty basic functionality.

@smrg-lm commented Jan 12, 2021

I just want to express my concern that the server API might grow with somewhat redundant commands that do nearly the same thing, justified by backward-compatibility issues.

It already happens with /c_set and /c_setn (I guess for efficiency in the specific case of setting contiguous buses).

It's been years without modifications and now there are two in a row, which catches my attention. Small modifications for specific cases without a global plan may only end up blowing the server. It is harsh to say, but I've seen it happen in the library, and yes, my complaints are also redundant.

@scztt commented Jan 12, 2021

Which APIs are you thinking of? What's the rationale for not allowing a check where there's ambiguity?

Filesystem APIs use a pattern of call-and-inspect-the-error rather than checking beforehand to see if something will succeed. Web APIs are often written like this as well, with an error response when the requested state is not the case.

As far as rationale - for heavily dynamic and async systems like filesystems, web requests, or the SuperCollider audio graph, a pre-condition check at time N does not provide a strong guarantee that it will still be true later at N+1 when the actual request is made. The action still has to provide a proper error if it fails, and the calling code still has to handle the possibility that the action can fail - so the pre-condition check isn't providing any actionable information at all: it's redundant, and it encourages more complex calling conventions.

Just to reiterate: I'm not at all against this API and I would probably use it for a few things :) - just want to make sure we have some concrete use-cases called out, to make sure they can't be solved by simply providing better error responses from the server, or by expanding something like /n_query.

@josiah-wolf-oberholtzer (Contributor)

@smrg-lm I sympathize that there's no explicit plan for the API. It's grown organically over many years, and - as far as I can tell - there are no established best practices for versioning in OSC APIs like there are in JSON/RESTful APIs. Adding new commands when new functionality is required seems like the simplest option to me. I doubt that adding new commands is going to "blow the server" unless we start adding many orders of magnitude more commands. More technical debt? Yes. Performance degradation? Probably not.

There probably should be a long-term conversation about versioning the API, or about what the API would look like if we designed it today given everything we know about how people use scsynth. But that's another - and probably far more contentious - RFC 😆

@smrg-lm commented Jan 13, 2021

@josiah-wolf-oberholtzer it's ok, I don't oppose the changes themselves, and I don't think they are bad. For me the key concept was a "global plan" for the interface as a whole; it's a concern about too many changes, not a rejection of specific ones. If it turns out that the API lacks important features or flexibility, I think it would be better to update the API version, as with all software. But it seems that this is ruled out for compatibility reasons, and then the problem becomes unsolvable.

@muellmusik (Author)

I just want to express my concern that the server API might grow with somewhat redundant commands that do nearly the same thing, justified by backward-compatibility issues.

This is a valid concern. In an ideal world we would probably rethink some of the existing commands. I proceeded here on the basis of the discussion about /s_query, however, where the feeling seemed to be strongly that any risk of breakage was probably a deal-breaker. This included modifying replies by adding additional values to the end of messages (which in many cases would be harmless in sclang usage), as third-party clients might make brittle assumptions about the format of replies.

If the preference would be to modify /n_query, we could do that, but if we're going to, then I would suggest it would make more sense to include /s_query's added functionality as well. If we're going to risk breakage anyway, we may as well economise. See #5 for details. @brianlheim I think you were one of the people with a stronger view about modification. What do you think?

@muellmusik mentioned this pull request Jan 13, 2021
@muellmusik (Author)

Filesystem APIs use a pattern of call-and-inspect-the-error rather than checking beforehand to see if something will succeed.

True to a point, but nevertheless:

[[NSFileManager defaultManager] fileExistsAtPath:filePath]
std::filesystem::exists(path);

and for that matter File.exists(path) in sclang.

a pre-condition check at time N does not provide a strong guarantee that it will still be true later at N+1 when the actual request is made.

Of course, but this is equally true of any of the approaches proposed here. In some cases you just need to check the status of something, not as a pre-condition to doing something to it directly and immediately, but just to sync state. Any direct work may be far in the future. The NodeWatcher case is a good example: I just want to watch this node, but I can't do it properly without syncing. The watcher will capture /n_end messages, so it could deal with the state changing before the reply arrives.

We can do that currently with workarounds, but I'm fussy enough to like clear semantics, partly for feel, but also because it just makes it much easier for users to figure out how to do stuff.

Just to re-iterate: I'm not at all against this API and I would probably use it for a few things :)

No worries! Your query is completely reasonable and is exactly the kind of thing I'd raise. (I think I did on the /s_query discussion.)

@muellmusik (Author)

If it turns out that the API lacks important features or flexibility, I think it would be better to update the API version, as with all software. But it seems that this is ruled out for compatibility reasons, and then the problem becomes unsolvable.

Tbh, I think this is a good idea, although maybe beyond the scope of this RFC. There'd be a potential compatibility hit from introducing the versioning, but that would diminish with time. We went through this with server plugins and the synthdef format, and I don't think those caused a lot of problems. (Though at the moment the supercollider target for the Faust online compiler seems to be broken for what might be an API-version-related reason. So....) We might well be better off just taking the hit for the sake of better compatibility in the future.

@josiah-wolf-oberholtzer (Contributor)

I would have preferred that /s_query's functionality was an extension of /n_query's rather than a new command, but c'est la vie. In the absence of an explicit policy around extending existing commands, adding totally new commands is the safest approach.

A thought: one way you could justify extending existing commands is via the /version command. A client could query the server's /version immediately after connecting and use the response (or lack thereof) to configure how it interacts with the server, e.g. "the server is running 3.12.x therefore I know /n_query looks like X".
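
A rough sclang sketch of that idea; the /version command and its /version.reply are part of the existing interface, but the assumed reply layout (program, major, minor, patch name, branch, commit) should be checked against the server docs, and the 3.12 cutoff is just the hypothetical example above:

(
// Sketch: gate client behaviour on the reported server version.
OSCFunc({ |msg|
    var major = msg[2], minor = msg[3];
    if((major > 3) or: { (major == 3) and: { minor >= 12 } }) {
        "assume the newer reply formats".postln;
    } {
        "fall back to the older behaviour".postln;
    };
}, '/version.reply', s.addr).oneShot;

s.sendMsg("/version");
)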

@muellmusik (Author)

A thought: one way you could justify extending existing commands is via the /version command. A client could query the server's /version immediately after connecting and use the response (or lack thereof) to configure how it interacts with the server, e.g. "the server is running 3.12.x therefore I know /n_query looks like X".

Yes, I think this is basically what has been suggested by others above. Probably it would be good to do, though it would cause some disruption initially.

@mossheim (Contributor)

@brianlheim I think you were one of the people with a stronger view about modification. What do you think?

i won't be working on SC for the rest of the month, sorry. i'll take a look at this when i have time.

@mossheim (Contributor) commented Feb 1, 2021

just looked this over, thanks for submitting it :)

it can be useful to check whether a node already or still exists. (A recent case arose in the VST interface).

could you please summarize the details of that so i and other readers can understand better why this is useful? without that, it's hard to evaluate the proposal. EDIT: i see you did that in a comment above, sorry i missed it. i'd appreciate it if you moved that into the proposal itself.

@brianlheim I think you were one of the people with a stronger view about modification. What do you think?

my slight preference is for the new command, again, for pretty much the same reasons as before.

It's been years without modifications and now there are two in a row, which catches my attention.

another way to interpret this might be that there were sorely-needed additions that went unnoticed because, until recently, we didn't have the right venue to discuss them.

i think there are some good conversations happening in this thread about the future of the API, and to me it's nice that these thoroughly-considered additions can serve as motivation for it.

@mossheim (Contributor) commented Feb 1, 2021

Filesystem APIs use a pattern of call-and-inspect-the-error rather than checking beforehand to see if something will succeed.

True to a point, but nevertheless:

[[NSFileManager defaultManager] fileExistsAtPath:filePath]
std::filesystem::exists(path);

and for that matter File.exists(path) in sclang.

i think this is a misunderstanding of @scztt 's point. a FS exists API call is useful because occasionally the mere existence or nonexistence of a file is useful information. it is still redundant at best to use this exists call as a precondition check and write further API calls which assume the file exists / doesn't exist based on the response.

@mossheim (Contributor) commented Feb 1, 2021

one question, maybe naive, if this is the original problem:

NodeProxies defer node creation to a scheduled event on their internal clock

could another solution be to create the node immediately in a paused state and then defer running it?

@smrg-lm commented Feb 1, 2021

@brianlheim I agree with you and with the aim of the proposals, just give them more time and analyze them all together; I trust you all. Feel free to delete this message if it's OT.

@telephon (Member) commented Feb 1, 2021

tldr: NodeProxies defer node creation to a scheduled event on their internal clock (I assume for quantisation purposes, but there may be other reasons). This makes the usual approach of wrapping code in a bundle with /sync messages impossible.

This is a separate issue, I wonder if we can find a good solution by using thisThread.clock, for example?

one question, maybe naive, if this is the original problem:

NodeProxies defer node creation to a scheduled event on their internal clock

could another solution be to create the node immediately in a paused state and then defer running it?

Not in the general case because what is scheduled may include processes running in sclang, like event streams. But it is an interesting idea, I hadn't thought of that possibility when I wrote this.

@muellmusik (Author)

i think this is a misunderstanding of @scztt 's point.

Not at all. I know what he's saying.

a FS exists API call is useful because occasionally the mere existence or nonexistence of a file is useful information.

Well that's 👆the point really, isn't it? Occasionally the mere existence or non-existence of a node is useful information.

The VST example is perhaps a bit esoteric (I've had other similar cases over the years), but I think the kludginess of the proposed workarounds demonstrates how awkward this is, especially with the existing fail behaviour.

A better and more generic example is just that it's impossible to correctly add a node for watching without knowing if the node exists. We may want to do this for any number of reasons (multiclient usages for example, but not only), and not immediately send other messages (or maybe never). Having to send an unrelated message, and expect the user to ignore a fail post, seems like a rather complicated and a little bit ugly way to deal with a simple task.

@muellmusik (Author)

could another solution be to create the node immediately in a paused state and then defer running it?

Thanks, though I think we already have something better than that. But @Spacechild1 and I felt this would be much easier with a straightforward way to confirm existence.

@mossheim (Contributor) commented Feb 1, 2021

Not at all. I know what he's saying.

OK, sorry for the misunderstanding

A better and more generic example is just that it's impossible to correctly add a node for watching without knowing if the node exists. We may want to do this for any number of reasons (multiclient usages for example, but not only), and not immediately send other messages (or maybe never).

i guess what i am missing is -- if someone needs to know at any given moment what set of nodes exist or don't exist, it would seem to me they should be building a complete client-side model of the server tree using n_query and checking that instead. IIUC that's within reach even for late joiners (or maybe not?). the motivation for needing to watch a single node but not the entire tree isn't totally clear... but then again i am not super familiar with this so you'll have to be patient with me. i do however see the use in the running/not-running part of this proposal, because that information can't be accessed by a late joiner any other way. but in that case, "node does not exist" is a failure condition like n_query.

@muellmusik (Author)

Thanks for your comments @brianlheim. No worries, and sorry if any of this seems impatient. :-) On balance, on looking at this and the /s_query proposal, I think my preference would be to solve both of those issues with a modified /n_query. I think @josiah-wolf-oberholtzer felt the same, if I'm not mistaken, but he can chime in here. This could be combined with an /api message (probably long overdue anyway), to help ameliorate any breakage. I think that risk is probably quite low, and there's something to be said for a cleaner api, but I know that's a point of controversy. Design-wise /n_query feels like where this belongs.

it would seem to me they should be building a complete client-side model of the server tree using n_query

Do you mean /g_query? /n_query could be used but would involve multiple round trips, exploring linked list style. /g_query is possible, but as noted elsewhere not totally reliable. For a simple case that seems excessive to me as well, particularly as you may have very large numbers of nodes you're not interested in.

It would be very handy however to have a way to snapshot and then watch all nodes on a server. Sometimes I think all nodes should be watched by default (but that's another question, and you'd still need to sync in multi-client situations).

@mossheim (Contributor) commented Feb 1, 2021

On balance, on looking at this and the /s_query proposal, I think my preference would be to solve both of those issues with a modified /n_query. I think @josiah-wolf-oberholtzer felt the same, if I'm not mistaken, but he can chime in here. This could be combined with an /api message (probably long overdue anyway), to help ameliorate any breakage. I think that risk is probably quite low, and there's something to be said for a cleaner api, but I know that's a point of controversy. Design-wise /n_query feels like where this belongs.

i don't have any issue with that, seems like a good idea to me. :)

Do you mean /g_query? /n_query could be used but would involve multiple round trips, exploring linked list style. /g_query is possible, but as noted elsewhere not totally reliable. For a simple case that seems excessive to me as well, particularly as you may have very large numbers of nodes you're not interested in.

I meant n_query; i remember from discussion on Josiah's previous RFC that g_query has size limitations. To me, maintaining the invariant that your local model matches the server's is quite simple and useful regardless of how many nodes you are interested in. But, I don't have any intuition for what would be a simple case, where you have very large numbers of nodes you're not interested in, that isn't served by a general-purpose track-all-nodes approach.

It would be very handy however to have a way to snapshot and then watch all nodes on a server.

unless i'm mistaken, that is entirely possible within the current OSC API, minus knowing whether or not a node is running; but certainly the actual logic to do that is pretty unwieldy because nodes may be reordered, added, removed while you are traversing the graph. two feasible ways i know of to make sync-up easier would be (1) snapshot-and-replay -- client requests a snapshot, snapshot is sent over (possibly in multiple chunks) while client queues up node notifications since snapshot began, then replays them on top of the received snapshot to arrive at the current graph state -- or (2) incremental replay -- at client's request, server streams a historical replay of every single notification generated since starting up, which the client can use to arrive at the current graph state. no idea if either of those are appropriate here, just brainstorming.
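
A very rough sclang sketch of the client side of option (1), just to make the queue-and-replay idea concrete; the snapshot request here is the existing /g_queryTree, while the actual model-building is left out, and it assumes the client has already registered for notifications with /notify:

(
var pending = List.new;
var snapshotDone = false;

// Buffer node notifications that arrive while the snapshot is in flight.
// (In practice you'd also register for /n_end, /n_move, /n_off, /n_on.)
OSCFunc({ |msg|
    if(snapshotDone) {
        // apply msg to the local tree model directly (not shown)
    } {
        pending.add(msg);
    };
}, '/n_go', s.addr);

// When the snapshot arrives, build the model, then replay the buffered messages.
OSCFunc({ |msg|
    // build the local tree model from msg (not shown)
    pending.do { |m| /* apply m on top of the snapshot */ };
    snapshotDone = true;
}, '/g_queryTree.reply', s.addr).oneShot;

s.sendMsg("/g_queryTree", 0, 0);   // query the root group, without control values
)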

@muellmusik (Author)

i don't have any issue with that, seems like a good idea to me. :)

Ta! @josiah-wolf-oberholtzer should we start a new RFC about this?

I meant n_query; i remember from discussion on Josiah's previous RFC that g_query has size limitations. To me, maintaining the invariant that your local model matches the server's is quite simple and useful regardless of how many nodes you are interested in.

The problem I see with that is that if you need to do many (maybe 100s or more) round trips to construct it node by node, it is possible that nodes may be freed in the interim. This may include the next node that you query (say the 'next node' ID from the /n_info reply), which I think means you need to start again? Even if not, you need to watch and track all node messages in the interim, and then construct your accurate tree by running through all those changes.

Much nicer IMO would be a reliable /g_queryTree (sorry, I meant that, not /g_query!) which executed synchronously, replying with however many messages were required. Then you get the tree as a snapshot, and any changes should arrive after it, so no queuing is needed. (Yes, I know there's a small chance UDP messages will arrive out of order over a WAN, but we've always said use TCP if that's a problem for you.) But maybe we're somehow talking past each other here?

But, I don't have any intuition for what would be a simple case, where you have very large numbers of nodes you're not interested in, that isn't served by a general-purpose track-all-nodes approach.

Perhaps it's just a style thing, but for better or worse we don't have the convention of tracking all nodes; we opt in on a per-node basis. Given that, querying 100s or more nodes to find out about one seems like overkill. Though again, I think I'd be happy if all nodes were watched by default. (At least I think so! ;-)

unless i'm mistaken, that is entirely possible within the current OSC API, minus knowing whether or not a node is running; but certainly the actual logic to do that is pretty unwieldy because nodes may be reordered, added, removed while you are traversing the graph.

Yeah I think that's an accurate assessment.

@josiah-wolf-oberholtzer (Contributor)

Yes, /g_queryTree has size limitations as it tries to serialize the entire subtree rooted at that group; it will overflow on large graphs resulting in a "corrupted" response.

And yes, I would have preferred extending /n_query with synth key/value pairs rather than adding a new /s_query message, since that seems cleaner to me, and the backwards compatibility risk seems low for alternate clients. But I'm not that bothered if it has to be a separate message.

@josiah-wolf-oberholtzer (Contributor)

@muellmusik Happy to put in the time with you to solve this in a "future proof" way via a new RFC. I'm sure we can come up with something that doesn't pollute the API and also acknowledges that the API may shift over time 😄.

@Sciss commented Feb 1, 2021

The problem I see with that is that if you need to do many (maybe 100s or more) round trips to construct it node by node

You only need to do that once, if you have a multi-client situation where a client joins late. In that case, the best approach would be a handshake between clients to perform an "atomic" client-side world-building step. After that initial step you just watch incremental updates using the regular node notifications, which are cheap…

Much nicer IMO would be a reliable /g_queryTree (sorry, I meant that, not /g_query!) which executed synchronously, replying with however many messages were required.

Exactly; that would be an atomic notification.

@Sciss commented Feb 1, 2021

Also, if you had a variant of /g_queryTree.reply that doesn't include node synth-def names and controls, you would fit more than 500 nodes into one 8K reply message. How many nodes are we talking about in a common scenario?

@mossheim (Contributor) commented Feb 1, 2021

Much nicer IMO would be a reliable /g_queryTree (sorry, I meant that, not /g_query!) which executed synchronously, replying with however many messages were required.

one q about this, wouldn't a synchronous reply open you up to the possibility of missing an audio callback if the graph is large?

The problem I see with that is that if you need to do many (maybe 100s or more) round trips to construct it node by node, it is possible that nodes may be freed in the interim. This may include the next node that you query (say the 'next node' ID from the /n_info reply), which I think means you need to start again?

you will receive an n_end notification which contains the ID of the next node in the group (if one exists) before getting the n_query failure response in that case; i haven't worked through all the cases but i think you can always make forward progress as long as you are processing other notifications while querying.

@Sciss commented Feb 1, 2021

one q about this, wouldn't a synchronous reply open you up to the possibility of missing an audio callback if the graph is large?

/g_queryTree is already synchronous. And whether you reply with one or ten messages shouldn't make a difference (I think); it should be very cheap to populate those messages. Think of a bundle asking for ten /n_info replies - that would be the same effort and is already supported. There is no blocking operation involved (as in reading from disk) or expensive stuff like expanding a /d_recv.

@mossheim (Contributor) commented Feb 1, 2021

i think it would be good to mock it up and validate it under a stress test or two, to get a sense of what the typical costs/limits are. couldn't hurt, especially if we are going to go and add it to the API. "measure twice, cut once" :)

@muellmusik (Author)

By synchronous I meant multiple replies if needed, so that all the /g_queryTree.reply messages go out before anything else. If it works that way, there's no need for fancy multi-client freeze logic and coordination, and no need for tedious bookkeeping of other messages while waiting for multi-round-trip queries. It's a snapshot at an instant, and later messages modify that snapshot. Simple. I like simple.

one q about this, wouldn't a synchronous reply open you up to the possibility of missing an audio callback if the graph is large?

Perhaps. But I think it would have to be truly huge unless you had major issues with network contention? I would have thought the load of g_queryTree should be fairly trivial compared to the synthesis load.

@mossheim (Contributor) commented Feb 1, 2021

ach, perhaps i got too used to the latency requirements in finance :)

@jamshark70

if someone needs to know at any given moment what set of nodes exist or don't exist, it would seem to me they should be building a complete client-side model of the server tree using n_query and checking that instead.

In the past, when questions about "how do I know what's happening on the server?" have come up on the forum, I've said basically the same thing: client-side data structures should be responsible for this.

But I also believe it's unfair to incoming users (who are often not seasoned programmers) to expect them to build the client-side mirror by themselves. Seasoned programmers have both the skills to build a data model for this and the experience to anticipate the need for it. "I never did any programming but SC looks cool and I want to try" users should not be expected to build a data model, and they don't realize that they need it until they're waist-deep, and then it may not be much fun to retrofit a chunk of code for tracking.

Whether the problem is solved by querying the server or by providing a tracking model in the main library, I'm ok either way. But I'd like to be careful that we don't end up with "you should be responsible for your own nodes and the class library won't help you with this very much." That sets the entry bar higher for new users, which is a form of exclusion. Nobody explicitly wants the exclusion but "they should..." sorts of remarks accidentally lead in that direction. With that said, the ensuing discussion looks promising to me.

@mossheim (Contributor) commented Feb 1, 2021

Yes, when I said this i was assuming that a general purpose client side model would be the responsibility of the core library. apologies if that wasn't clear...

@muellmusik (Author)

@muellmusik Happy to put in the time with you to solve this in a "future proof" way via a new RFC. I'm sure we can come up with something that doesn't pollute the API and also acknowledges that the API may shift over time 😄.

So I think what we're probably talking about here is:

  1. Adding an optional flag to the request message to include synthdef names and control key/value pairs. I would add run status to this as well, since as noted there's no way AFAIK to query whether a node is paused.
  2. Changing the reply so that a missing node is not a fail, and no FAIL message is posted. Missing nodes could be indicated by a special parent node ID, for example (see the sketch below).
  3. Adding an /api command.
  4. Anything else?
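
For concreteness, the exchange might look something like this; everything beyond the existing /n_info arguments (the trailing request flag, the run-status value, the def name and control pairs, and the -1 parent ID for a missing node) is a hypothetical sketch for discussion, not a settled format:

[ "/n_query", 1234, 1 ]    // hypothetical trailing flag: also return run status, def name, controls

->

[ "/n_info", 1234, 0, -1, -1, 0, 1, "default", 2, "freq", 440.0, "amp", 0.1 ]
// existing args: node ID, parent group, previous node, next node, isGroup flag;
// hypothetical additions: run status, def name, number of controls, key/value pairs

and, for a node that does not exist:

[ "/n_info", 1234, -1, -1, -1, -1 ]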

@muellmusik (Author)

Should I edit this RFC to reflect the adjusted direction, or make a new one?

@mossheim (Contributor) commented Mar 4, 2021

i'm in favor of starting a new RFC. most of this thread is not relevant to the new thing you want to propose. :)
