service/dap: support running requests asynchronously #2423
Conversation
It seems to me that this approach would lead to a lot of repeated code. Wouldn't it be easier to handle as much as possible of this inside (*Server).handleRequest? For example:
I think this is also less error prone. For example, it seems to me that this PR will accept two consecutive ContinueRequests, and the second one will block until the first one terminates and then start immediately. Is that the intended behavior? If at some point you decide to actually serve requests that happen while the target is running, for example if you want to make SetBreakpointsRequest work while the target runs, the stop and resume logic can be handled inside the request handler.
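To illustrate the kind of centralized handling being suggested here, below is a minimal, self-contained sketch of gating requests in one dispatch function; the type and handler names are placeholders for illustration, not the actual delve or go-dap identifiers.

package main

import "fmt"

// Sketch: gate every request in a single dispatch function based on whether
// the target is running, instead of repeating the check in each handler.
type server struct {
	running bool // a real server would guard this with a mutex
}

type request struct {
	seq  int
	kind string // "continue", "evaluate", "setBreakpoints", ...
}

func (s *server) handleRequest(r request) {
	switch r.kind {
	case "continue":
		if s.running {
			fmt.Printf("seq %d: rejected, target is already running\n", r.seq)
			return
		}
		s.running = true
		// resume the target in a separate goroutine so the loop keeps serving
	case "evaluate":
		if s.running {
			fmt.Printf("seq %d: rejected, cannot evaluate while running\n", r.seq)
			return
		}
		// evaluate synchronously while the target is stopped
	case "setBreakpoints":
		// could be allowed while running, e.g. by queueing it for a worker
	}
}

func main() {
	s := &server{}
	s.handleRequest(request{seq: 1, kind: "continue"})
	s.handleRequest(request{seq: 2, kind: "evaluate"})
}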
My current recommendation is to maintain the existing handleRequest structure (before this PR), but have a single separate, dedicated goroutine that processes the remaining synchronous operations sequentially.
Looking into the Debugger.Command implementation, I found that resumeNotify is closed after acquiring d.targetMutex, and Debugger.Command still holds the lock until it returns. That means that even if the subsequent DAP request is accepted and the Debugger.Command or Debugger.Goroutines call that serves it runs in a separate goroutine, that goroutine will be blocked waiting for d.targetMutex.Lock, and the handleRequest loop will be blocked on resumeRequestLoop.
So, going completely asynchronous looks difficult to me.
Moreover, there is no ordering guarantee when multiple goroutines are competing for a mutex. I am not sure yet whether that matters here, but if we can preserve the ordering by serializing those operations in a single goroutine, that would be nicer and more readable.
Having only a bounded number of goroutines is also good when we need to do cleanup.
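Below is a minimal, self-contained sketch of that single-goroutine idea, where one dedicated worker drains a channel so queued operations run strictly in arrival order; the names are illustrative, not delve's actual code.

package main

import (
	"fmt"
	"sync"
)

// One dedicated goroutine drains a channel of pending operations, so they
// execute sequentially in queue order. Goroutines competing for a mutex
// give no such guarantee, since lock acquisition order is unspecified.
func main() {
	ops := make(chan string, 16)
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for op := range ops {
			fmt.Println("executing:", op) // runs strictly in the order queued
		}
	}()

	ops <- "setBreakpoints"
	ops <- "continue"
	ops <- "pause"
	close(ops)
	wg.Wait()
}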
Btw, I observed that onEvaluateRequest still runs in the Run goroutine after this PR, but when it is used for call, it invokes Debugger.Command. I expect the same for onSetVariableRequest, and I wonder if they are safe to run in the Run goroutine.
It seems to me that this approach would lead to a lot of repeated code. Wouldn't it be easier to handle as much as possible of this inside (*Server).handleRequest?
In theory, if the client is doing its job, most of the requests would never arrive while things are running. But I agree that it is safer and simpler to just handle all this in bulk for everything.
I think this is also less error prone. For example, it seems to me that this PR will accept two consecutive ContinueRequests and the second one will block until the first one terminates and then start immediately. Is that the intended behavior?
This is the behavior in the rpc server, so I assumed that I should do the same, at least to start with :) You can observe this with VS Code. The continue button will be disabled while continuing, but you can still trigger it from the keyboard. Another way to do this is to have two clients with --accept-multiclient. We discussed this here: #2322.
But yes, I agree that we should just reject anything that doesn't make sense explicitly.
I looked into this and decided that it doesn't make sense. I added comments to handleRequest explaining why. Please let me know if I misunderstood something in your proposal.
You are right. This one is a bit more nuanced than the others, so I wanted to deal with it separately once we figure out the design with the simpler items. There is a related TODO under call that's also waiting for async support. But specifically about the issue you are asking about: yes, the call will block processing of any new requests. I think that's fine for assignments because they should be quick. But it is not fine if you call some long-running function, because you won't be able to pause or even disconnect. Besides making it asynchronous, we would also need to issue a continued event to let the editor know that we are executing, so it updates the UI accordingly. And in that case, I will need to add a proper stopped event when things stop.
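For reference, a small self-contained sketch of the two DAP events mentioned above follows; the bodies match the Debug Adapter Protocol specification, but the struct names and the chosen reason string are illustrative assumptions, not delve's event-sending code.

package main

import (
	"encoding/json"
	"fmt"
)

// Per the DAP spec, a client learns the target resumed via a "continued"
// event and learns it stopped via a "stopped" event. This only shows the
// shape of the event bodies on the wire.
type continuedEventBody struct {
	ThreadId            int  `json:"threadId"`
	AllThreadsContinued bool `json:"allThreadsContinued"`
}

type stoppedEventBody struct {
	Reason            string `json:"reason"` // e.g. "step", "breakpoint", "pause"
	ThreadId          int    `json:"threadId"`
	AllThreadsStopped bool   `json:"allThreadsStopped"`
}

func main() {
	c, _ := json.Marshal(continuedEventBody{ThreadId: 1, AllThreadsContinued: true})
	fmt.Println(`"continued" event body:`, string(c))

	s, _ := json.Marshal(stoppedEventBody{Reason: "pause", ThreadId: 1, AllThreadsStopped: true})
	fmt.Println(`"stopped" event body:`, string(s))
}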
The failure is related and I am looking into it.
Thank you for the speedy review.
FYI, I fixed the failing test, but I do not fully understand how loopprog, which has an infinite loop, would terminate on halt. I was not able to reproduce this on either the Mac or Linux machines that I have; I only saw it via TeamCity.
Seems strange.
// TODO(polina): do this for setting breakpoints
// --3-- Handle such requests asynchronously and let them block until
// the process stops or terminates (e.g. using a channel and a single
// goroutine to preserve the order). This might not be appropriate
As I mentioned, the requests received prior to Stop/Pause could be cancelled based on the request sequence number, if the clients haven't already sent Cancel requests (and the server supports request cancellation).
When a synchronous request is received, we can send a sequence number to a channel ("ignore up to this point"), halt the debugger, and hand the request to the worker goroutine so it can finish the remaining work. When the worker goroutine returns from an asynchronous request, it first checks the pause channel to receive the sequence number, drains the request channel up to that sequence number, performs the synchronous work, and, if necessary, resumes the program execution... So, I don't think stale requests are an inherent issue.
But mine is just a rough sketch and the devil is in the details. So I am fine with any approach as long as it works.
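Below is a rough, self-contained sketch of this "ignore up to this sequence number" idea; the channel layout and names are assumptions made for illustration, not a description of delve's actual implementation.

package main

import "fmt"

type req struct {
	seq  int
	kind string
}

// workerLoop processes queued requests one by one. Before each request it
// checks whether a pause arrived; if so it remembers the cutoff sequence
// number and cancels everything queued at or below it as stale.
func workerLoop(requests <-chan req, pause <-chan int) {
	cutoff := 0 // requests with seq <= cutoff are considered stale
	for r := range requests {
		select {
		case c := <-pause:
			cutoff = c
			// a real server would halt the target here, do the synchronous
			// work that triggered the pause, and possibly resume afterwards
			fmt.Printf("pause received, cancelling requests up to seq %d\n", cutoff)
		default:
		}
		if r.seq <= cutoff {
			fmt.Printf("seq %d: cancelled as stale\n", r.seq)
			continue
		}
		fmt.Printf("seq %d: running %s\n", r.seq, r.kind)
	}
}

func main() {
	requests := make(chan req, 8)
	pause := make(chan int, 1)
	requests <- req{seq: 1, kind: "continue"}
	requests <- req{seq: 2, kind: "next"}
	pause <- 2 // "ignore everything up to sequence number 2"
	requests <- req{seq: 3, kind: "setBreakpoints"}
	close(requests)
	workerLoop(requests, pause)
}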
I need to reread this a couple of times to fully grasp what you are proposing. This can always be a TODO to be addressed in another PR.
Thank you for the educational comments.
LGTM
Thanks for making this work.
LGTM
@aarzilli @derekparker It would be great if this could be merged soon so the VS Code Go extension can handle Delve DAP debug session teardown cleanly. Thanks for the review!
LGTM
Currently the server processes one request at a time, synchronously. That means that while continue (or next, or call, etc.) is running, the server will not respond to any additional requests. This change updates the server to start a separate goroutine for running requests and to resume the request loop so it can accept other requests. This will allow us to process requests for pausing, setting breakpoints, and disconnecting while the session is running, and to return explicit errors for requests that cannot be processed while running (e.g. evaluating expressions).
Updates #1515
Updates golang/vscode-go#1347
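To make the shape of that change concrete, here is a minimal standalone sketch of the pattern (run the resuming command in its own goroutine while the request loop keeps serving); it is an illustration under assumed names, not the code in this PR.

package main

import (
	"fmt"
	"time"
)

// The request loop no longer blocks on resuming commands: "continue" runs in
// its own goroutine, so "pause" can still be served, while state-inspecting
// requests are rejected with an explicit error while the target is running.
func main() {
	requests := []string{"continue", "evaluate", "pause"}
	running := make(chan struct{}, 1) // non-empty while the target is resumed

	for _, r := range requests {
		switch r {
		case "continue":
			running <- struct{}{}
			go func() {
				defer func() { <-running }()
				time.Sleep(100 * time.Millisecond) // stand-in for the blocking Debugger.Command call
				fmt.Println("target stopped")
			}()
		case "pause":
			fmt.Println("pause accepted while the target is running")
		default:
			if len(running) > 0 {
				fmt.Printf("%s rejected: target is running\n", r)
				continue
			}
			fmt.Printf("%s handled synchronously\n", r)
		}
	}
	time.Sleep(200 * time.Millisecond) // let the resume goroutine finish (demo only)
}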