Errors being swallowed? #307
Closed
mhart opened this issue Jun 25, 2014 · 38 comments
Labels: guidance (Question that needs advice or information.)

mhart (Contributor) commented Jun 25, 2014

The following code hangs for me (after outputting done):

var AWS = require('aws-sdk')
var ec2 = new AWS.EC2()

ec2.describeInstances(function() {
  console.log('done')
  throw new Error('wtf')
})

This was working fine (as in, the error would be thrown and the process would exit) last time I checked, which I think was 2.0.0-rc13.

This causes lots of problems for any scripts that are expecting throw to exhibit the default Node.js behaviour.

mhart (Contributor, Author) commented Jun 25, 2014

This is node v0.10.29 btw

lsegal (Contributor) commented Jun 25, 2014

Even though it's not fully baked, I would recommend using domains for error handling in general. The problem with throwing exceptions out of the complete callback is that we've gone back and forth on this a number of times with very odd edge cases on both sides of the coin. See #176 for some examples.
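
As a minimal sketch of that pattern (just Node's core domain module wrapped around an SDK call; not official guidance, and domains themselves are still marked unstable), something like this:

var domain = require('domain');
var AWS = require('aws-sdk');

var d = domain.create();
d.on('error', function(err) {
  // errors thrown from callbacks bound to this domain land here instead of
  // silently disappearing or crashing the process
  console.error('caught by domain:', err.stack || err);
});

d.run(function() {
  var ec2 = new AWS.EC2();
  ec2.describeInstances(function(err, data) {
    if (err) throw err;
    console.log('done');
    throw new Error('wtf'); // should surface in the domain's 'error' handler
  });
});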

Because of the way the SDK is structured around retries, an error thrown in the event loop bubbles back up and causes weird behaviors. As of 2.x the SDK tries to intelligently detect ReferenceErrors and SyntaxErrors and throw those out. Throwing every exception out was problematic because it's not possible to tell whether the error originated in the callback or was just passed through from a prior step.

That said, if you think you can implement a patch that gives us the best of both worlds by allowing errors to be thrown out without causing the SDK to attempt retrying of these errors / other weird behaviors, I would consider merging that. Handling these exceptions has not been an easy problem to solve, so the more help we can get on this, the better.

mhart (Contributor, Author) commented Jun 25, 2014

It's just very unusual in Node.js land for a client library to completely take over error handling like this. I spent hours with no idea what was going on in my scripts - they were just silently hanging, with no indication why.

I'm not sure why you're trying to do anything with thrown exceptions - shouldn't they just be allowed to bubble up like normal (unless the consumer has explicitly wrapped their code in a domain)? I'm guessing this has been discussed elsewhere perhaps?

mhart (Contributor, Author) commented Jun 25, 2014

Personally I think the best behaviour would be to not do anything special with exceptions at all - as is pretty standard Node.js practice.

There's a good overview of best practices here:

http://www.joyent.com/developers/node/design/errors

Unless I'm missing something, the only exceptions you should be seeing thrown (apart from the obvious JSON.parse case) are as a result of programmer errors.

If you can point me to the relevant sections of the aws-sdk that are catching exceptions, and/or the discussions you've had around why there are problems with retries, I'll try to chime in with what might be a better way forward...?

I can see on #176 you've said "Bubbling the exception up will actually cause it to get bubbled up send lifecycle event handler, which makes the request get retried" - so what happens if you just don't have this behaviour in the "send lifecycle event handler"? Just retry on callback errors, not thrown exceptions...? It seems very dangerous to be retrying on exceptions at all (ie, there could be resource leakage, etc).

Wadya reckon?

mhart (Contributor, Author) commented Jun 25, 2014

Argh, was just bitten by this again when I did ec2.waitFor('InstanceRunning', ... (as per the docs - will file a separate issue for this) and my whole app just silently froze. Looking into it, it seems to be aws-sdk throwing its own StateNotFoundError - but again, suuuuuuper hard to debug when the app just freezes with no output - no stack trace, nothing.

Anyway, I'm obviously flogging a dead horse here – I know you're not 100% happy with the current situation either, but just want to reiterate my strong -1 to the current behaviour – especially for command-line apps which don't expect to run in domains or anything like that – it all just feels very un-Node-like.

lsegal (Contributor) commented Jun 26, 2014

@mhart I definitely agree that this is not ideal behavior; unfortunately, attempting to provide "ideal behavior" in the past has caused regressions and was extremely hard to maintain, since there are two competing issues:

  1. Users want ALL (both operational and programmer) errors to propagate out of the events, but:
  2. Users don't want SDK (operational) errors to propagate out of events.

The problem here is that it's difficult to differentiate an SDK operational error from a regular operational error. Not all of our operational errors come from a service; some come from the client, and some, because of our pluggable event system, could come from user-provided code.

In 2.x we made a compromise to throw only known programmer errors out of the events (ReferenceError, TypeError, etc.). We were previously trying to be intelligent about which events errors could propagate out of, but that's where most of the regressions had been coming from.

That said, I agree with your pain-- the fact that certain errors can get trapped makes for a bad debugging story, so I'm going to spend some more time to see if I can re-introduce some of the error propagation behavior in terminal events (error, complete, success).

Just retry on callback errors, not thrown exceptions...? It seems very dangerous to be retrying on exceptions at all (ie, there could be resource leakage, etc).

Automatic retry with exponential backoff is one of the main features of the SDK. We definitely want to be retrying certain errors, especially those from services. That said, the SDK attempts to retry retryable errors only; those are clearly marked with the .retryable property on the error object. But in order to support a robust set of third-party plugins, the SDK allows errors to bubble up either through the callback or as thrown errors, because it may be that plugin X throws a retryable error rather than calling the callback (that plugin may be synchronous).
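
As a sketch of what that looks like from the calling side (the .retryable, .code and .message properties are the ones described above; everything else is illustrative):

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2();

ec2.describeInstances(function(err, data) {
  if (err) {
    if (err.retryable) {
      // e.g. throttling or a transient network failure; the SDK will normally
      // have already retried this according to its retry configuration
      console.warn('retryable error:', err.code);
    } else {
      console.error('terminal error:', err.code, err.message);
    }
    return;
  }
  console.log('reservations:', data.Reservations.length);
});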

Just for reference, the portions of the SDK that handle event logic are in lib/request.js and lib/sequential_executor.js.

mhart (Contributor, Author) commented Jun 26, 2014

I'm still unclear though... Why are operational errors being thrown? As opposed to just returned in callbacks?

mhart (Contributor, Author) commented Jun 26, 2014

"it may be that plugin X throws a retryable error"... Well that's just bad practice. Plugin X shouldn't be doing that, and shouldn't be allowed to do that. It should just be throwing for programmer errors.

Surely in these cases aws-sdk should be dictating how third-party plugins are expected to behave?

mhart (Contributor, Author) commented Jun 26, 2014

I'm not saying anything new here btw - this has been standard Node.js advice for quite a while now:

https://groups.google.com/d/msg/nodejs/1ESsssIxrUU/5abyX25Dv2sJ

lsegal (Contributor) commented Jun 26, 2014

Well that's just bad practice. Plugin X shouldn't be doing that, and shouldn't be allowed to do that. It should just be throwing for programmer errors.

According to the "Error handling in Node.js" document you linked, functions can throw for operational errors:

The general rule is that a function may deliver operational errors synchronously (e.g., by throwing) or asynchronously (by passing them to a callback or emitting error on an EventEmitter), but it should not do both.

In the case of a synchronous plugin, that plugin is allowed to throw its errors. Specifically, the table in that document gives exactly this case:

JSON.parse | synchronous | bad user input | operational error | deliver by throwing | handle with try/catch

The SDK supports both synchronous and asynchronous plugins-- therefore, to follow the correct practices you linked above, we should be handling the programming model that both synchronous and asynchronous plugins might use. For example, a synchronous extractData plugin might be as simple as:

request.on('extractData', function(resp) {
  resp.data = JSON.parse(resp.httpResponse.body.toString());
});

The above plugin deserializes all data as JSON from the wire. Of course, JSON.parse could throw, in which case the SDK must be able to handle and propagate that error. More importantly, the SDK now has to detect that an error was raised during the request and not emit the 'success' event, but rather propagate to the 'error' event instead ('complete' always executes, of course). This means that at some level the SDK has to be able to catch synchronous errors and propagate them. If the SDK did not propagate this error, we would either have to ignore it or do a hard stop on the request, which would effectively crash the entire program for an operational error-- that IMHO is a much worse programmer experience, especially since that synchronous exception would not be catchable by the application. It's also something we've gotten reports about before, due to similar regressions in this logic.
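
As a rough sketch of what that implies (hypothetical names; the real logic in lib/sequential_executor.js is more involved), the event executor essentially has to do something like:

function emitSafely(listeners, resp) {
  for (var i = 0; i < listeners.length; i++) {
    try {
      listeners[i](resp);            // run the synchronous plugin
    } catch (err) {
      resp.error = err;              // record the operational failure
      return false;                  // tell the state machine to go to 'error'
    }
  }
  return true;                       // all listeners ran; proceed to 'success'
}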

If you have other ideas on how this should behave, feel free to explain, but I have not seen any significant contradictions between the SDK's behavior and the best practices document you linked.

mhart (Contributor, Author) commented Jun 26, 2014

But JSON.parse is a pretty well known case that should be handled immediately by the consumer. I'd say the plugin should be doing this:

request.on('extractData', function(resp) {
  try {
    resp.data = JSON.parse(resp.httpResponse.body.toString());
  } catch (e) {
    request.emit('error', e) // Or give it more info
  }
});

lsegal (Contributor) commented Jun 26, 2014

The example above is deliberately simplified. What happens when the case is less well known?

request.on('extractData', function(resp) {
  resp.data = require('foolibrary')(resp.httpResponse.body.toString());
});

Can foolibrary throw? Maybe. Effectively you are asking all plugin authors to guard all of their code against any potential operational exception-- a list of exceptions that is not always knowable by the author. Note that this could happen with async examples too. Practically speaking, that means all callbacks would have to be wrapped in a try/catch block just in case something bad happens. It also means that a single misbehaving plugin could break the entire request lifecycle by not properly guarding against synchronous errors. That's the experience we are trying to avoid. This is why the SDK wraps these callbacks in try/catch blocks on the user's behalf and correctly handles propagation in all cases, so that extractData never bubbles up a synchronous unhandled exception. Exceptions always go through the 'error' event, which is a guarantee the SDK attempts to make. That allows SDK users to trust that if anything breaks in the middle of a request, it will come out on the 'error' event side and not accidentally throw.

Sidenote: the actual example would be:

request.on('extractData', function(resp) {
  try {
    resp.data = JSON.parse(resp.httpResponse.body.toString());
  } catch (e) {
    resp.error = e;
  }
});

Plugins should never manually emit events. Doing so could lead to the 'error' event being emitted N>1 times for a single response. It also doesn't play nice with the request lifecycle architecture.

lsegal changed the title from "Errors being swallowed? By domains perhaps?" to "Errors being swallowed?" on Jun 26, 2014
lsegal (Contributor) commented Jun 26, 2014

(modified this issue title to remove the domains reference)

mhart (Contributor, Author) commented Jun 26, 2014

I think it just comes down to me not understanding why aws-sdk is different from other Node.js modules in this regard? Especially other http client libraries (like request)?

The foolibrary example isn't particularly helpful because I don't really know what it is... Is it a module that aws-sdk is consuming? A well known module? Or is it a user-provided plugin?

I'm just not sure what case you're trying to solve for here - have you got a more realistic example of the issues you've encountered?

lsegal (Contributor) commented Jun 26, 2014

I think it just comes down to me not understanding why aws-sdk is different from other Node.js modules in this regard? Especially other http client libraries (like request)?

It is different in that events emitted by the SDK support asynchrony, something EventEmitter does not support. This is necessary to support a whole slew of use cases where plugins might want to modify a request prior to sending. We emit build, sign, and send events, but order matters, and so a plugin that modifies build with an asynchronous call needs a way to ensure that send will wait on this event to finish. The sign event is a great example of this, where we modify the request with an Authorization header that is signed with credentials, but those credentials might need to be sourced from an asynchronous source-- local HTTP for EC2 metadata, or even remote HTTPS for STS credentials.
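
As a rough illustration of that extension point (a harmless synchronous listener; the asynchronous credential-sourcing case is more involved than this sketch):

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2();

var req = ec2.describeInstances();   // returns an AWS.Request; nothing is sent yet
req.on('build', function(r) {
  // runs after the request parameters are built, before 'sign' and 'send';
  // a real plugin could rewrite headers or params here
  console.log('building request for operation:', r.operation);
});
req.send(function(err, data) {
  if (err) return console.error('error:', err);
  console.log('done');
});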

Fundamentally, the request lifecycle in the SDK is architected as more of a state machine than a set of events. Each "state" (build, sign, send, http_, extract_, and terminal states) emits a similarly named event, but the event is only emitted to listeners during that state. Errors that occur in state X need to move forward to the terminal states or they would get lost / thrown synchronously, neither of which is a good experience.

The behavior here is done to provide robust support for all of our use cases-- supporting retry logic, asynchronous signing, plugins that modify the way a request can be built / deserialized. These are all behaviors that are exposed in an extensible way for plugin authors to take advantage of. A library like request is much more tightly coupled to a set of hardcoded expectations, and is therefore much less extensible. This is not necessarily a bad thing (a library like request does great for the goals it is meant to accomplish), but it would be much more difficult to support the features in the SDK without the architecture we've implemented.

The foolibrary example isn't particularly helpful because I don't really know what it is... Is it a module that aws-sdk is consuming? A well known module? Or is it a user-provided plugin?

I'm not entirely sure why it matters what the library is? Is there a different expectation for "well known" modules? I would imagine that require('foo').parse(stuff) should have the exact same usage as JSON.parse(stuff) proper, from a functional perspective, well known or not. To elaborate on the example though, foolibrary can be some other JSON, BSON, or data parser, and yes, it can be a user provided plugin. For simplicity, we can say the aws-sdk is not consuming it (just because if we were we would be testing the behavior), but it's certainly possible that the SDK could be at any point in the future. The idea is that "a plugin is a plugin is a plugin", and can be used in any context-- an application, a third-party library consumed by an application, or the sdk library itself. The way these plugins are written does not differ per context. That request.on(...) code can be added by anything that has access to the request. In fact, the SDK has other ways to add events to requests more generally, so you don't even necessarily need the request object.

I'm just not sure what case you're trying to solve for here - have you got a more realistic example of the issues you've encountered?

Hopefully the first paragraph above sets the stage a bit more. More realistic examples of the issues can be found right in the SDK itself. The SDK is just a series of plugins, many of which are wired up from event_listeners.js. For example, the actual serializer/deserializer protocols (like XML/JSON support) are just plugins that listen to 'build'/'extractData' events. Those plugins might depend on modules (like xml2js) which could run synchronously or asynchronously, and may or may not throw exceptions. Having to know the entire closure of dependencies that each plugin uses in order to detect whether we need to wrap that plugin in a try/catch block would be an extremely high maintenance overhead, so in order to simplify the experience, the event manager itself handles catching of those uncaught exceptions and moving the state machine to the error state.
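
In the same style as the snippets earlier in this thread (request is the same in-scope request object; this is illustrative only, not the SDK's actual event_listeners.js code), an xml2js-backed 'extractData' plugin might look like:

var xml2js = require('xml2js');

request.on('extractData', function(resp) {
  xml2js.parseString(resp.httpResponse.body.toString(), function(err, result) {
    if (err) {
      resp.error = err;   // surfaces through the 'error'/'complete' events
    } else {
      resp.data = result;
    }
  });
});

Whether a parser like this reports failure through its callback or by throwing synchronously is exactly the kind of detail the executor's catch-all is meant to absorb.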

Hope that explains things a little bit more. I'd be happy to talk more about how (and why) our architecture works the way it does. I touched on it at our re:Invent talk last year, but there's certainly more to talk about. Let me know if you're curious.

mhart (Contributor, Author) commented Jun 26, 2014

I'm not entirely sure why it matters what the library is? Is there a different expectation for "well known" modules? I would imagine that require('foo').parse(stuff) should have the exact same usage as JSON.parse(stuff) proper, from a functional perspective, well known or not.

Yeah, so JSON.parse is a function that is known to throw, and not just on programmer error – which is why best practice is to wrap it in try/catch. If another library has a well known use case like that (for example a parsing library), then I'd expect it to be wrapped in the same manner. To let these exceptions bubble up to some broader, higher level exception manager is just not how 99% of JavaScript libraries are written, so it's unexpected.

Having to know the entire closure of dependencies that each plugin uses in order to detect whether we need to wrap that plugin in a try/catch block would be an extremely high maintenance overhead

I'm just having a hard time understanding this. aws-sdk has 3 external dependencies. Only 2 now that you've removed agentkeepalive. I've used xml2js quite extensively and it's very straightforward to manage parse errors with. So I can't imagine you're talking about the dependencies of aws-sdk... So what are you referring to? Plugins that are written for the aws-sdk? If that's the case, then aws-sdk completely dictates how they should be written... If people are writing plugins that just throw exceptions in random places, I don't understand why it would be the job of aws-sdk to manage them...?

Let me try another tack: If you removed the current exception handling behaviour altogether in aws-sdk - if you just wrapped your JSON.parse and xml2js calls in try/catch blocks and returned or emitted the errors as is standard... what would happen? What would go wrong currently? What sort of issues would people post here on GitHub?

lsegal (Contributor) commented Jun 27, 2014

Yeah, so JSON.parse is a function that is known to throw, and not just on programmer error – which is why best practice is to wrap it in try/catch. If another library has a well known use case like that (for example a parsing library), then I'd expect it to be wrapped in the same manner.

  1. How does a developer know that a function is "known to throw"? Most synchronous functions with any amount of real behavior are "known to throw", which would imply that every developer should wrap all their synchronous code in a try/catch block?
  2. What happens if your code is a library in itself? Should you still catch those exceptions, or should you bubble them up? In that case, are you breaking the rule by not catching? Doesn't this mean the SDK is breaking the rule by not try/catching synchronous functions that can throw?
  3. If I use require('foolibrary').parse(body) but that library just happens to use JSON, should I expect that 'foolibrary' has caught all my exceptions (based on your assertion that JSON should be wrapped in try/catch), or is it also known to throw?
  4. More importantly, what happens when you get this contract wrong? The worst case here is that an operational error turns into a best-practice failure by the SDK-- namely, we don't properly emit the 'error' event as users expect, and instead the program crashes.

Finally, by the same argument made in point 2, you could say that since synchronous plugins in the SDK are functions that are "known to throw", the SDK itself should be wrapping those calls in a try/catch block. By that interpretation, we are behaving consistently with your advice on "known to throw" functions.

If that's the case, then aws-sdk completely dictates how they should be written... If people are writing plugins that just throw exceptions in random places, I don't understand why it would be the job of aws-sdk to manage them...?

Because of the following best practice:

The general rule is that a function may deliver operational errors synchronously (e.g., by throwing) or asynchronously (by passing them to a callback or emitting error on an EventEmitter)

Since the SDK supports synchronous plugins, we support functions that follow best practices for said synchronous plugins, namely, throwing exceptions. I'm not sure why you're treating this as a bad thing? If the SDK supports synchronous plugins, we must support the practices that are associated with synchronous functions. Do you disagree with this?

It seems like if synchronous functions are often known to throw, and the SDK executes arbitrary synchronous functions, then it should be try/catching the exceptions thrown from these functions. Otherwise it may be that a user registered a listener, but the registration of that synchronous listener caused the SDK to fail its contractual obligation of emitting an error event-- is that the plugin's fault, or the SDK's fault? From a "good user experience" point of view, I'd think it's both, which is why we're trying to resolve the issue from our end.

lsegal (Contributor) commented Jun 27, 2014

Perhaps this extra (short) response will help clarify:

Because plugins run in the context of the SDK, we consider the SDK to be responsible for the execution of its plugins. Therefore, an unknown operational failure of a plugin should never cause the SDK to break its contract with users, namely (assuming no programming error) always emitting a valid terminal event for every request.

mhart (Contributor, Author) commented Jun 27, 2014

Well... I find that very unfortunate. I honestly just think you're making your lives a lot more complicated at the end of the day by trying to do things differently to everyone else. Having aws-sdk act differently to other Node.js modules just surprises the user.

You cut my context off around the fact that JSON.parse is known to throw and took it to the nth degree - I said specifically not under programmer error – because the parser is also acting as the validator. This whole idea you're pushing, that JS libraries throw all these exceptions that are just operational errors, really doesn't hold water IMO. I certainly haven't seen them anyway - TBH synchronous parsers are the only case I can think of off hand.

Again, I really think we've got to move away from theoreticals and hypotheticals; it doesn't help much. So – has there actually been a case where a user plugin has caused an issue here, or has the architecture been developed just in case that ever happens? If there were some concrete issues that have occurred, it would be much easier to understand than arguments that boil down to "we want to save programmers from doing stupid things... at the expense of others".

mhart (Contributor, Author) commented Jun 27, 2014

Anyway anyway anyway, I've been writing too much on this too, clearly. I think we just disagree about the level of intervention aws-sdk should be taking. For me, it's frustrating, and it's doing things that I don't expect or understand. My vote's for not trying to do anything unusual with errors at all. But if that's not on the cards, then at least making sure they're not swallowed would be great!

mhart (Contributor, Author) commented Jun 27, 2014

If we want to get away from the broad issue of how to handle errors, and at least just focus on the specific issue I posted about – it boils down to not messing with callbacks that users have passed in.

If a user does this:

ec2.describeInstances(function() {
  throw new Error('wtf')
})

What good reason is there for aws-sdk to trap that exception?

I think the answer to that (if we keep it specifically about this case) will reveal whether there's a good case to consider this behaviour a bug or not.

lsegal (Contributor) commented Jun 27, 2014

@mhart I don't think the SDK should be trapping that exception. The issue here, and the related bug, is that:

ec2.describeInstances(function() {
  throw new Error('wtf')
})

Is short-hand for:

var req = ec2.describeInstances();
req.on('complete', function() {
  throw new Error('wtf')
});
req.send();

The problem is that we have general logic to pass these errors through-- ultimately that error actually gets passed through to the 'uncaughtException' state of the request (an internal state) which we then hand off to domains if they are hooked up. We used to attempt to detect errors raised from terminal events (complete, error, success) and raise those, but that's where we ran into edge cases. It's possible the detection could be improved.

I generally agree that we should be doing a better job here, I just want to make sure that we can figure out a stable way to do this that won't cause other regressions.

mhart (Contributor, Author) commented Jun 27, 2014

Perhaps it's just a white-list vs black-list case.

Maybe assume all errors are terminal unless they match a certain condition? (eg, it has a retryable property, or whatever – in fact, doesn't the SDK already do this...?)

mhart (Contributor, Author) commented Jun 27, 2014

I'd also argue that once you're in the complete state (or any state where you're calling the user's callback) that you want to pass back control to the user, so shouldn't be trying to wrap anything at that point...

lsegal (Contributor) commented Jun 27, 2014

@mhart I agree with that, which is why we previously threw errors that came from the complete state.

Thanks for the feedback. I will be working on this to see if there's a way to fix the issue and will reference this in any commits.

mhart (Contributor, Author) commented Jun 27, 2014

Great, thanks @lsegal !

lsegal added a commit that referenced this issue Jun 28, 2014
lsegal (Contributor) commented Jun 28, 2014

@mhart try the above commit and see how you like it.

mhart (Contributor, Author) commented Jun 28, 2014

Cool! So far so good! Everything seems to be working as expected for me... Just hope it's still working for everyone else :-)

mhart (Contributor, Author) commented Jun 28, 2014

I mean, the only thing that might confuse people is that the top of the stack is not actually where the user threw the error (because it's been caught and rethrown):

/Users/michael/github/aws-sdk-js/lib/sequential_executor.js:234
        throw err;
              ^
Error
    at Response.<anonymous> (/Users/michael/github/aws-sdk-js/test.js:5:9)
    at Request.<anonymous> (/Users/michael/github/aws-sdk-js/lib/request.js:354:18)
    ...

But apart from that minor point, all good 👍

lsegal (Contributor) commented Jun 28, 2014

@mhart I can probably fix the stack trace obfuscation; I just wanted to have both flowing through the same sync/async code paths for maintenance reasons, but I can likely change that. The try/catch inside the transition function is mostly a fail-safe, so it shouldn't affect any event code. It may not even be needed.

lsegal (Contributor) commented Jun 28, 2014

Actually on second thought, that may not work-- we are always catching that exception by virtue of executing the terminal events through the same SequentialExecutor code, which is where the (initial) catch occurs. Removing that would require running terminal events through a completely different executor, or running the executor in a different context, which we could do, but I'm not sure it's worth the effort.

I wonder if it's possible to unroll the stack trace for easier debugging? If so, that would be the easier fix.

mhart (Contributor, Author) commented Jun 28, 2014

I think you can do it with Error.prepareStackTrace in V8 – whether you want to or not... you can always experiment, I guess.
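
Something along these lines, maybe (this is V8's documented stack-trace hook; the CallSite methods are real, the formatting is just an experiment):

Error.prepareStackTrace = function(err, callSites) {
  var frames = callSites.map(function(site) {
    return '    at ' + (site.getFunctionName() || '<anonymous>') +
           ' (' + site.getFileName() + ':' + site.getLineNumber() + ')';
  });
  return err.toString() + '\n' + frames.join('\n');
};

try {
  throw new Error('wtf');
} catch (e) {
  console.log(e.stack); // formatted by prepareStackTrace above
}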

lsegal (Contributor) commented Jun 28, 2014

Actually I think I can get this working by moving the logic into the SequentialExecutor. Patch to follow shortly. This will also cover a potential edge case (where a second 'complete' event might wipe out the error from a previous event).

lsegal (Contributor) commented Jun 30, 2014

I backed out some of the code I was working on because the change was far too intrusive and broke the encapsulation of the SeqExecutor. Turns out that Node.js's throw behavior is a little different and doesn't preserve stack traces like V8 in Chrome. Given that this only seems to be a Node.js behavior, I may try to poke at Error.prepareStackTrace instead, as the current implementation works fine in other environments.

lsegal (Contributor) commented Jun 30, 2014

Update: the printing of the throw line is a Node.js customization that works irrespective of V8's stack trace API, so it looks like it won't be possible. I'm going to give up on customizing this printout since the behavior does not seem very reliable anyhow. Any time we would ever re-throw an error, this same problem would occur, which would mean the only way to preserve the original throw callsite would be to never catch and rethrow an error. I'm not sure that's feasible in all cases.

If there are no other issues with the above commits, this code should go out with our next release.

mhart (Contributor, Author) commented Jun 30, 2014

OK, no probs - it shouldn't be a big deal in any case I don't think - more cosmetic than anything

lsegal added a commit that referenced this issue Jul 8, 2014
lsegal (Contributor) commented Jul 8, 2014

This is in the v2.0.5 release, so I will mark this as closed.

lock bot commented Sep 29, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

lock bot locked as resolved and limited conversation to collaborators Sep 29, 2019