Update README.md #5
In JavaScript one could create a top-level name `UncheckedError` that when thrown bypasses the default `catch` clause. Libraries can use `UncheckedError`s to differentiate between operational errors, like a network socket closing, and programmer errors like calling a function with too few arguments.
In adition, one could allow applications to choose whether `ReferenceError`, `TypeError`, and other built in errors inherit from `Error` or `UncheckedError`. Perhaps even by varrying this behavior between production and development. It would be our recommendation to always inherit builtin errors from `UncheckedError` but it's easy to allow both.
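A minimal sketch of what the quoted text describes, assuming the hypothetical `UncheckedError` global (today this behavior can only be approximated by rethrowing from a `catch`):

```js
// Hypothetical sketch of the proposal above; nothing here is a real API today.
class UncheckedError extends Error {}

// A library can signal a programmer error (bad arguments) as unchecked,
// while operational errors (e.g. a closed socket) stay as ordinary Errors.
function readConfig(path) {
  if (typeof path !== 'string') {
    throw new UncheckedError('path must be a string'); // programmer error
  }
  // ... read the file; on ENOENT throw a plain Error (operational)
}

// Under the proposal, a default `catch` would not see UncheckedError;
// today this can only be approximated by rethrowing:
try {
  readConfig(42);
} catch (err) {
  if (err instanceof UncheckedError) throw err; // let it crash the process
  // handle operational errors here
}
```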
Slight typo: varrying vs varying.
Typo: addition
What use case did you have in mind re: allowing customization of the base classes? I think it might be worth elaborating a bit on that.
This LGTM!
Relevant es-discuss thread: https://esdiscuss.org/topic/try-catch-conditional-exceptions-in-light-of-generators cc @benjamingr
It's also worth noting: cross-realm (or npm dedupe, unless this is provided as a blessed singleton) can cause grief in a catch-guard style world, unless some sort of branding is used. Obviously solutions are possible; these details just need to be taken into account.
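To make the branding point concrete, here is a small sketch; the `Symbol.for` key and helper names are made up for illustration. An `instanceof` check is tied to one constructor identity (so it fails across realms or duplicated module copies), whereas a shared brand is not:

```js
// Sketch only: `Symbol.for` keys live in a shared registry, so two copies of
// a library (a failed npm dedupe) or two realms in the same agent can agree
// on them, unlike `instanceof SomeError`, which is tied to one constructor.
const kUnchecked = Symbol.for('example.unchecked-error');

function markUnchecked(err) {
  err[kUnchecked] = true;
  return err;
}

function isUnchecked(err) {
  return err != null && err[kUnchecked] === true;
}

try {
  throw markUnchecked(new TypeError('boom'));
} catch (err) {
  if (isUnchecked(err)) {
    // a catch-guard would rethrow here; we just report for the sketch
    console.error('unchecked error escaped:', err.message);
  }
}
```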
Yeah, this approach does not belong in JavaScript and will not be accepted into the standard. JavaScript as a language propagates all errors up the catchable call stack. This will not be changing.
On the other hand, if someone wants to work on a compile-to-JS language that replaces various error conditions (e.g. divide by zero, access to undefined property, access to undeclared variable, concatenating a string and an object, array out of bounds access, etc.) with process.abort() calls (or the equivalent unchecked-error throwing, which really seems to be just a fancy way of phrasing aborting the process), this readme might serve as good guidance for them. It's not JavaScript though.
You say that as if JS isn't a language that evolves or changes... How is it not reasonable that a …
It's not reasonable for many reasons, which have been discussed many times in the creation of the exception handling models going all the way back to languages like Ada and Eiffel, threading through C++ and Java and C# in modern times. Here is the basic reasoning, but I'd suggest consulting the literature.

Exceptions are used to signal that something exceptional has happened, which can be recovered from. Even in the case of syntax errors or other bugs, it's possible to write programs that are resilient, e.g. if a rare code path is the only one that encounters it, you don't want to destroy your whole application, but instead alert that the action which caused you to go down that rare path (perhaps triggered by exceptional user input) cannot be completed. All exceptions in the language are recoverable; if they were not (e.g. out of memory errors or early syntax errors) they would simply cause a crash.

Different programs have different thresholds for recovery, and the places where recovery can happen are signaled by catch blocks. Some applications, such as simple scripts, do not need to recover from errors; they often let errors bubble to the top of the call stack, whether they be synchronous or the asynchronous call stack created by promises, at which point they are reported. Other applications, such as servers, want to isolate errors that occur in a particular context and be able to process other call stacks concurrently. Those developers will wrap a `catch` around the contexts they want to isolate.

Node has a meme that all its programs are simple scripts, and do not need any error recovery, but instead should just restart the process. This meme is against the grain of the language's error handling design, and indeed that of its predecessors back to the 70s. But that's fine---to opt out of it, you simply let errors bubble to the top of the (potentially asynchronous) call stack, and restart your process when that happens. You never use `catch`.

The promise translation of synchronous errors that occur in asynchronous contexts into asynchronous errors is just that: a translation. It does not change any of this, and certainly nobody's going to accept making that translation "lossy" by only translating some errors and leaving others by the wayside. That defeats the whole purpose of promises, which is to allow you to handle errors inside your asynchronous call stacks in a uniform fashion.

If you wish to distinguish kinds of errors, it's totally fine to add an extra flag or use a type or whatever on your error object. I think that might be an interesting route for Node core to explore---an error hierarchy like Java or .NET or similar, with IOError and NetworkError and so on. But that will not affect the promise translation from synchronous error stacks into asynchronous error stacks. It can only affect how you decide to recover from errors (or not), at the places where you specify synchronous `catch` blocks or asynchronous rejection handlers.
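To make the "extra flag or type on the error object" route concrete, here is a minimal sketch; the `OperationalError`/`NetworkError` classes and the `queryDatabase` stub are illustrative assumptions, not an existing Node or TC39 API:

```js
// Illustrative hierarchy; Node core does not ship these classes.
class OperationalError extends Error {}
class NetworkError extends OperationalError {}

// Stub standing in for real I/O; rejects with an operational error.
async function queryDatabase(id) {
  throw new NetworkError(`could not reach database for user ${id}`);
}

async function fetchUser(id) {
  try {
    return await queryDatabase(id);
  } catch (err) {
    if (err instanceof OperationalError) return null; // expected, recover
    throw err; // anything else keeps bubbling up the async call stack
  }
}

fetchUser(1).then(user => console.log('user:', user));
```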
@domenic While your feedback is appreciated, it strikes me that criticizing the approach at this point is perhaps premature. This is a germ of an approach — one which, if explored in a fuller context, may not ultimately imply a lossy translation between asynchronous and synchronous errors. Before judging this, I'd respectfully ask that you wait until we've written it up in a more complete fashion. Thank you!
I hope you're tuned in to how exactly this kind of attitude helps deepen the sense of division between TC39 and those on the outside, particularly Node / systems folk who have no solid representation there. "Your opinions are not welcome, JavaScript is for the browser where different rules are at play."
@rvagg I still believe browser vendors working on dev tools will want to be able to trap on unexpected (note: not unhandled) errors and provide a rich debugging environment. I do not know if this is possible if the logic for handling the unexpected error occurs inside a catch block removed from the throw location.
fwiw this doesn't read to me like a systems vs browser debate.
@jfhbrook related to nodejs/post-mortem#16 where it's becoming pretty clear that one of the primary divisions on Promises is between a systems perspective and a higher level application & browser perspective towards dealing with errors. The only time I've encountered this kind of sentiment on the backend is in environments where startup is costly, i.e. the JVM:
Node has a pretty well established best practice around this and it does not involve keeping applications alive that have unpredictable and unknown states.
I suppose you could say that "losing" an error is more dire in a server scenario, but I suspect that ultimately client-side developers run into similar problems, that avoiding this footgun isn't the hardest thing (bluebird's `done` works), and that it's just a trade-off between unhandled exceptions being guaranteed to crash the app and knowing that your errors are "caught" in one place. At least, that's how I see it.
For context, is the heart of this issue the inability to perform accurate post-mortem on caught exceptions in Node? Because they lose the call stack and context as soon as they're caught? If so, would it be worth fixing that? (If that's possible)
I can appreciate the thought "all errors (exceptions) are recoverable, but if not then we will forcefully crash". In all honesty I enjoy the academic exercise of writing code that can always recover. Except that JS is very lax and it is easy to create side effects within the application, even when the code runs smoothly. When a user explicitly throws an error it could be seen as a contract of "we know the application is in a bad state and we have prepared it to continue in the case you decide to proceed". But when the VM throws an exception it may imply the author was not aware of the issue and was unable to clean up, leaving an unknown state that may improperly affect how the application continues to run. The way around this is wrapping all code segments in try/catch blocks and preparing to clean up in case something goes wrong, but the reality is that not everything is wrapped in a try/catch. Defensive programming would then logically dictate that in the case of an exception the author did not explicitly throw, I assume the application is in a bad state and must be brought down.

@chrisdickinson On the constructive side, I'd think it more practical to only allow VM-thrown exceptions to forcefully bubble. Take for example the following:

```js
'use strict';
const fs = require('fs');
const json = JSON.stringify({filename: process.argv[2]}) + 'a';

function readFilePromise(path) {
  return new Promise(function(res, rej) {
    fs.readFile(JSON.parse(json).filename, function(er, data) {
      if (er) return rej(er);
      return res(data);
    });
  });
}
```

This will throw a `SyntaxError` from `JSON.parse()` synchronously inside the promise executor. These two cases demonstrate that it's most important to distinguish between exceptions the author does know about, and those it doesn't know about. In short I don't believe it would be a worthwhile endeavor to figure out how to forcefully bubble types of errors. Instead bubble depending on where those errors originate.

EDIT: I'm mostly referring to:
Unless you're suggesting that VM-thrown errors also inherit from `UncheckedError`?
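For readers following the `readFilePromise` example above, a small sketch of the two paths being contrasted, assuming the same deliberately invalid JSON: the identical `JSON.parse` failure becomes a rejection when it is thrown synchronously inside the executor, but crashes the process when it is thrown from a plain asynchronous callback.

```js
// Sketch: the same JSON.parse failure surfaces in two different ways.
const badJson = JSON.stringify({ filename: 'ignored' }) + 'a'; // invalid JSON

// 1. Thrown synchronously inside the executor: the Promise constructor
//    catches it, so the SyntaxError becomes a rejection.
new Promise(function (res) {
  res(JSON.parse(badJson)); // throws before res is reached
}).catch(err => console.error('caught as rejection:', err.name));

// 2. Thrown from inside an async callback: nothing catches it, so the
//    process crashes (left commented out for that reason).
// setImmediate(function () {
//   JSON.parse(badJson);
// });
```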
@Blesh That's the 10,000ft view. The conversation that originally gave rise to this document was primarily around the usage of throw/try/catch as the primary mechanism for communicating operational (expected) errors. The concerns hinge around:
To be clear, these are problems that exist with the try/catch construct even without bringing asynchrony into the picture. I dare say, being able to differentiate between these two classes of errors is the primary reason most enterprise Node.js deployments prefer the use of callbacks as the primary channel for communicating operational errors, with throw being reserved for "something catastrophic happened, I don't know what to do, time to bring the process down". Unfortunately, without some support at the language level it seems unlikely that we'd be able to resolve all the above concerns. The subtext here is that abstractions like Promises, async/await, and I believe even Observables too (please correct me if I'm wrong, you'd know much better!) use throw/try/catch as their primary communication channel. Thus, one of the primary purposes of this document was to document and surface these concerns, such that we could begin a conversation around how to potentially address these at a language level (if at all possible).
@DonutEspresso yes, Observables indeed try/catch errors in user-provided methods such as handlers or projection functions. In that regard, the original callstack (important for post-mortem) would be lost. They do, however, throw if the error is unhandled, but that's well after important context would have been lost.
Allowing some errors to bypass `catch` would avoid losing that context.
Thanks for pinging me in @stefanpenner, I raised similar concerns at this esdiscuss thread a few years ago. In Bluebird, we have typed `catch`es.

I think Node should go ahead and raise the issue of debugging asynchronous code with try/catch to the TC. We should also see what other languages that have adopted promises, like Python, did. This is a really important discussion to have.

Of course, @domenic is also 100% right: there will never be checked/unchecked exceptions in JavaScript, it has been discussed before and will not ever happen. It's simply impossible at this point. Things like compilation errors can and should be caught by static analysis tooling. What we need to work on instead is typed catches, predicate catches or pattern matching. This could be an opportunity for Node to step up and make a big positive impact on the language itself.

I don't know where we currently stand, but Node needs representation in TC39 and these concerns need to be raised 100%. They greatly influence how we write JS.
That's actually not typically the case with promises, where recovery is simple and pretty trivial. If I find a rare syntax error path in a server written with promises then I can have my fun day running code that gives me 500 errors. If I do that in a regular "crash on error" node server I can effectively create a DOS myself by sending less than 100 requests per second to that path. Promises are a lot safer in that regard IMO.
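For readers unfamiliar with the "typed catches" mentioned above (Bluebird ships a `.catch(ErrorClass, handler)` overload that only handles matching errors and rethrows the rest), a rough plain-promise equivalent can be hand-rolled; the `catchOnly` helper and `TimeoutError` class here are made up for illustration.

```js
// Hand-rolled predicate catch; Bluebird ships this behaviour built in.
function catchOnly(ErrorClass, handler) {
  return function (err) {
    if (err instanceof ErrorClass) return handler(err);
    throw err; // unexpected errors (including typos) keep propagating
  };
}

class TimeoutError extends Error {}

Promise.reject(new TimeoutError('took too long'))
  .catch(catchOnly(TimeoutError, () => 'fallback value'))
  .then(value => console.log(value)); // "fallback value"
```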
I tend to agree that VM-thrown errors should be `UncheckedError`s.
I know it was just for example, but to be clear most …

@chrisdickinson That's decent. It would be a good start to have a specified way to determine whether an exception's point of origin was the VM or not.
Why is that? Also, what's the difference between "typed catches, predicate catches or pattern matching" and checked/unchecked errors? With typed catches, you need to mark the error; checked/unchecked errors would be a kind of mark too, so why is one possible and the other not? Maybe it's for backward compatibility?
An idea like this would address a lot of concerns. We don't have to change the semantics of `catch`, though. We can just brand errors and use that to change the semantics of the error constructor. All we have to do is call C's `abort()` in the unchecked error constructor if abort-on-unchecked-error is enabled. As far as TC39 is concerned, we just have to spec which errors are unchecked and how an application can write a `window.onuncheckedcreated` handler, which for Node can do whatever we want and for browsers can do whatever they want.
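A sketch of that branding idea; every name here (`UncheckedError`, the `onuncheckedcreated` hook, the environment switch) is hypothetical and nothing is specified anywhere.

```js
// Hypothetical: nothing here is specified anywhere.
const ABORT_ON_UNCHECKED = process.env.ABORT_ON_UNCHECKED_ERROR === '1';

class UncheckedError extends Error {
  constructor(message) {
    super(message);
    // Host-defined hook: Node or a browser could decide what happens the
    // moment an unchecked error is *created*, before any catch ever runs.
    if (typeof globalThis.onuncheckedcreated === 'function') {
      globalThis.onuncheckedcreated(this);
    }
    if (ABORT_ON_UNCHECKED) {
      process.abort(); // produce a core dump with the full stack intact
    }
  }
}

globalThis.onuncheckedcreated = err => console.error('unchecked:', err.message);
new UncheckedError('bad arguments'); // logs, and aborts if the switch is set
```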
I share the skepticism in this thread about the idea of skipping catch blocks for certain types of existing errors. Probably the most straightforward error which is a programmer error is SyntaxError in an eval. However, we (V8) for one take advantage of catching this extensively in our testing, and I wouldn't be surprised if the web depends on catching a SyntaxError from eval'ing an XHR result. There are a lot of polyfill libraries, and they sometimes end up catching programmer errors in the course of feature detection (@zloirock @ljharb thoughts?). Ultimately, I'm not sure if whether an error is a programmer error or an application error is definable in a context-independent way with a brand, or if it depends on the kind of usage :(.

Some sort of catch clause for only certain types of errors sounds like a good way forward for some of these issues--I think most code generally only wants to catch one type of error and is expecting a certain one, and all the others (whether a programmer error or not) should be propagated through. One particular concern about catches that are dependent on being an instance of a particular class representing certain exception types is whether …

I also want to clarify that TC39 certainly does make an attempt to think about non-browser embedding environments for Java^H^H^H^HECMAScript, though we could use your advice! If the Node community has clear consensus on a particular issue and this is communicated to committee members before a meeting, that can be taken into consideration when making decisions.
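An example of the feature-detection pattern being referred to, where catching what would otherwise be classified as a "programmer error" is entirely intentional:

```js
// Typical feature detection: a SyntaxError from eval is expected and handled.
let supportsArrowFunctions = false;
try {
  eval('() => {}'); // throws SyntaxError on very old engines
  supportsArrowFunctions = true;
} catch (err) {
  // Falling through here is the whole point; uncatchable "programmer
  // errors" would break this style of polyfill/capability check.
}
console.log({ supportsArrowFunctions });
```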
@trevnorris this has come up again and again. I'm not sure how we can move this discussion forward, but there are two main points I'd like to stress:
Let me try and explain these two claims. An example that is usually given as proof that promises do the wrong thing is this one (taken from https://github.com/mafintosh):

```js
var pool = [stream1, stream2, stream3]
// ...
somePromise.then(function () {
  // lets cycle the streams so we spread out the load
  var s = pool.shift()
  s.writee('hello') // oops i made a typo
  pool.push(s)
  return someAnotherPromise
})
```

The argument here is that promises will swallow up the error caused by the typo, resulting in the pool never reclaiming the stream. If this operation is repeated a few times, the pool of resources will be exhausted and the process will be alive but no longer able to serve requests. This example illustrates the problem very well. I'd like to split the "Promises swallowing programmer errors = bad" concern into:
All these concerns are valid. I want to show however that the problem is not Promises, and the solution is not to add uncatchable errors to the language. The problem is instead improper resource/state management and a fuzzy notion of programmer vs operational errors. And also that Promises provide guarantees that enable us to write code that is more robust.

But first, let me get concern (1) out of the way: this is completely correct. In node, the unhandledRejection handler should by default log the error to stderr. It's possible to write promise code in a way that synchronously attaches an error handler in the vast majority of cases, and the tiny minority where it isn't is not important enough to lose debugging info.

That aside, let's address the rest of the concerns. Consider this modification of the original example:

```js
function doSomething(arg) {
  var s = pool.shift();
  // ...
  // more code here
  // ...
  doSomeOperation(arg, (err, res) => {
    if (err) return done(err)
    // ...
    // lots of more code
    // ...
    s.write(res)
    pool.push(s)
  })
}
```

This example has a similar resource starvation problem. Here the user has failed to properly handle the error condition when running `doSomeOperation`: the early return on error means `pool.push(s)` never runs. It's true that exceptions make the danger even less obvious. Any line may throw an error, and there could be many reasons why that error has been thrown. However, it can be very easy to miss the problem in the above code too. The chances to make the error may be smaller, but it's definitely possible.

The key takeaway here is that this is a resource and state management problem, not an error handling problem. We want a sure way of handling resources that guarantees that no matter what happens, the connection will be properly released to the pool. Is that possible? With promises it turns out that yes, indeed it is! Here is how we would do it with Bluebird:

```js
function getStream(pool) {
  return Promise.resolve(pool.shift()).disposer(s => pool.push(s));
}

somePromise.then(_ => Promise.using(getStream(pool), s => {
  // ... code
  s.writee('hello');
}))
```

The basic mechanism used in Bluebird is simple and described in the context managers and transactions article found at this link. More details about Bluebird's more advanced, safer implementation are available in the disposer method documentation of Bluebird.

This mechanism only works because Promises catch all exceptions: no matter what happens within the passed closure, the disposer runs and the stream is returned to the pool.

Which brings us to the 3rd concern: if you promisify a callback based library, won't this result in breakage and resource starvation? The answer is: yes, but only when the following statements are true:
Let's try and come up with such an example. Consider the function:

```js
var pool = [stream1, stream2, stream3]
// ...
function sendQuery(data, cb) {
  // lets cycle the streams so we spread out the load
  var s = pool.shift()
  s.writee('begin;') // oops i made a typo
  var totalData = "";
  s.write(data)
  s.on('data', function handleData(data) {
    totalData += data;
    if (queryProperlyHandled(totalData)) {
      s.removeEventListener(handleData);
      cb(null, totalData); totalData = "";
    }
    pool.push(s)
  })
}
```

Just how many ways are there that things can go wrong? Too many to list! Even if we fixed the obvious typo that causes an exception, we have 2 potential problems:
So what would not promisifying this buy us? A process crash for the first problem, but nothing for the second one. It seems that crashes only accidentally, sometimes, help us with the problem. They don't really solve the problem of resource/state management in all cases.

We still have the option to promisify the function in a "safe" way: when a sync error is caught, rethrow it instead of converting it into a rejection. In practice, however, most libraries immediately do something asynchronous, such as:

```js
function sendQuery(data, cb) {
  // lets cycle the streams so we spread out the load
  pool.acquireConnection(function(err, s) {
    if (err) return done(err);
    s.writee('begin;') // oops i made a typo
    var totalData = "";
    s.writee(data)
    // ... more code
    pool.release(s)
  })
}
```

In these cases, the process will crash even if we promisify `sendQuery`.

Finally, the 4th concern:
I'd like to show that this concern, while valid, is also misguided. The whole problem here is that an improper mechanism is used to signal programmer errors.

We have to decide whether we want JS to be a dynamic language with powerful meta-programming facilities or not. This is not a black and white thing, so we also want to decide how far we're going to go. Are we going to allow the programmer to recover from calling a non-function as a function? Are we going to forbid recovery from attempting to access the property of a reference that points to undefined? Are we going to forbid access to non-existent object properties unless dictionary syntax is used? These are all valid questions. But it's misguided to declare the entire exception mechanism as off-limits. Functions may throw for entirely different reasons. Take for example:

```js
function crop(image, x1, y1, x2, y2) {
  // throws if x1,y1,x2,y2 are out of bounds
}
```

Is this a good reason to throw? That depends on what your interpretation of "programmer error" is. This may or may not be an obvious programmer error - we may be able to handle the case where the crop fails. If we are passing the coordinates 3 layers down our API, do we really need to check at every layer whether the coordinates are out of bounds and throw an error, except at the top-most layer where we do a different thing (send a JSON response with the error)? This quickly becomes unworkable, which is precisely the reason why JS has exceptions that bubble automatically.

There are other ways to check for typos and similar errors. We have TypeScript and Flowtype, and both handle the most severe examples that are usually given (such as the `writee` typo above).

I'd also like to stress that promises introduce an entirely different programming model. This model is not directly compatible with node conventions. Unfortunately, it would seem that node conventions may be somewhat misguided and stem from missing solutions to its existing problems with resource management, as well as blurring the distinction between exceptions and panics by using exceptions for both purposes in different situations. Fortunately, it's possible to promisify buggy libraries that follow these conventions in a "safe" way.

And I realize that this comment is way too long, but I wanted to show that we promise users are well aware of node conventions regarding errors and the concerns being raised. But we think we have a very good solution, and amending that solution will probably break it, without fixing node-related problems.
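A sketch of the "only the top layer decides" point using the `crop` example above; all of the surrounding functions are made up for illustration.

```js
// Hypothetical crop API from the comment above.
function crop(image, x1, y1, x2, y2) {
  if (x1 < 0 || y1 < 0 || x2 > image.width || y2 > image.height) {
    throw new RangeError('crop rectangle out of bounds');
  }
  return { width: x2 - x1, height: y2 - y1 };
}

// Intermediate layers just pass coordinates through; no per-layer checks.
const makeThumbnail = (image, box) => crop(image, box.x1, box.y1, box.x2, box.y2);
const renderProfile = (image, box) => makeThumbnail(image, box);

// Only the top layer decides how to report the failure.
function handleRequest(image, box) {
  try {
    return { ok: true, thumb: renderProfile(image, box) };
  } catch (err) {
    if (err instanceof RangeError) return { ok: false, error: err.message };
    throw err;
  }
}

console.log(handleRequest({ width: 100, height: 100 }, { x1: 0, y1: 0, x2: 500, y2: 500 }));
```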
There is a strong node core paradigm that invalid arguments are a programmer error. This is nothing new and is unlikely to change.
Invalid arguments are not always trivial, of course
Where it's not obvious if it's a programmer error. These scenarios might even be:
Where it's a programmer error that is not even detected synchronously. Callbacks do an arguably worse job in these cases than promises where you'd have to manually propagate that.
@groundwater to give an idea of how this general idea extends to other, maybe more complex cases, consider a CSV parser:

```js
const data = await parseCSV(stream, {
  unexpectedEof (line, recoveries) {
    if (line.length === EXPECTED_LENGTH) {
      return recoveries.injectString('\n') // Our CSV files don't always end in \n so we inject one to let it finish
    }
  }
})
```

Implemented like:

```js
stream.on('end', () => {
  if (!parser.currentLine.complete && parser.restarts.unexpectedEof) {
    parser.restarts.unexpectedEof(parser.currentLine, {
      injectString (str) {
        parser.push(str) // triggers more parsing
      },
      rejectLine () {
        parser.data.pop()
        parser.currentLine = parser.data[parser.data.length - 1]
      }
    })
  }
  if (parser.currentLine.complete) {
    parser.data.push(currentLine)
    parser.done()
  }
})
```

This kinda extends the above to actually expose a recovery API to do specific things to "restart" the internal, mostly-opaque process.
Why are we not proposing to fix `Error`? Something like:

```js
try {
  doSomething()
} catch (err) {
  if (err instanceof SomeExpectedError) {
    fixStuff();
  } else {
    throw new Error('bad things', err);
  }
}
```

Then in post-mortem folks could analyze the inner error. It also seems like something that would be easy to start doing as general practice in Node today. Just a custom Error with an inner error attached.
@Blesh good point, there is the https://github.com/davepacheco/node-verror module to add causes to errors.
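A minimal plain-JS version of the wrap-with-a-cause idea (node-verror provides a richer take on this); the `WrappedError` name is made up, and newer engines also accept `new Error(msg, { cause })`.

```js
// Minimal cause-preserving wrapper; a plain property works everywhere.
class WrappedError extends Error {
  constructor(message, cause) {
    super(message);
    this.cause = cause; // keeps the original error (and its stack) reachable
  }
}

try {
  JSON.parse('not json');
} catch (err) {
  const wrapped = new WrappedError('failed to load config', err);
  console.error(wrapped.message, '->', wrapped.cause.message);
}
```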
@Blesh @llambda: The error trace isn't the sole thing that the post-mortem folks want to preserve.
@chrisdickinson excellent description! We should probably add this clarification to the document. I have run into confusion between "stack" and "stack trace" with folks before.
@chrisdickinson I share @groundwater's opinion, this is a great description, thank you very much for taking the time to write this! I've started writing a summary of the challenges of using promises with post-mortem debugging, including the description of a potential solution that I wanted to play with and that, to my knowledge, hasn't been described/presented yet. My goal is for that document to be used by anyone to get a clear understanding of the topic from the point of view of post-mortem debugging tools users/maintainers. It does not touch on the broader topic of the design of the promises' error model and how it could be changed/worked around/adapted on purpose. I know this is a discussion you've both been having for a while, and it's great, but it's not in the scope of that document. I'll post it to nodejs' post-mortem working group too, but I thought that you might be interested in parts of it for this repository.
@misterdjules 💯 Excellent work! If this document is intended for promise users — most promise use usually happens within `.then` callbacks — it might be worth adding an example that throws from within one. As a general note for readers: the …
Good point, I'll add an example that throws from within a `then` callback.
Which example is that? Anyway, I realize I should have submitted that as a separate issue to this repo, I don't want to derail the current discussion about this PR. @groundwater Please let me know whether you think it's worth it and I'll be happy to do so.
@misterdjules Ah, sorry, I should have said "your final note on shortcomings" — specifically, that the behavior of the program changes with respect to … In other words, it seems like we've already made it best practice to avoid hitting …
@Blesh we don't actually need an …
That's a really interesting idea. While personally in my code I don't use …, I'm thinking about the following realistically (as for #16 and #18) for a user story with post-mortem debugging:
Advantages:
Disadvantages:
Yeah, I was talking to @jhusain and he pointed that out. :\ They need the dump.
Let's move the discussion on @zkat's "recovery object" to #6. @misterdjules this thread has wandered but it's been mostly constructive and beneficial. Your comments are always welcome!
Hey @chrisdickinson here's a first draft. Let me know what you think.
Merged this mostly as-is but added comments about the strong opposition from TC39 members.
I don't believe discussion has been exhausted - any particular reason for the eager merge?
Absolutely, excellent point. Is the …
I'm not sure I understand what you mean. Do you mean that node should, by default, set an `unhandledRejection` handler?
Do you mean always rejecting on next tick so that any node program can set up a custom `unhandledRejection` handler?
As the person that specified the `unhandledrejection` event: it is a node-ism, but browsers fire the same event - Node led here, but browsers have added this event, WHATWG has now specified it, and Chrome 49 ships with it.

It is inherently incompatible with post-mortem debugging. It waits for the microtask queue to flush before reporting the unhandled rejection, which is useful for the event but not for this problem. That's fine, because it is built in Node on top of a lower-level V8 hook that can be used to build the hook we need - I describe that here: #5 (comment). I'll be happy to answer any related questions about it when I get to my PC. Mobile GitHub is bad.
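For concreteness, a sketch of the Node-level policy this thread keeps circling: `process.on('unhandledRejection', ...)` is a real Node event, and a crash-on-unhandled-rejection handler looks roughly like this. Note it inherits the "after the microtask queue flushes" timing described above, which is exactly why it is a poor fit for post-mortem dumps.

```js
// Real Node API: the 'unhandledRejection' event fires after the microtask
// queue flushes, so by then the throwing frame is already gone and a core
// dump taken here is less useful than one taken at the throw site.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
  process.exit(1); // or process.abort() to get a (late) core dump
});

Promise.reject(new Error('nobody handled me'));
```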
Thank you for the info, I didn't know that.
Just to make sure we're on the same page, I'm well aware of that, as I described in the document I linked to.
By "lower level v8 hook", do you mean Have you looked at the implementation of the potential solution I described earlier? How does that differ from what you have in mind? |
@misterdjules @chrisdickinson @groundwater I made a gist here: https://gist.github.com/benjamingr/3fb97eda723ff2a030d1 No implementation discussion - but basically this can be achieved by subclassing `Promise`.
@benjamingr Thanks for sharing that. I commented in https://gist.github.com/benjamingr/3fb97eda723ff2a030d1#gistcomment-1695107.
Ah, that's interesting! Although in my implementation I had to make a change to how microtasks are run, so a change to V8 might still be needed.
I think the benefit in being able to rely on V8's implementation of promises is that it's a common denominator that the rest of the ecosystem can rely on when interoperability between modules using promises is needed, but I may have missed something.
This README is a fantastic writeup of error handling. BTW, another subtlety with regards to checked exceptions:
At first it seems ok, consider e.g. … However, if `NoValue` happened to be thrown by the …
@benjamingr: Hey — thanks for the write-up. In the original Promises PR I started to implement subclassing of the native Promise (plus replacing it.) It turns out that it's difficult without leaking the details of the subclassing, so I abandoned the approach in favor of making the VM behave as we'd like. In general, subclassing Promise is something I'd like to save as a "last resort", since it does begin to affect other use cases (for example, globally switching out the promise implementation, which is a window I'd like to leave open for users.)

With the way that promises are implemented now, programmer errors — that is, bad parameters passed to the API — are rejected on the same tick as the promise constructor. This means they immediately become unhandled rejections. Operational errors are handed to the rejection handler on a later tick, once the underlying operation completes.

If I understand @misterdjules' approach correctly, it gives us an avenue to produce adequate core dumps with a certain flag enabled, but it does not currently solve for "unhandled rejections should crash the program without a flag". I'd like to propose we proceed with this course of action:
This leaves us with:
Does this seem like a workable path? I realize it might be a bit contentious in that we're saying "don't rely on the ability to later handle a rejected promise", but that slots in with the current "de facto" rules of the ecosystem, since doing otherwise already means your package is making demands on its consumers, which is generally discouraged.
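A sketch of the "attach rejection handlers synchronously" rule this plan leans on; `doWork` is a stand-in helper, not a real API.

```js
// Stand-in for an operation that fails asynchronously.
const doWork = () => Promise.reject(new Error('boom'));

// Fine under the proposed rules: the handler is attached in the same tick
// the promise is created, so the rejection is never observably unhandled.
doWork().catch(err => console.error('handled:', err.message));

// The pattern the proposal treats as a bug: the rejection sits unhandled
// for a while before anyone attaches a handler.
const later = doWork();
setTimeout(() => {
  later.catch(err => console.error('handled too late:', err.message));
}, 1000);
```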
I'm going to leave all future replies on this thread.
@benjamingr this discussion has diverged from the checked/unchecked PR long ago. I tried to summarize the points brought up, and am happy to modify the document if you have a suggestion in another PR. You're welcome to continue the general discussion here, but I don't see how the new API we're discussing would affect the merge. If people have other ideas on how to solve this, I encourage you to open a PR with a writeup. This document is meant to be inclusive of all ideas, and in the event an idea is unsound, we would like to still document why it's unsound.