
Update README.md #5

Merged 1 commit on Feb 11, 2016

Conversation

groundwater
Owner

Hey @chrisdickinson here's a first draft. Let me know what you think.


In JavaScript one could create a top-level name `UncheckedError` that when thrown bypasses the default `catch` clause. Libraries can use `UncheckedError`s to differentiate between operational errors, like a network socket closing, and programmer errors like calling a function with too few arguments.

In adition, one could allow applications to choose whether `ReferenceError`, `TypeError`, and other built in errors inherit from `Error` or `UncheckedError`. Perhaps even by varrying this behavior between production and development. It would be our recommendation to always inherit builtin errors from `UncheckedError` but it's easy to allow both.
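The idea in the draft could be approximated in userland today. Below is a minimal, hypothetical sketch (all names are illustrative; a real language feature would bypass `catch` automatically rather than relying on a helper being called):

```javascript
// Hypothetical userland approximation of the draft's UncheckedError idea:
// a branded error class plus a helper that operational-only catch blocks
// call first, re-throwing anything "unchecked" so it keeps bubbling.
class UncheckedError extends Error {
  constructor(message) {
    super(message);
    this.name = 'UncheckedError';
  }
}

function rethrowUnchecked(err) {
  if (err instanceof UncheckedError) throw err; // programmer error: keep bubbling
  return err;                                   // operational error: safe to handle
}

// Usage: a catch clause that only wants operational errors.
function guardedCall(fn) {
  try {
    return fn();
  } catch (err) {
    rethrowUnchecked(err);
    return 'recovered'; // e.g. retry, report a 500, etc.
  }
}
```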


Slight typo: varrying vs varying

Contributor

Typo: addition

What use case did you have in mind re: allowing customization of the base classes? I think it might be worth elaborating a bit on that.

@chrisdickinson

This LGTM!

@stefanpenner

It's also worth noting that cross-realm code (or npm dedupe, unless this is provided as a blessed singleton) can cause grief in a catch-guard-style world, unless some sort of branding is used. Solutions are obviously possible; these details just need to be taken into account.

@domenic

domenic commented Feb 9, 2016

Yeah, this approach does not belong in JavaScript and will not be accepted into the standard. JavaScript as a language propagates all errors up the catchable call stack. This will not be changing.

@domenic

domenic commented Feb 9, 2016

On the other hand, if someone wants to work on a compile-to-JS language that replaces various error conditions (e.g. divide by zero, access to an undefined property, access to an undeclared variable, concatenating a string and an object, array out-of-bounds access, etc.) with process.abort() calls (or the equivalent of unchecked-error throwing, which really seems to be just a fancy way to phrase aborting the process), this readme might serve as good guidance for them. It's not JavaScript though.

@trevnorris

@domenic

Yeah, this approach does not belong in JavaScript and will not be accepted into the standard. JavaScript as a language propagates all errors up the catchable call stack. This will not be changing.

You say that as if JS isn't a language that evolves or changes... How is it not reasonable that a SyntaxError for a misspelled variable, thrown directly from the VM, could be seen as something different than a user-thrown exception?

@domenic

domenic commented Feb 9, 2016

It's not reasonable for many reasons, which have been discussed many times in the creation of the exception handling models going all the way back to languages like Ada and Eiffel, threading through C++ and Java and C# in modern times. Here is the basic reasoning, but I'd suggest consulting the literature.

Exceptions are used to signal something exceptional has happened, that can be recovered from. Even in the case of syntax errors or other bugs, it's possible to write programs that are resilient, e.g. if a rare code path is the only one that encounters it, you don't want to destroy your whole application, but instead alert that the action which caused you to go down that rare path (perhaps triggered by exceptional user input) cannot be completed. All exceptions in the language are recoverable; if they were not (e.g. out of memory errors or early syntax errors) they would simply cause a crash.

Different programs have different thresholds for recovery, and the places where recovery can happen are signaled by catch blocks. Some applications, such as simple scripts, do not need to recover from errors; they often let errors bubble to the top of the call stack, whether they be synchronous or the asynchronous call stack created by promises, at which point they are reported. Other applications, such as servers, want to isolate errors that occur in a particular context and be able to process other call stacks concurrently. Those developers will wrap a catch (e) { } or .catch(e => { }) around the appropriate scope.

Node has a meme that all its programs are simple scripts, and do not need any error recovery, but instead should just restart the process. This meme is against the grain of the language's error handling design, and indeed that of its predecessors back to the 70s. But that's fine---to opt out of it, you simply let errors bubble to the top of the (potentially asynchronous) call stack, and restart your process when that happens. You never use .catch or catch (e) {}, assuming all errors are bugs.

The promise translation of synchronous errors that occur into asynchronous contexts into asynchronous errors is just that: a translation. It does not change any of this, and certainly nobody's going to accept making that translation "lossy" by only translating some errors and leaving others by the wayside. That defeats the whole purpose of promises, which is to allow you to handle errors inside your asynchronous call stacks in a uniform fashion.

If you wish to distinguish kinds of errors, it's totally fine to add an extra flag or use a type or whatever on your error object. I think that might be an interesting route for Node core to explore: an error hierarchy like Java's or .NET's, with IOError and NetworkError and so on. But that will not affect the promise translation from synchronous error stacks into asynchronous error stacks. It can only affect how you decide to recover from errors (or not), at the specific synchronous catch (e) { } or asynchronous .catch(e => { }) recovery points you've chosen to program into your code.
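The hierarchy-plus-recovery-point idea above could look something like this hypothetical sketch (class names like `IOError` are illustrative, not an existing Node API):

```javascript
// A catch site inspects the error type, handles only what it expects,
// and re-throws everything else so bugs keep propagating.
class IOError extends Error {}
class NetworkError extends IOError {}

function handleRequestError(err) {
  if (err instanceof NetworkError) return 'retry';  // operational: retry the request
  if (err instanceof IOError) return 'report';      // operational: report and continue
  throw err;                                        // anything else is a bug: propagate
}
```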

@chrisdickinson

@domenic While your feedback is appreciated, it strikes me that criticizing the approach at this point is perhaps premature. This is a germ of an approach — one which, if explored in a fuller context, may not ultimately imply a lossy translation between asynchronous and synchronous errors. Before judging this, I'd respectfully ask that you wait until we've written it up in a more complete fashion. Thank you!

@rvagg

rvagg commented Feb 9, 2016

Yeah, this approach does not belong in JavaScript and will not be accepted into the standard. JavaScript as a language propagates all errors up the catchable call stack. This will not be changing.

I hope you're tuned in to how exactly this kind of attitude helps deepen the sense of division between TC39 and those on the outside, particularly Node / systems folk who have no solid representation there. "Your opinions are not welcome, JavaScript is for the browser where different rules are at play."

@groundwater
Owner Author

@rvagg I still believe browser vendors working on dev tools will want to be able to trap on unexpected (note: this is not the same as unhandled) errors and provide a rich debugging environment. I don't know whether that's possible if the logic for handling the unexpected error lives inside a catch block removed from the throw location.

@jfhbrook

jfhbrook commented Feb 9, 2016

fwiw this doesn't read to me like a systems vs browser debate.

@rvagg

rvagg commented Feb 9, 2016

@jfhbrook related to nodejs/post-mortem#16 where it's becoming pretty clear that one of the primary divisions on Promises is between a systems perspective and a higher level application & browser perspective towards dealing with errors. The only time I've encountered this kind of sentiment on the backend is in environments where startup is costly, i.e. the JVM:

Even in the case of syntax errors or other bugs, it's possible to write programs that are resilient

Node has a pretty well established best practice around this and it does not involve keeping applications alive that have unpredictable and unknown states.

@jfhbrook

jfhbrook commented Feb 9, 2016

I suppose you could say that "losing" an error is more dire in a server scenario, but I suspect that ultimately client-side developers run into similar problems, that avoiding this footgun isn't the hardest thing (bluebird's done works), and that it's just a trade-off between unhandled exceptions being guaranteed to crash the app and knowing that your errors are "caught" in one place. At least, that's how I see it.

@benlesh

benlesh commented Feb 9, 2016

For context, is the heart of this issue the inability to perform accurate post-mortem on caught exceptions in Node? Because they lose the call stack and context as soon as they're caught? If so, would it be worth fixing that? (If that's possible)

@trevnorris

Different programs have different thresholds for recovery

I can appreciate the thought that "all errors (exceptions) are recoverable, but if not then we forcefully crash". In all honesty I enjoy the academic exercise of writing code that can always recover. Except that JS is very lax, and it is easy to create side effects within the application, even when the code runs smoothly. When a user explicitly throws an error it can be seen as a contract: "we know the application is in a bad state and we have prepared it to continue in case you decide to proceed". But when the VM throws an exception, the author may not be aware of the issue and may have been unable to clean up, leaving an unknown state that can improperly affect how the application continues to run.

The way around this is wrapping all code segments in try/catch blocks and preparing to clean up in case something goes wrong, but the reality is that not everything is wrapped in a try/catch. Defensive programming would then logically dictate that, in the case of an exception the author did not explicitly throw, we assume the application is in a bad state and must be brought down.

@chrisdickinson On the constructive side, I'd think it more practical to only allow VM thrown exceptions to forcefully bubble. Take for example the following:

'use strict';
const fs = require('fs');
const json = JSON.stringify({filename: process.argv[2]}) + 'a'; // the trailing 'a' deliberately corrupts the JSON

function readFilePromise(path) {
  return new Promise(function(res, rej) {
    fs.readFile(JSON.parse(json).filename, function(er, data) {
      if (er) return rej(er);
      return res(data);
    });
  });
}

This will throw a SyntaxError that is allowed to be caught and passed to the rejection callback (one of the few examples I've found in v8 that allows a SyntaxError to be caught, actually). In contrast, say the user is parsing data and finds invalid input, in which case a SyntaxError can be explicitly thrown.

These two cases demonstrate that it's most important to distinguish between exceptions the author knows about and those they don't. In short, I don't believe it would be a worthwhile endeavor to figure out how to forcefully bubble certain types of errors. Instead, bubble depending on where those errors originate.

EDIT: I'm mostly referring to:

In JavaScript one could create a top-level name UncheckedError that when thrown bypasses the default catch clause.

Unless you're suggesting that VM thrown errors also inherit from UncheckedError to also allow them to bubble. In that case nevermind what I've said.

@DonutEspresso
Contributor

@Blesh That's the 10,000ft view.

The conversation that originally gave rise to this document was primarily around the usage of throw/try/catch as the primary mechanism for communicating operational (expected) errors. The concerns hinge around:

  • try/catch by its nature collapsing operational/programmer (expected/unexpected) errors into the same communication channel, which actually wouldn't be such a problem were it not for
  • the lack of real types in JS, and the lack of fine grained catch construct which make it near impossible to safely discern operational errors from programmer errors
  • assuming we can safely discern the two classes of errors, necessarily unwinding the stack to determine if something can be handled, or if it should be rethrown (bad for post-mortem)

To be clear, these are problems that exist with the try/catch construct even without bringing asynchrony into the picture. I dare say, being able to differentiate between these two classes of errors is the primary reason most enterprise Node.js deployments prefer the use of callbacks as the primary channel for communicating operational errors, with throw being reserved for "something catastrophic happened, I don't know what to do, time to bring the process down". Unfortunately, without some support at the language level it seems unlikely that we'd be able to resolve all the above concerns.

The subtext here is that abstractions like Promises, async/await, and I believe even Observables too (please correct me if I'm wrong, you'd know much better!) use throw/try/catch as their primary communication channel. Thus, one of the primary purposes of this document was to document and surface these concerns, such that we could begin a conversation around how to potentially address these at a language level (if at all possible).

@benlesh

benlesh commented Feb 9, 2016

@DonutEspresso yes, Observables indeed try/catch errors in user-provided methods such as handlers or projection functions. In that regard, the original callstack (important for post-mortem) would be lost. They do, however, throw if the error is unhandled, but that's well after important context would have been lost.

@greim

greim commented Feb 9, 2016

Allowing some errors to bypass catch feels like a drastic break from tradition. Maybe make it opt-in? Just thinking out loud, but: try/check and a Promise#check() which, for example, would only handle "checked" errors, while catch continues to work as usual.

@benjamingr

Thanks for pinging me, @stefanpenner.

I raised similar concerns at this esdiscuss thread a few years ago.

In Bluebird, we have typed catch clauses as well as predicate catch clauses. I just ran a grep on my code base and I have exactly three general untyped .catch clauses in my code (almost everything is either a .catch(IOError, ...) or a .catch(OperationalOnly, ...), etc.), so in my code all syntax errors are "thrown". I solve this at the promise-library level.
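For readers unfamiliar with the pattern, Bluebird ships `.catch(Type, handler)` and predicate catches built in; the same idea can be approximated for native promises with a small hypothetical helper like this sketch:

```javascript
// A minimal native-Promise approximation of Bluebird's typed/predicate catch.
// Unmatched errors are re-thrown so they keep propagating to outer handlers.
function catchOnly(typeOrPredicate, handler) {
  return err => {
    const isType = typeof typeOrPredicate === 'function' &&
                   typeOrPredicate.prototype instanceof Error;
    const matched = isType ? err instanceof typeOrPredicate
                           : typeOrPredicate(err);
    if (!matched) throw err; // e.g. a SyntaxError still bubbles
    return handler(err);
  };
}

// Usage: somePromise.catch(catchOnly(TypeError, recover))
```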

I think Node should go ahead and raise the issue of debugging asynchronous code with try/catch to the TC. We should also see what other languages that have adopted promises like Python did. This is a really important discussion to have.

Of course, @domenic is also 100% right: there will never be checked/unchecked exceptions in JavaScript. It has been discussed before and will not ever happen; it's simply impossible at this point. Things like compilation errors can and should be caught by static-analysis tooling.

What we need to work on instead is typed catches, predicate catches, or pattern matching. This could be an opportunity for Node to step up and make a big positive impact on the language itself. I don't know where we currently stand, but Node needs representation in TC39, and these concerns absolutely need to be raised. They greatly influence how we write JS.

@trevnorris

The way around this is wrapping all code segments in try/catch blocks and preparing to cleanup in the case something goes wrong, but the reality is not everything is wrapped in a try/catch.

That's actually not typically the case with promises, where recovery is simple and pretty trivial. If I find a rare syntax-error path in a server written with promises, I can have my fun day running code that gives me 500 errors. If I do that in a regular "crash on error" node server, I can effectively create a DoS myself by sending fewer than 100 requests per second to that path. Promises are a lot safer in that regard IMO.

@chrisdickinson

@trevnorris:

Unless you're suggesting that VM thrown errors also inherit from UncheckedError to also allow them to bubble. In that case nevermind what I've said.

I tend to agree that VM-thrown errors should be UncheckedErrors. What's in the document now deliberately leaves whether VM errors inherit from UncheckedError a bit fuzzy, the idea being that it could be application-configurable (which would address @greim's concern.) I think this ends up looking a lot like @domenic's proposal here: VM-thrown errors should be "marked" differently than normal errors, with the addition that they interact with catch clauses in a configurable fashion. That forms the core of the idea here. (Thanks @benjamingr, @stefanpenner for linking that conversation here!)

@trevnorris

@benjamingr

If I find a rare syntax error path in a server written with promises then I can have my fun day running code that gives me 500 errors.

I know it was just an example, but to be clear, most SyntaxErrors won't give you a chance to send a 500; it's an instant application crash. Also, just because code is wrapped in a Promise doesn't mean the application can frolic along after a VM exception. It completely depends on your library's ability to clean up after itself. The issue is side effects and unknown state. Unless we trust our source to clean up properly every time, the application must be brought down, and if the VM threw an error that bubbled to us then it's unlikely that the library was able to clean up after itself.

@chrisdickinson That's decent. It would be a good start having a specified way to determine whether an exception's point of origin was the VM or not.

@stevemao

stevemao commented Feb 9, 2016

@benjamingr

it has been discussed before and will not ever happen. It's simply impossible at this point.

Why is that?

Also, what's the difference between "typed catches, predicate catches or pattern matching" and "Checked/Unchecked Errors"? With typed catches, you need to mark the error. Checked/Unchecked Errors would be a kind of mark? Why one is possible the other is not?

Maybe it's for backward compatibility?

@Raynos

Raynos commented Feb 9, 2016

An idea like this would address a lot of concerns.

We don't have to change the semantics of "catch" though. We can just brand errors and use that to change the semantics of the error constructor. All we have to do is call C's abort() in the unchecked error constructor if abort-on-unchecked-error exists.

As far as tc39 is concerned, we just have to spec which errors are unchecked and how an application can write a "window.onuncheckedcreated" handler, which for node can do whatever we want and for browsers can do whatever they want.

@littledan

I share the skepticism in this thread about the idea of skipping catch blocks for certain types of existing errors. Probably the most straightforward programmer error is a SyntaxError in an eval. However, we (V8) for one take advantage of catching this extensively in our testing, and I wouldn't be surprised if the web depends on catching a SyntaxError from eval'ing an XHR result. There are a lot of polyfill libraries, and they sometimes end up catching programmer errors in the course of feature detection (@zloirock @ljharb thoughts?).

Ultimately, I'm not sure whether an error being a programmer error or an application error is definable in a context-independent way with a brand, or if it depends on the kind of usage :(. Some sort of catch clause for only certain types of errors sounds like a good way forward for some of these issues; I think most code generally only wants to catch one type of error and is expecting a certain one, and all the others (whether programmer errors or not) should be propagated through.

One particular concern about catches that are dependent on being an instance of a particular class representing certain exception types is whether instanceof is appropriate at all, given npm semantics. The claim I've heard is that even with new versions of npm de-duplicating imports of the same version, you may have multiple dependencies of different versions of the same library, causing an instanceof check to fail and your catch guard not having the intended behavior. This concern has led to skepticism of basing new language features on instanceof, even though that's the most obvious basis for a guard.
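One hypothetical workaround for the instanceof problem is branding errors with a symbol from the global registry instead of relying on class identity: `Symbol.for()` returns the same symbol to every deduplicated copy of a library (and across realms in the same agent), so the check survives duplication. A sketch, with an illustrative registry key:

```javascript
// Brand-based check instead of instanceof (names are illustrative).
const OPERATIONAL = Symbol.for('example.operational-error');

function markOperational(err) {
  err[OPERATIONAL] = true; // any copy of the library sets the same symbol
  return err;
}

function isOperational(err) {
  return err != null && err[OPERATIONAL] === true;
}
```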

I also want to clarify that TC39 certainly does make an attempt to think about non-browser embedding environments for Java^H^H^H^HECMAScript, though we could use your advice! If the Node community has clear consensus on a particular issue and this is communicated to committee members before a meeting, that can be taken into consideration when making decisions.

@spion

spion commented Feb 9, 2016

@trevnorris this has come up again and again. I'm not sure how we can move this discussion forward, but there are two main points I'd like to stress:

  1. The fact that thrown errors crash a process does not guarantee that resources have been properly cleaned up in all circumstances
  2. When writing code that uses Promises, there is a technique whereby you can actually guarantee that a resource has been cleaned up, even in the face of thrown errors, in all circumstances.

Let me try and explain these two claims:

An example that is usually given as proof that promises do the wrong thing is this one (taken from https://github.com/mafintosh)

var pool = [stream1, stream2, stream3]

// ...

somePromise.then(function () {
  // lets cycle the streams so we spread out the load
  var s = pool.shift()
  s.writee('hello') // oops i made a typo
  pool.push(s)
  return someAnotherPromise
})

The argument here is that promises will swallow up the error caused by the typo, resulting with the pool never reclaiming the stream. If this operation is repeated a few times, the pool of resources will be exhausted and the process will be alive but no longer be able to serve requests.

This example illustrates the problem very well. I'd like to split the "Promises swallowing programmer errors = bad" concern into 4 concerns:

  1. Promises don't show the exception / stack trace anywhere. This prevents the developer from diagnosing the problem
  2. Exceptions can bubble up the call chain. This can hide the source of the error as we will need to inspect the entire call stack
  3. Since promises catch exceptions, programmer errors within a callback library can result with resource starvation / invalid state when promisified.
  4. It should not be possible to handle some exceptions using try/catch. Calling a non-existent method is one such example.

All these concerns are valid. I want to show, however, that the problem is not Promises, and the solution is not to add uncatchable errors to the language. The problem is instead improper resource/state management and a fuzzy notion of programmer vs operational errors. And also that Promises provide guarantees that enable us to write code that is more robust.

But first, let me get concern (1) out of the way: this is completely correct. In node, the unhandledRejection handler should by default log the error to stderr. It's possible to write promise code in a way that synchronously attaches an error handler in the vast majority of cases, and the tiny minority where it isn't is not important enough to lose debugging info.

That aside, let's address the rest of the concerns.

Consider this modification of the original example:

function doSomething(arg) {
  var s = pool.shift();
  // ...
  // more code here
  // ...
  doSomeOperation(arg, (err, res) => {
    if (err) return done(err)
    // ...
    // lots of more code
    // ...
    s.write(res)
    pool.push(s)
  })
}

This example has a similar resource starvation problem. Here the user has failed to properly handle the error condition when running doSomeOperation. As a result, the connection is never released back to the pool. Even though try/catch is no longer in the picture, we still have the same resource management problem.

It's true that exceptions make the danger even less obvious. Any line may throw an error, and there could be many reasons why that error has been thrown.

However, it can be very easy to miss the problem in the above code too. The chance of making the error may be smaller, but it's definitely possible.

The key takeaway here is that this is a resource and state management problem, not an error handling problem. We want a sure way of handling resources that guarantees that no matter what happens, the connection will be properly released to the pool. Is that possible?

With promises it turns out that yes, indeed it is! Here is how we would do it with Bluebird:

function getStream(pool) {
  return Promise.resolve(pool.shift()).disposer(s => pool.push(s));
}

somePromise.then(_ => Promise.using(getStream(pool), s => {
  // ... code
  s.writee('hello');
}))

The basic mechanism used in Bluebird is simple and described in the "context managers and transactions" article. More details about Bluebird's more advanced, safer implementation are available in Bluebird's disposer documentation.

This mechanism only works because Promises catch all exceptions. No matter what happens within the passed closure, pool.push(s) will always be called. Only if the push itself fails (by throwing, or even by returning a rejected promise) will Bluebird crash the process. It's now impossible to forget to clean up.
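The same guarantee can be sketched with nothing but native promises (a hypothetical helper, not Bluebird's actual implementation): acquire a resource, run the work, and release on every path, including a synchronous throw inside the work function.

```javascript
// Vanilla-Promise sketch of a disposer-style guarantee.
function withResource(acquire, release, use) {
  return Promise.resolve(acquire()).then(resource =>
    Promise.resolve()
      .then(() => use(resource))          // sync throws become rejections...
      .finally(() => release(resource))   // ...so release always runs
  );
}
```

Because the typo inside `use` becomes a rejection rather than an unwound stack, the `finally` step still returns the stream to the pool before the error propagates.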

Which brings us to the 3rd concern: if you promisify a callback-based library, won't this result in breakage and resource starvation?

The answer is: yes, but only when the following statements are true:

  1. the library must have a bug
  2. the bug must be synchronous, i.e. the typo/exception must be thrown in the same execution context as the line where the resource is acquired.

Let's try and come up with such an example:

Consider the function:

var pool = [stream1, stream2, stream3]
// ...
function sendQuery(data, cb) {
  // lets cycle the streams so we spread out the load
  var s = pool.shift()
  s.writee('begin;') // oops i made a typo
  var totalData = "";
  s.write(data)
  s.on('data', function handleData(data) {
    totalData += data;
    if (queryProperlyHandled(totalData)) {
      s.removeListener('data', handleData);
      cb(null, totalData); totalData = "";
    }
    pool.push(s)
  })
}

Just how many ways are there for things to go wrong here? Too many to list! Even if we fixed the obvious typo that causes an exception, we have 2 potential problems:

  1. now that pool.push() is done on the next tick, we need to check whether pool.shift() returned a null value and handle that case by waiting for a new connection.
  2. We also don't handle the case where the stream has ended - we need to create a new connection and add it to the pool, and so on.

So what will not promisifying this buy us? A process crash for the first problem, but nothing for the second one. It seems crashes only sometimes, and accidentally, help us with the problem. They don't really solve the problem of resource/state management in all cases.

We still have the option to promisify the function in a "safe" way: when a sync error is caught calling sendQuery, it's rethrown out-of-band using setImmediate. This preserves the library's expectation that it won't be called within try ... catch. Which brings us to another point: it's not generally safe to directly use callback-based functions within the then callback of a promise-based function.
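The "safe" promisify described above could be sketched like this (a hypothetical helper; asynchronous errors become rejections as usual, while a synchronous throw from the wrapped function is rethrown out of band):

```javascript
// Sketch of a "safe" promisify: sync throws escape the promise machinery.
function safePromisify(fn) {
  return (...args) => new Promise((resolve, reject) => {
    try {
      fn(...args, (err, res) => (err ? reject(err) : resolve(res)));
    } catch (syncErr) {
      // Rethrow on the next tick so the process crashes, as it would
      // have without the promise wrapper.
      setImmediate(() => { throw syncErr; });
    }
  });
}
```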

In practice, however, most libraries immediately do something asynchronous, such as:

function sendQuery(data, cb) {
  // lets cycle the streams so we spread out the load
  pool.acquireConnection(function(err, s) {
    if (err) return cb(err);
    s.writee('begin;') // oops i made a typo
    var totalData = "";
    s.write(data)
    // ... more code
    pool.release(s)
  })
}

In these cases, the process will crash even if we promisify sendQuery. Promises cannot catch exceptions that are more than 1 level deep within a callback library. This means we don't need the special safe wrapper.

Finally, the 4th concern

It should not be possible to handle some exceptions using try/catch. Calling a non-existent method is one such example.

I'd like to show that this concern, while valid, is also misguided. The whole problem here is that an improper mechanism is used to signal programmer errors.

We have to decide whether we want JS to be a dynamic language with powerful meta-programming facilities or not. This is not a black and white thing, so we also want to decide how far we're going to go. Are we going to allow the programmer to recover from calling a non-function as a function? Are we going to forbid recovery from attempting to access the property of a reference that points to undefined? Are we going to forbid access to non-existent object properties unless dictionary syntax is used? These are all valid questions.

But it's misguided to declare the entire exception mechanism as off-limits. Functions may throw for entirely different reasons. Take for example:

function crop(image, x1, y1, x2, y2) { /* throws if x1,y1,x2,y2 are out of bounds */ }

Is this a good reason to throw? That depends on what your interpretation of "programmer error" is. This may or may not be an obvious programmer error; we may be able to handle the case where the crop fails. If we are passing the coordinates 3 layers down our API, do we really need to check at every layer whether the coordinates are out of bounds and throw an error, except at the top-most layer where we do something different (send a JSON response with the error)? This quickly becomes unworkable, which is precisely the reason why JS has exceptions that bubble automatically.

There are other ways to check for typos and similar errors. We have TypeScript and Flowtype, and both handle the most severe examples that are usually given (such as s.writee above).

I'd also like to stress that promises introduce an entirely different programming model. This model is not directly compatible with node conventions. Unfortunately, it would seem that node conventions may be somewhat misguided and stem from missing solutions to node's existing problems with resource management, as well as from blurring the distinction between exceptions and panics by using exceptions for both purposes in different situations. Fortunately, it's possible to promisify buggy libraries that follow these conventions in a "safe" way.

And I realize that this comment is way too long, but I wanted to show that us promise users are well aware of node conventions regarding errors and the concerns being raised. But we think we have a very good solution, and amending that solution will probably break it, without fixing node-related problems.

@Fishrock123

Is this a good reason to throw? That depends on what your interpretation of "programmer error" is. This may or may not be an obvious programmer error

There is a strong node core paradigm that invalid arguments are a programmer error. This is nothing new and is unlikely to change.

@benjamingr

@Fishrock123

There is a strong node core paradigm that invalid arguments are a programmer error. This is nothing new and is unlikely to change.

Invalid arguments are not always trivial. Of course (null).foo() is an obvious programmer error, but @spion's examples were things like:

function crop(image, x1, y1, x2, y2) { /* throws if x1,y1,x2,y2 are out of bounds */ }

Where it's not obvious if it's a programmer error. These scenarios might even be:

```js
function crop (image, x1, y1, x2, y2, cb) {
  // returns cb(err) if x1, y1, x2, y2 are out of bounds
}
```

Where it's a programmer error that is not even detected synchronously. Callbacks arguably do a worse job in these cases than promises, since you'd have to propagate the error manually.
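To make the crop example concrete, here is a minimal sketch (all names are hypothetical stand-ins, not an actual API) of a promise-returning variant where the validation error simply propagates through intermediate layers without manual checks:

```javascript
// Hypothetical promise-returning crop(): out-of-range coordinates reject
// the returned promise, so intermediate layers don't need to re-check.
function crop (image, x1, y1, x2, y2) {
  return new Promise((resolve, reject) => {
    if (x1 < 0 || y1 < 0 || x2 > image.width || y2 > image.height) {
      return reject(new RangeError('crop coordinates out of bounds'))
    }
    resolve({ width: x2 - x1, height: y2 - y1 })
  })
}

// The rejection bubbles up automatically; only the top layer handles it.
crop({ width: 100, height: 100 }, 0, 0, 200, 50)
  .then(result => console.log('cropped to', result.width, 'x', result.height))
  .catch(err => console.log('caught:', err.message))
```

Whether the out-of-bounds case is a programmer error or an operational one, the propagation cost is the same: zero for the layers in between.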

@zkat

zkat commented Feb 10, 2016

@groundwater, to give an idea of how this general idea extends to other, maybe more complex cases, consider a CSV parser:

```js
const data = await parseCSV(stream, {
  unexpectedEof (line, recoveries) {
    if (line.length === EXPECTED_LENGTH) {
      return recoveries.injectString('\n') // Our CSV files don't always end in \n so we inject one to let it finish
    }
  }
})
```

Implemented like:

```js
stream.on('end', () => {
  if (!parser.currentLine.complete && parser.restarts.unexpectedEof) {
    parser.restarts.unexpectedEof(parser.currentLine, {
      injectString (str) {
        parser.push(str) // triggers more parsing
      },
      rejectLine () {
        parser.data.pop()
        parser.currentLine = parser.data[parser.data.length - 1]
      }
    })
  }
  if (parser.currentLine.complete) {
    parser.data.push(parser.currentLine)
    parser.done()
  }
})
```

This kinda extends the above to actually expose a recovery API to do specific things to "restart" the internal, mostly-opaque process.

@benlesh

benlesh commented Feb 10, 2016

Why are we not proposing to fix Error to support an innerError property and argument, similar to what Java and .NET have done? It would eliminate some of the problems with post-mortem debugging, would it not?

```js
try {
  doSomething()
} catch (err) {
  if (err instanceof SomeExpectedError) {
    fixStuff()
  } else {
    throw new Error('bad things', err) // proposed innerError-style signature
  }
}
```

Then in post-mortem folks could analyze the innerError property of the error and get the original call stack. Even more interesting, they could get call stack information for error origination points deep in dependencies if the dependencies are implemented properly.

It also seems like something that would be easy to start doing as general practice in Node today. Just a custom Error with an innerError property, after all.
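To illustrate, a minimal sketch of that general practice as it could be written today (WrappedError is a hypothetical name; the built-in Error takes no second argument here):

```javascript
// A custom error class that carries the original error as `innerError`,
// preserving the original call stack for later analysis.
class WrappedError extends Error {
  constructor (message, innerError) {
    super(message)
    this.name = 'WrappedError'
    this.innerError = innerError // the original error, with its own stack
  }
}

try {
  try {
    JSON.parse('not json') // the original failure point
  } catch (err) {
    throw new WrappedError('bad things', err)
  }
} catch (err) {
  // Analysis can walk err.innerError.stack back to the origination point.
  console.log(err.innerError instanceof SyntaxError) // true
}
```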

@gx0r

gx0r commented Feb 10, 2016

@Blesh good point, there is the https://github.com/davepacheco/node-verror module for adding causes to errors

@chrisdickinson

@Blesh @llambda: The error trace isn't the sole thing that the post-mortem folks want to preserve. abort() freezes the state of the program at the time of the call and serializes it into a file on disk that can be manually inspected using gdb, lldb, or mdb. The serialized stack will contain parameter info — the arguments each stack frame was called with. Additionally, it is ideal for the serialization to happen at the time the unexpected state was entered, so it's possible to make the assertion that nothing changed between the error and serialization. While the error object could be decorated with private-to-VM stack parameter info, we still wouldn't be able to make guarantees about the time delta between entering the error state and serializing the program. By the time the error object is intercepted, the stack has been unwound and we have "left" the unexpected state.

@groundwater
Owner Author

@chrisdickinson excellent description! We should probably add this clarification to the document. I have run into confusion between "stack" and "stack trace" with folks before.

@misterdjules

@chrisdickinson I share @groundwater's opinion, this is a great description, thank you very much for taking the time to write this!

I've started writing a summary of the challenges of using promises with post-mortem debugging, including the description of a potential solution that I wanted to play with and that, to my knowledge, hasn't been described/presented yet.

My goal is for that document to be used by anyone to get a clear understanding of the topic from the point of view of post-mortem debugging tools users/maintainers. It does not touch on the broader topic of the design of the promises' error model and how it could be changed/worked around/adapted on purpose. I know this is a discussion you've both been having for a while, and it's great, but it's not in the scope of that document.

I'll post it to nodejs' post-mortem working group too, but I thought that you might be interested in parts of it for this repository.

@chrisdickinson

@misterdjules 💯 Excellent work! If this document is intended for promise users, it's worth noting that most promise use happens within .then() or .catch() handlers — that is, a typical Promise user would not expect to use new Promise((resolve, reject) => {}) during normal use. The new Promise is usually only found at the source of asynchrony, which in this case is usually Node. Another note: the behavior of your last example with regards to --abort-on-uncaught-exception is, I think, actually pretty much ideal — at the moment, programs written in Node using promises can't expect that the unhandledRejection handler installed by the top-level application user won't shut down the process, so this just reinforces the existing "don't rely on unhandledRejection to solve race conditions" behavior.

As a general note for readers: the recovery object proposed here is a layer separate from Promise itself. It's a Node-API level concern which shouldn't affect the Promise spec, instead giving users who prefer to avoid exceptional control flow when using async/await a method to do so. Changing the spec of promises with regards to error handling is unlikely to be possible without breaking existing JavaScript code. Given the lengths TC39 has gone to in the past to avoid that outcome, it seems unlikely that we'd be able to make breaking changes to Promise for our purposes (I could, of course, be missing something, though!)

@misterdjules

@chrisdickinson

If this document is intended for promise users, it's worth noting that most promise use happens within .then() or .catch() handlers — that is, a typical Promise user would not expect to use new Promise((resolve, reject) => {}) during normal use. The new Promise is usually only found at the source of asynchrony, which in this case is usually Node.

Good point, I'll add an example that throws from within a .then() handler.

Another note: the behavior of your last example with regards to --abort-on-uncaught-exception

Which example is that?

Anyway, I realize I should have submitted that as a separate issue to this repo, I don't want to derail the current discussion about this PR. @groundwater Please let me know whether you think it's worth it and I'll be happy to do so.

@chrisdickinson

@misterdjules Ah, sorry, I should have said "your final note on shortcomings" — specifically, that the behavior of the program changes with respect to --abort-on-uncaught-exception, aborting with the flag on and continuing with the flag off. This parallels the existing situation in the ecosystem — as a package author, if your code relies on unhandledRejection, you don't know if the application-level consumer of your package has chosen to install something like process.on('unhandledRejection', () => process.abort()).

In other words, it seems like we've already made it best practice to avoid hitting unhandledRejection unless absolutely necessary, since package authors don't know if consumers of their package will choose to crash the program on unhandledRejection. Given that any promise-using program that leans on unhandledRejection can be rewritten to avoid doing so, it seems like maybe we should standardize that crashing behavior? (Maybe also changing Node's copy of Promise.reject to reject on next tick to avoid turning that method into "automatically crash the process"?)
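A minimal sketch of the reject-on-next-tick idea floated above (an illustration of the suggestion, not Node's actual implementation):

```javascript
// Defer the rejection to the next tick, so a handler attached
// synchronously by the caller is always in place before any
// crash-on-unhandled-rejection policy could fire.
Promise.reject = function (reason) {
  return new Promise((resolve, reject) => {
    process.nextTick(() => reject(reason))
  })
}

// The caller attaches .catch() in the same tick; the rejection only
// materializes afterwards, so it is never observed as unhandled.
Promise.reject(new Error('expected failure'))
  .catch(err => console.log('handled:', err.message))
```

Under this sketch, Promise.reject stays useful as an API rather than becoming "crash the process immediately".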

@benjamingr

@Blesh we don't actually need an innerError for similar reasons, since stack traces in JS are generated at the error creation site. It would be great to have for lots of other things, and I think everyone agrees node needs better and more descriptive errors.


@chrisdickinson

Given that any promise-using program that leans on unhandledRejection can be rewritten to avoid doing so,

That's a really interesting idea.

While I personally don't use unhandledRejection in my code for anything except actual errors, which means I can throw in it (causing an abort), this could definitely break people's flow.

I'm thinking about the following realistically (as for #16 and #18) for a user story with post-mortem debugging:

  • Have an --abort-on-sync-unhandled-rejection-once flag similar to what @Raynos suggested (abort-on-unhandled-exception-once). Like him, I'm not sure if this falls in scope.
  • That terminates the process with a heap dump on any rejection of a promise if the rejection handler is not added synchronously. This effectively means "has not already been added", since then callbacks run after the catch handlers have been added, provided those are added synchronously. This requires a slightly stricter guarantee than the slack we give in unhandledRejection; in practice I just checked, and my code and some packages I use behave well.
  • Any following rejections don't trigger an abort - so services remain highly available. Resource cleanup gets performed and the process remains in a consistent state. An engineer updates the code and fixes the issue and the process is run again.

Advantages:

  • Terminating it synchronously means we get a decent heap dump, post-mortem people are happy.
  • Promise code isn't changed, promises users are happy.
  • Provides a way to write highly available NodeJS code, fixing the issue described in Responsibly aborting in node nodejs/post-mortem#18 .
  • Very small API surface, a single flag is added.

Disadvantages:

  • "Misbehaving" promise code can't be used, this flag requires people to write their promise code in a certain way if they want post-mortem debugging.
    • It might be possible to mitigate that through starting to warn on unhandledRejection although that won't always fix it.
  • Doesn't give us post-mortem debugging for every request but rather only the first. I think this is the way forward, since otherwise you're exposed to a DoS attack.
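A small sketch of the discipline this flag would demand (the flag itself is hypothetical; the process.on handler below only suppresses Node's default reaction so the late attachment can be observed in a plain run):

```javascript
// Suppress the default unhandled-rejection reaction for this demo so the
// late attachment below can be observed instead of crashing the process.
process.on('unhandledRejection', () => {})

// Attached in the same tick as the rejection: safe under the proposed flag.
const safe = Promise.reject(new Error('a'))
safe.catch(() => console.log('attached synchronously, would not abort'))

// Attached a tick later: under the proposed flag the process would have
// aborted with a heap dump before this handler ever ran.
const risky = Promise.reject(new Error('b'))
setTimeout(() => {
  risky.catch(() => console.log('attached too late for the proposed flag'))
}, 0)
```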

@benlesh

benlesh commented Feb 11, 2016

@Blesh @llambda: The error trace isn't the sole thing that the post-mortem folks want to preserve. abort() freezes the state of the program at the time of the call and serializes it into a file on disk that can be manually inspected using gdb, lldb, or mdb.

Yeah, I was talking to @jhusain and he pointed that out. :\ They need the dump.

@groundwater
Owner Author

Let's move the discussion on @zkat's "recovery object" to #6.

@misterdjules this thread has wandered but it's been mostly constructive and beneficial. Your comments are always welcome!

@groundwater groundwater merged commit 4a2e9c5 into master Feb 11, 2016
@groundwater groundwater deleted the groundwater-patch-1 branch February 11, 2016 17:51
@groundwater
Owner Author

Merged this mostly as-is, but added comments about the strong opposition from TC39 members.

@benjamingr

I don't believe discussion has been exhausted - any particular reason for the eager merge?

@misterdjules

@chrisdickinson

Ah, sorry, I should have said "your final note on shortcomings" — specifically, that the behavior of the program changes with respect to --abort-on-uncaught-exception, aborting with the flag on and continuing with the flag off. This parallels the existing situation in the ecosystem — as a package author, if your code relies on unhandledRejection, you don't know if the application-level consumer of your package has chosen to install something like process.on('unhandledRejection', () => process.abort()).

Absolutely, excellent point. Is the unhandledRejection event part of the promises standard? It doesn't seem like it, so I imagine it's a Node-ism. Is that Node-ism accepted by the broader community of promises users?

Given that any promise-using program that leans on unhandledRejection can be rewritten to avoid doing so, it seems like maybe we should standardize that crashing behavior?

I'm not sure I understand what you mean. Do you mean that node should, by default, set an unhandledRejection handler that aborts?

(Maybe also changing Node's copy of Promise.reject to reject on next tick to avoid turning that method into "automatically crash the process"?)

Do you mean always rejecting on next tick so that any node program can setup a custom unhandledRejection event handler in case the default that aborts is not suitable for their use case?

@benjamingr

As the person who specified "unhandledRejection" for node (with the help of much smarter people who will likely refuse to participate at this point), let me clarify.

It is a node-ism, but browsers fire the same event — Node led here, but browsers have added this event, the WHATWG has specified it, and Chrome 49 ships with it.

It is inherently incompatible with post-mortem debugging. It waits for the microtask queue to flush before reporting the unhandled rejection which is useful for the event but not for this problem.

That's fine, because it is built in Node on top of a lower level v8 hook that can be used to build the hook we need - I describe that here #5 (comment)

I'll be happy to answer any related questions about it when I get to my PC. Mobile GitHub is bad.
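A short runnable demonstration of the timing problem described above: unhandledRejection is emitted only after the microtask queue drains, so by the time it fires, the stack that produced the rejection is long gone.

```javascript
// The event handler runs a full tick after the rejection was created.
process.on('unhandledRejection', reason => {
  console.log('2: reported after the tick:', reason.message)
})

Promise.reject(new Error('boom'))
console.log('1: synchronous code keeps running, state keeps changing')
```

This is exactly why the event is useful for logging and cleanup, but too late for an abort() that wants to freeze the offending state.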

@misterdjules

@benjamingr

It is a node-ism, but browsers fire the same event — Node led here, but browsers have added this event, the WHATWG has specified it, and Chrome 49 ships with it.

Thank you for the info, I didn't know that.

It is inherently incompatible with post-mortem debugging. It waits for the microtask queue to flush before reporting the unhandled rejection which is useful for the event but not for this problem.

Just to make sure we're on the same page, I'm well aware of that, as I described in the document I linked to.

That's fine, because it is built in Node on top of a lower level v8 hook that can be used to build the hook we need - I describe that here #5 (comment)

By "lower level v8 hook", do you mean SetPromiseRejectCallback or something else? In your description I don't see anything that describes the implementation, which is essential to determining if it would suit the needs of post-mortem debugging users.

Have you looked at the implementation of the potential solution I described earlier? How does that differ from what you have in mind?

@benjamingr

@misterdjules @chrisdickinson @groundwater I made a gist here: https://gist.github.com/benjamingr/3fb97eda723ff2a030d1

No implementation discussion, but basically this can be achieved by subclassing Promise (which is supported) rather than changing the v8 internal code. If we change v8's code, we might as well use a much faster and leaner promise library.
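For readers who haven't opened the gist, here is a minimal sketch of the subclassing idea (this is not the gist's code; DebugPromise and onReject are hypothetical names): a Promise subclass that intercepts rejection synchronously, at the point where a post-mortem hook could still abort() with the bad state on the stack.

```javascript
class DebugPromise extends Promise {
  constructor (executor) {
    super((resolve, reject) => {
      executor(resolve, reason => {
        // A hypothetical hook runs here, synchronously, before the
        // rejection is recorded and before the stack unwinds further.
        if (DebugPromise.onReject) DebugPromise.onReject(reason)
        reject(reason)
      })
    })
  }
}

DebugPromise.onReject = reason => console.log('rejecting:', reason.message)

new DebugPromise((resolve, reject) => reject(new Error('boom')))
  .catch(() => {}) // handled later; the hook already saw it synchronously
```

Note that `.then`/`.catch` also construct DebugPromise instances (via the species mechanism), which is part of why leaking subclass details is a concern.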

@misterdjules

@benjamingr Thanks for sharing that. I commented in https://gist.github.com/benjamingr/3fb97eda723ff2a030d1#gistcomment-1695107.

this can be achieved by subclassing Promise (which is supported) rather than changing the v8 internal code

Ah, that's interesting! Although in my implementation I had to make a change to how microtasks are run, so a change to V8 might still be needed.

If we change v8's code we might as well use a much faster and lean promise library.

I think the benefit in being able to rely on V8's implementation of promises is that it's a common denominator that the rest of the ecosystem can rely on when interoperability between modules using promises is needed, but I may have missed something.

@gx0r

gx0r commented Feb 11, 2016

This README is a fantastic writeup of error handling.

BTW, another subtlety with regards to checked exceptions:

```js
var val = foo.get(...)
if (val instanceof NoValue) {
  // no value returned
} else {
  // we have a value!
}
```

At first this seems OK, because we expect NoValue to be thrown only by the .get method. But consider e.g. .get( independentFunction() ).

However, if NoValue happened to be thrown by the independentFunction() (maybe it's 3rd party code that changed with a dependency version bump), our catch clause would inadvertently handle it.
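The same pitfall in try/catch form, with hypothetical stand-ins for NoValue, foo, and independentFunction:

```javascript
// A NoValue thrown by the argument expression is indistinguishable from
// one thrown by .get itself.
class NoValue extends Error {}

function independentFunction () {
  // After a dependency version bump, this starts throwing NoValue too.
  throw new NoValue('thrown by independentFunction')
}

const foo = {
  get (key) { throw new NoValue('thrown by get') }
}

try {
  foo.get(independentFunction())
} catch (err) {
  if (err instanceof NoValue) {
    // We assume .get found nothing, but the error came from elsewhere.
    console.log('treated as a .get miss:', err.message)
  }
}
```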

@chrisdickinson

@benjamingr: Hey — thanks for the write-up. In the original Promises PR I started to implement subclassing of the native Promise (plus replacing it.) It turns out that it's difficult without leaking the details of the subclassing, so I abandoned the approach in favor of making the VM behave as we'd like. In general, subclassing Promise is something I'd like to save as a "last resort", since it does begin to affect other use cases (for example, globally switching out the promise implementation, which is a window I'd like to leave open for users.)

With the way that promises are implemented now, programmer errors — that is, bad parameters passed to the API — are rejected on the same tick as the promise constructor. This means they immediately become unhandled rejections. Operational errors are handed to the recovery object, and if not handled, they will become rejections.

If I understand @misterdjules' approach correctly, it gives us an avenue to produce adequate core dumps with a certain flag enabled, but it does not currently solve for "unhandled rejections should crash the program without a flag". I'd like to propose we proceed with this course of action:

  1. When a native promise rejects, if no user-installed unhandledRejection handler is installed, we crash the program. Currently, package authors can't rely on their consumers not to install a crash-on-unhandled-rejection handler, so it's already best to avoid relying on unhandledRejection behavior.
  2. We patch Promise.reject on Node startup to reject-on-next-tick so that it remains a useful API.
  3. As in @misterdjules' approach, we use a flag (either --abort-on-uncaught-exception or --abort-on-unhandled-rejection) to signal to V8 to abort immediately when no user-installed handlers are present.

This leaves us with:

  1. Performance-concerned users may swap global.Promise = require('bluebird') if desired.
    1. Application-level users who wish to disable the promise API entirely may set global.Promise = null.
  2. async/await users may use the recovery object to avoid using exceptions as control flow for operational rejections.
  3. async/await users may catch programmer errors using try / catch.
  4. Promise users may continue to use exceptions as flow control for operation errors by omitting the recovery parameter.
  5. Application-level users leaning on unhandledRejection currently may restore current behavior by not using --abort-on-* and installing a unhandledRejection handler.
    1. Package-level users doing this were always running the risk of their program not working, this just makes that behavior explicit.
  6. Post-mortem users may enable the --abort-on-* flag and receive expected cores from async/await and Promise users.
    1. I think this even extends to synchronous rejection of the Promise constructor (correct me if I'm wrong.)
    2. This should also cover cases in async/await where a user causes an exception by await-ing a core API call multiple async functions deep, so long as there are no try/catch clauses in the stack.

Does this seem like a workable path? I realize it might be a bit contentious in that we're saying "don't rely on the ability to later handle a rejected promise", but that slots in with the current "de facto" rules of the ecosystem, since doing otherwise already means your package is making demands on its consumers, which is a risky practice.

@chrisdickinson

I'm going to leave all future replies on this thread.

@groundwater
Owner Author

@benjamingr this discussion diverged from the checked/unchecked PR long ago. I tried to summarize the points brought up, and am happy to modify the document if you have a suggestion in another PR. You're welcome to continue the general discussion here, but I don't see how the new API we're discussing would affect the merge.

If people have other ideas on how to solve this, I encourage you to open a PR with a writeup. This document is meant to be inclusive of all ideas, and in the event an idea is unsound, we would like to still document why it's unsound.
