GraphQL error doesn't have stack trace #4474
Oh, besides: without properly identifying client errors we can receive a flood of errors...
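A minimal sketch of one way to tell client errors apart before reporting (the plugin shape and names here are assumptions for illustration, not code from this thread): parse and validation errors carry no `originalError`, so they can be skipped.

```js
const Sentry = require('@sentry/node');

// Hypothetical plugin: only report errors that originate in our own resolvers.
// Malformed queries (parse/validation errors) have no originalError.
const sentryPlugin = {
  requestDidStart() {
    return {
      didEncounterErrors(ctx) {
        for (const err of ctx.errors) {
          if (!err.originalError) continue; // client-caused error, skip it
          Sentry.captureException(err.originalError);
        }
      },
    };
  },
};

// Usage (sketch): new ApolloServer({ typeDefs, resolvers, plugins: [sentryPlugin] })
```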
I have an issue somewhat related to this. I am seeing resolver errors with a stack trace, but it is a masked stack trace: the Error type, stack, and metadata are changed when the error is thrown within a resolver. This is particularly an issue for Sentry, because Sentry groups reported exceptions based on their stack trace; therefore we might see various different types of errors being counted as the same error purely because they have the same stack trace. We do something like this:

```js
didEncounterErrors(ctx) {
  for (const err of ctx.errors) {
    Sentry.captureException(err.originalError);
  }
},
```

The stack is exactly the same for every error coming through this event hook, therefore Sentry thinks they are all the same error.

Original Error (thrown by calling the tedious lib within a resolver):
Same error seen in didEncounterErrors:
Not only do we lose the stack trace, but we also lose the Error type (this prevents us from doing things like filtering on the error class).

An Error manually thrown from a resolver and seen in didEncounterErrors:

This is a completely different error, but because the stack trace is exactly the same it gets reported as the same error in Sentry.
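For illustration, a hedged sketch (a hypothetical listener of the same shape as the hook above, not code from this comment) of how the wrapping GraphQLError's stack differs from the stack of the error the resolver actually threw:

```js
// Hypothetical listener object, used only to log the two stacks side by side.
const stackLoggingListener = {
  didEncounterErrors(ctx) {
    for (const err of ctx.errors) {
      // `err` is the GraphQLError built by the execution layer; its stack
      // points at the GraphQL machinery and looks alike for unrelated failures.
      console.log('GraphQLError stack:\n', err.stack);
      // `err.originalError` (when present) is the error the resolver threw,
      // with the frames that actually identify the failing code.
      console.log('originalError stack:\n', err.originalError && err.originalError.stack);
    }
  },
};
```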
FYI, the way I got these to group in Sentry is by overriding the fingerprint, as such:

```js
didEncounterErrors(ctx) {
  Sentry.withScope(scope => {
    const graphqlKind = ctx.operation.operation; // e.g. query
    const operationName = ctx.operationName;     // e.g. GetUsers
    for (const err of ctx.errors) {
      scope.setFingerprint([graphqlKind, operationName, err.originalError.message]);
      Sentry.captureException(err.originalError);
    }
  });
},
```

But as mentioned above, it is still not ideal because the stack is not reported, therefore we won't be able to pinpoint exactly where this error was thrown from.
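A hedged variation on the snippet above (illustrative only; nothing in this thread confirms it resolves the masking): fall back to the wrapper when there is no `originalError`, and attach the raw stack as extra event context so it is at least visible on the Sentry event.

```js
didEncounterErrors(ctx) {
  Sentry.withScope(scope => {
    // ctx.operation can be undefined for parse/validation failures, so guard it.
    const graphqlKind = ctx.operation && ctx.operation.operation;
    const operationName = ctx.operationName;
    for (const err of ctx.errors) {
      // Prefer the error the resolver threw; fall back to the GraphQLError.
      const original = err.originalError || err;
      scope.setFingerprint([graphqlKind, operationName, original.message]);
      // Attach the raw stack so it shows up even if Sentry's own stack
      // parsing only sees the masked wrapper frames.
      scope.setExtra('originalStack', original.stack);
      Sentry.captureException(original);
    }
  });
},
```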
Update on my issue: it looks like this could be related to an issue in another library, and that workaround worked for me. In the end I managed to get the correct stack as below (note: the Error type is still lost).
Any updates on this? Thanks.
There's a lot of subtlety involved in Apollo Server error handling, and this issue (which was filed during an unfortunate and, I sure hope, one-time period in 2020 when Apollo Server was not being actively maintained) doesn't contain a full reproduction recipe that allows me to see the issue on my machine without taking creative steps. So I'm going to close this as lacking a reproduction. I note that some of the above comments seem to be about schema stitching (which is a graphql-tools project and not part of Apollo Server itself; also, Apollo Server 3 now uses a newer version of graphql-tools) while others don't, so there might be multiple issues here.
Apollo version: "apollo-server-express": "^2.16.1"

Inside `formatError` I'm reporting unexpected errors like this:
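A minimal sketch of what such a `formatError` hook can look like (the schema, the Sentry setup, and the "unexpected error" check are assumptions for illustration, not the issue author's actual code):

```js
const { ApolloServer, ApolloError, gql } = require('apollo-server-express');
const Sentry = require('@sentry/node');

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => {
      throw new Error('boom'); // simulate an unexpected resolver error
    },
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  formatError(err) {
    // Report anything that isn't a deliberate ApolloError; err.originalError
    // (when present) is the error actually thrown by the resolver.
    if (!(err.originalError instanceof ApolloError)) {
      Sentry.captureException(err.originalError || err);
    }
    return err;
  },
});
```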
It works great for errors returned inside my resolvers. However, when the error is a parsing error because my GraphQL query was malformed, the `graphqlError` doesn't have a `stack` property, so reporting it to a monitoring tool like Sentry is awkward because it only has an error name and message (it comes through as a `ServerError`).

So my question is: why doesn't the `graphqlError` have a `stack` property?