chore: Investigating org slug already set to a different value #45134
Conversation
from what we want to set it to.
"""
with configure_scope() as scope:
    if scope._tags and tag_key in scope._tags and scope._tags[tag_key] != expected_value:
The main difference from #45039 is that we also check whether the tag's existing value differs from the one we want to set (`scope._tags[tag_key] != expected_value`).
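For readers without the full diff, here is a minimal, self-contained sketch of what such a guarded check could look like. The helper name `check_tag`, the contents of the `extra` dict, and the overall structure are assumptions pieced together from the visible fragments, not the actual implementation:

```python
import logging

from sentry_sdk import configure_scope

logger = logging.getLogger(__name__)


def check_tag(tag_key: str, expected_value: str) -> None:
    """Log when a tag is already set on the current scope with a different value
    from what we want to set it to.
    """
    with configure_scope() as scope:
        # Only act when the tag exists AND its value differs from the one
        # we are about to set (this is the extra condition vs. #45039).
        if scope._tags and tag_key in scope._tags and scope._tags[tag_key] != expected_value:
            extra = {
                "current_tag_value": scope._tags[tag_key],  # assumed field name
                "expected_tag_value": expected_value,  # assumed field name
            }
            logger.warning(f"Tag already set and different ({tag_key}).", extra=extra)
            # This can be used to find errors that may have been mistagged.
            scope.set_tag("possible_mistag", True)
```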
src/sentry/utils/sdk.py (outdated)
"This is an intended error for investigating what is the root of this problem." | ||
) | ||
except Exception as e: | ||
logger.exception(e) |
The reason I create Sentry errors is that they give me access to the logs.
It would be great if I could make expected errors not show up as "red" like real errors.
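One possible alternative (my suggestion, not something this PR implements) is to capture the expected case as a warning-level message rather than an exception, so it still reaches Sentry but is not rendered at error level:

```python
import sentry_sdk


def report_expected_mismatch(tag_key: str) -> None:
    # capture_message lets us pick the event level explicitly, so an
    # "expected" finding shows up as a warning instead of a red error.
    sentry_sdk.capture_message(
        f"Tag already set and different ({tag_key}).",
        level="warning",
    )
```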
}
logger.warning(f"Tag already set and different ({tag_key}).", extra=extra)
# This can be used to find errors that may have been mistagged
scope.set_tag("possible_mistag", True)
This is what I believe would help most.
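As a usage note (my assumption about how this tag would be consumed; the PR does not spell it out): the SDK stores tag values as strings, so potentially mistagged events could later be pulled up in Issues/Discover search with a query along the lines of:

```text
possible_mistag:True
```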
Tomorrow I will investigate whether the errors are real. If they are not, I will also deploy this.
PR reverted: fb648cc |
I reverted it because it produced too many errors. That proves the point that this is happening, which is the most important part of this PR.
* master: (79 commits)
  - feat(perf-issues): Add performance issue detection timing runner command (#44912)
  - Revert "chore: Investigating org slug already set to a different value (#45134)"
  - fix(hybrid-cloud): Redirect to org restoration page for customer domains (#45159)
  - bug(replays): Fix 500 error when marshaling tags field (#45097)
  - ref(sourcemaps): Redesign lookup of source and sourcemaps (#45032)
  - chore: Investigating org slug already set to a different value (#45134)
  - feat(dynamic-sampling): Implement prioritize by project bias [TET-574] (#42939)
  - feat(dynamic-sampling): Add transaction name prioritize option - (#45034)
  - feat(dyn-sampling): add new bias toggle to project details for prioritise by tx name [TET-717] (#44944)
  - feat(admin) Add admin relay project config view [TET-509] (#45120)
  - Revert "chore(assignment): Add analytics when autoassigning after a manual assignment (#45099)"
  - feat(sourcemaps): Implement new tables supporting debug ids (#44572)
  - ref(js): Remove usage of react-document-title (#45170)
  - chore(py): Consistently name urls using `organization-` prefix (#45180)
  - ref: rename acceptance required checks collector (#45156)
  - chore(assignment): Add analytics when autoassigning after a manual assignment (#45099)
  - feat(source-maps): Update copy for source map debug alerts (#45164)
  - ref(js): Remove custom usage of DocumentTitle (#45165)
  - chore(login): update the login banners (#45151)
  - ref(py): Remove one more legacy project_id from Environment (#45160)
  - ...
…5139) It seems that we're tagging events with information about organizations even when it's impossible to have any org info. We have events tagged with the following info even when it is not possible:

- org slug
- org id
- org context

The only code that sets these three pieces of information is within the sdk.py module (see #45134 for details). This issue causes errors to be surfaced for the wrong organization when filtering by `organization.slug`, thus wasting time for engineers and customer support. It is unclear whether the root cause is a bug in the SDK or the way we tag things in our code base. This change only clears the invalid tag values to reduce the impact of this issue rather than fixing the root issue.

A sample event can be seen [here](https://sentry.sentry.io/discover/sentry:99c8b9c9d2c241cc90e52c307faa45eb/?field=organization.slug&field=user.display&field=timestamp&name=Integration.DoesNotExist%3A+Integration+matching+query+does+not+exist.&project=1&query=issue%3ASENTRY-W7T&sort=-timestamp&statsPeriod=90d&yAxis=count%28%29) (private link).

Here's a [query](https://sentry.sentry.io/discover/results/?field=transaction&field=count_unique%28organization.slug%29&field=count%28%29&name=Integration.DoesNotExist%3A+Integration+matching+query+does+not+exist.&project=1&query=%21organization.slug%3A%22%22+error.value%3A%22Integration+matching+query+does+not+exist.%22&sort=-transaction&statsPeriod=14d&yAxis=count%28%29) showing some transactions that should not have org values set. There may be more transactions since I'm narrowing the query down to just `error.value:"Integration matching query does not exist."`

Improves [WOR-2464](https://getsentry.atlassian.net/browse/WOR-2464)
We have events tagged with the following info even when it is not possible:

- org slug
- org id
- org context
The only code that sets these three pieces of information is within the sdk.py module.
This change will report a Sentry error (it won't abort the execution of the code).
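To illustrate the "report but don't abort" behaviour, here is a minimal sketch under the assumption that the intentional error is raised and immediately caught; the function name `report_mismatch` is hypothetical, and the real code in sdk.py may be structured differently:

```python
import logging

logger = logging.getLogger(__name__)


def report_mismatch() -> None:
    try:
        # Raise deliberately so Sentry records an event with a stack trace.
        raise Exception(
            "This is an intended error for investigating what is the root of this problem."
        )
    except Exception as e:
        # With Sentry's logging integration enabled, logger.exception() captures
        # the error as an event; the caller keeps running afterwards.
        logger.exception(e)
```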
A sample event can be seen [here](https://sentry.sentry.io/discover/sentry:99c8b9c9d2c241cc90e52c307faa45eb/?field=organization.slug&field=user.display&field=timestamp&name=Integration.DoesNotExist%3A+Integration+matching+query+does+not+exist.&project=1&query=issue%3ASENTRY-W7T&sort=-timestamp&statsPeriod=90d&yAxis=count%28%29) (private link).
Here's a [query](https://sentry.sentry.io/discover/results/?field=transaction&field=count_unique%28organization.slug%29&field=count%28%29&name=Integration.DoesNotExist%3A+Integration+matching+query+does+not+exist.&project=1&query=%21organization.slug%3A%22%22+error.value%3A%22Integration+matching+query+does+not+exist.%22&sort=-transaction&statsPeriod=14d&yAxis=count%28%29) showing some transactions that should not have these values set. There may be more transactions since I'm narrowing the query down to just `error.value:"Integration matching query does not exist."`
This fix helps investigate: WOR-2464
Screenshot showing org context being set: [screenshot omitted]