The replay associated with this event cannot be found. In most cases, the replay wasn't accepted because your replay quota was exceeded at the time. #1386
Comments
@qkfrksvl, maybe you have an answer?
I have the same issue.
@serrrios, do you see this error in the snuba-api deployment when you try to display a replay?
@Mokto or @TartanLeGrand, do you have any idea about this issue?
@dromadaire54, yes, I see the same error in the Snuba API, and also in the interface, as shown in the screenshot. I looked into the problem in more detail: the error occurs during a similar query to the ClickHouse database.
As we can see, error_id_no_dashes occurs multiple times. This part of the query is generated by the following code:
def _collect_event_ids(alias, ids_type_list):
def _collect_new_errors():
The first part of the query, for 'errorIds', is generated by _collect_event_ids, which calls _strip_uuid_dashes to wrap error_id_no_dashes as "error_id_no_dashes -> (replaceAll(toString(error_id_no_dashes), '-', '') AS _snuba_error_id_no_dashes)". That part of the query works. The next part, which is supposed to handle everything except errorIds, also calls _strip_uuid_dashes, but this time error_id_no_dashes is not wrapped and only "error_id_no_dashes -> _snuba_error_id_no_dashes" is generated, which causes the error. If that call produced the wrapped form as well, the query would be valid. Unfortunately, my knowledge of Python is not sufficient to fully understand the issue. I would appreciate any ideas.
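To make the two shapes concrete, here is a stripped-down sketch. The column and alias names are copied from the error message; arrayMap and the literal UUID array are stand-ins I picked, not the exact projection Sentry builds.

-- Working shape (as generated for 'errorIds'): the lambda body rewrites its bound parameter.
SELECT arrayMap(error_id_no_dashes -> replaceAll(toString(error_id_no_dashes), '-', ''),
                [generateUUIDv4()]) AS stripped_ids;

-- Failing shape (as generated for the remaining event ids): the lambda body is only the alias
-- '_snuba_error_id_no_dashes', which is not defined inside the lambda, so the expression cannot
-- be resolved and ClickHouse reports "Unknown expression identifier '_snuba_error_id_no_dashes'".
SELECT arrayMap(error_id_no_dashes -> _snuba_error_id_no_dashes,
                [generateUUIDv4()]) AS stripped_ids;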
Hello 👋, which Sentry chart version? 😄
The last version of the chart I reached was v23.12.1. This was the version where I discovered the problem; previously, there was no need for replays. In an attempt to fix this issue, I added all the missing deployments, similar to the self-hosted version, and also upgraded to 24.8.0, but the situation did not change. As silly as it may sound, what is the likelihood that this depends on the versions of the external databases?
Also, based on getsentry/self-hosted#3082, I tried to look for bugs during the update process, but once again I was unsuccessful. =(
I'm using the version
@dromadaire54, what versions of the databases (PostgreSQL, ClickHouse) are you using? Are they external or from the chart? Have you tried recreating the ClickHouse database?
Which version of the chart are you coming from? And which version of the app?
@TartanLeGrand As I mentioned earlier, I started with chart version 22.3.0 and application version 24.2.0, gradually updating to approximately chart version v23.5.2 and application version 24.5.1. I did not update the chart further because the changes would not affect anything relevant to my configuration. It was only on this version that I discovered this bug. After that, as I mentioned earlier, I synchronized with the self-hosted version, added the missing containers, and updated to the latest version.
lol, kek, I tried installing a ClickHouse version similar to the one in the chart on a separate server and recreated the database; the replays worked.
For me it is an external ClickHouse 24.5, and PostgreSQL is 15.5, which is also an external service.
In the ClickHouse prod service it works perfectly, but in the dev service I get this error for the replays. It is exactly the same database. ClickHouse support doesn't know why it works in the prod service.
My war with this case is over =) I tested many versions on a separate host: 21.8.13.6 (works), upgraded to 23.8.11.28 (works), upgraded to 24.4.1.2088 (just like in the production environment, doesn't work), upgraded to 24.8.1.2684 (doesn't work), rolled back to 23.8.11.28 and it worked again. I rolled back production from 24.4.1.2088 to 23.8.11.28 and the replays worked.
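For what it's worth, that pattern (23.8 and older works, 24.4 and newer fails) would fit an alias-resolution change in ClickHouse rather than anything in the chart itself: older servers appear to substitute a query-level alias into a lambda body, while newer ones resolve the lambda in its own scope. A small self-contained probe one could run against each server (hypothetical, not the real Sentry query; the alias and parameter names are copied from the error message):

-- Expected to succeed where the server substitutes the query-level alias into the lambda,
-- and to fail with "Unknown expression identifier '_snuba_error_id_no_dashes'" where the
-- lambda parameter scope is resolved strictly (as observed on 24.4.x / 24.8.x above).
SELECT
    replaceAll(toString(generateUUIDv4()), '-', '') AS _snuba_error_id_no_dashes,
    arrayMap(error_id_no_dashes -> _snuba_error_id_no_dashes,
             [generateUUIDv4(), generateUUIDv4()]) AS other_event_ids;

If this probe fails and succeeds on the same versions as the replay query does, that would confirm the ClickHouse version dependence.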
My case is different: I have a Sentry with ClickHouse v24.5 and PostgreSQL 15.5, which are external services. I want to migrate a dev service in ClickHouse with the same version, and I get this error for the replays.
@serrrios, please tell me the version of the Helm chart and the values.yaml where the error is not reproduced.
I'm afraid my example won't help at all, as I've long moved away from that chart and am building my own deployment. If the question stems from a related issue, then I definitely have a different problem.
Has anyone encountered this? I installed Sentry 24.2.0, then updated via the chart to the current version.
13:27:03 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_count.OrganizationReplayCountEndpoint' response=200 user_id='19' is_app='False' token_type='None' is_frontend_request='True' organization_id='4507452958310400' auth
Traceback (most recent call last):
  File "/usr/src/sentry/src/sentry/snuba/referrer.py", line 920, in validate_referrer
    raise Exception(error_message)
Exception: referrer replays.query.details_query is not part of Referrer Enum
13:27:16 [WARNING] sentry.snuba.referrer: referrer replays.query.details_query is not part of Referrer Enum
Traceback (most recent call last):
  File "/usr/src/sentry/src/sentry/api/base.py", line 320, in handle_exception
    response = super().handle_exception(exc)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/.venv/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
    self.raise_uncaught_exception(exc)
  File "/.venv/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
    raise exc
  File "/usr/src/sentry/src/sentry/api/base.py", line 452, in dispatch
    response = handler(request, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/replays/endpoints/organization_replay_details.py", line 73, in get
    snuba_response = query_replay_instance(
                     ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/replays/query.py", line 94, in query_replay_instance
    return execute_query(
           ^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/replays/usecases/query/__init__.py", line 453, in execute_query
    return raw_snql_query(
           ^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/utils/snuba.py", line 896, in raw_snql_query
    return bulk_snuba_queries(
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/utils/snuba.py", line 915, in bulk_snuba_queries
    return bulk_snuba_queries_with_referrers(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/utils/snuba.py", line 957, in bulk_snuba_queries_with_referrers
    return _apply_cache_and_build_results(snuba_requests, use_cache=use_cache)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/utils/snuba.py", line 1032, in _apply_cache_and_build_results
    query_results = _bulk_snuba_query([item[1] for item in to_query])
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/sentry/src/sentry/utils/snuba.py", line 1147, in _bulk_snuba_query
    raise clickhouse_error_codes_map.get(error["code"], QueryExecutionError)(
sentry.utils.snuba.QueryMissingColumn: DB::Exception: Unknown expression identifier '_snuba_error_id_no_dashes' in scope error_id_no_dashes -> _snuba_error_id_no_dashes. Maybe you meant: ['error_id_no_dashes']. Stack trace:

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c9a449b
1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000780b9ac
2. DB::(anonymous namespace)::QueryAnalyzer::resolveExpressionNode(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&, bool, bool) @ 0x0000000010a87c5a
3. DB::(anonymous namespace)::QueryAnalyzer::resolveLambda(std::shared_ptr<DB::IQueryTreeNode> const&, std::shared_ptr<DB::IQueryTreeNode> const&, std::vector<std::shared_ptr<DB::IQueryTreeNode>, std::allocator<std::shared_ptr<DB::IQueryTreeNode>>> const&, DB::(anonymous nam
4. DB::(anonymous namespace)::QueryAnalyzer::resolveFunction(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x0000000010aa1245
5. DB::(anonymous namespace)::QueryAnalyzer::resolveExpressionNode(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&, bool, bool) @ 0x0000000010a841bc
6. DB::(anonymous namespace)::QueryAnalyzer::resolveExpressionNodeList(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&, bool, bool) @ 0x0000000010a831ed
7. DB::(anonymous namespace)::QueryAnalyzer::resolveProjectionExpressionNodeList(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x0000000010a8e4e7
8. DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr<DB::IQueryTreeNode> const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x0000000010a7a5f8
9. DB::QueryAnalysisPass::run(std::shared_ptr<DB::IQueryTreeNode>&, std::shared_ptr<DB::Context const>) @ 0x0000000010a780c5
10. DB::QueryTreePassManager::run(std::shared_ptr<DB::IQueryTreeNode>) @ 0x0000000010a76983
11. DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr<DB::IAST> const&, DB::SelectQueryOptions const&, std::shared_ptr<DB::Context const> const&, std::shared_ptr<DB::IStorage> const&) (.llvm.9862110563685019565) @ 0x0000000010d0aafd
12. DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const> const&, DB::SelectQueryOptions const&) @ 0x0000000010d09899
13. std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> std::__function::__policy_invoker<std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> (DB::InterpreterFactory::Arguments const&)>::__call_impl<std::__function::__default_alloc_f
14. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x0000000010c9ec79
15. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001111a030
16. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x00000000111169ba
17. DB::TCPHandler::runImpl() @ 0x00000000122a59c4
18. DB::TCPHandler::run() @ 0x00000000122c1fb9
19. Poco::Net::TCPServerConnection::start() @ 0x0000000014c105b2
20. Poco::Net::TCPServerDispatcher::run() @ 0x0000000014c113f9
21. Poco::PooledThread::run() @ 0x0000000014d09a61
22. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000014d07ffd
23. start_thread @ 0x0000000000007fa3
24. ? @ 0x00000000000f8fef

13:27:16 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_details.OrganizationReplayDetailsEndpoint' response=500 user_id='19' is_app='False' token_type='None' is_frontend_request='True' organization_id='4507452958310400'
13:27:16 [ERROR] django.request: Internal Server Error: /api/0/organizations/apix/replays/f87576532e624105a0e2ccc9878c5253/ (status_code=500 request=<WSGIRequest: GET '/api/0/organizations/apix/replays/f87576532e624105a0e2ccc9878c5253/'>)