
The replay associated with this event cannot be found. In most cases, the replay wasn't accepted because your replay quota was exceeded at the time. #1386

Closed
serrrios opened this issue Aug 14, 2024 · 20 comments

@serrrios

Has anyone encountered this? I installed Sentry 24.2.0, then updated via the chart to the current version.

│ 13:27:03 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_count.OrganizationReplayCountEndpoint' response=200 user_id='19' is_app='False' token_type='None' is_frontend_request='True' organization_id='4507452958310400' auth │
│ Traceback (most recent call last): │
│ File "/usr/src/sentry/src/sentry/snuba/referrer.py", line 920, in validate_referrer │
│ raise Exception(error_message) │
│ Exception: referrer replays.query.details_query is not part of Referrer Enum │
│ 13:27:16 [WARNING] sentry.snuba.referrer: referrer replays.query.details_query is not part of Referrer Enum │
│ Traceback (most recent call last): │
│ File "/usr/src/sentry/src/sentry/api/base.py", line 320, in handle_exception │
│ response = super().handle_exception(exc) │
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │
│ File "/.venv/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception │
│ self.raise_uncaught_exception(exc) │
│ File "/.venv/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception │
│ raise exc │
│ File "/usr/src/sentry/src/sentry/api/base.py", line 452, in dispatch │
│ response = handler(request, *args, **kwargs) │
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/replays/endpoints/organization_replay_details.py", line 73, in get │
│ snuba_response = query_replay_instance( │
│ ^^^^^^^^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/replays/query.py", line 94, in query_replay_instance │
│ return execute_query( │
│ ^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/replays/usecases/query/__init__.py", line 453, in execute_query │
│ return raw_snql_query( │
│ ^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/utils/snuba.py", line 896, in raw_snql_query │
│ return bulk_snuba_queries( │
│ ^^^^^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/utils/snuba.py", line 915, in bulk_snuba_queries │
│ return bulk_snuba_queries_with_referrers( │
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/utils/snuba.py", line 957, in bulk_snuba_queries_with_referrers │
│ return _apply_cache_and_build_results(snuba_requests, use_cache=use_cache) │
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/utils/snuba.py", line 1032, in _apply_cache_and_build_results │
│ query_results = _bulk_snuba_query([item[1] for item in to_query]) │
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ │
│ File "/usr/src/sentry/src/sentry/utils/snuba.py", line 1147, in _bulk_snuba_query │
│ raise clickhouse_error_codes_map.get(error["code"], QueryExecutionError)( │
│ sentry.utils.snuba.QueryMissingColumn: DB::Exception: Unknown expression identifier '_snuba_error_id_no_dashes' in scope error_id_no_dashes -> _snuba_error_id_no_dashes. Maybe you meant: ['error_id_no_dashes']. Stack trace: │
│ │
│ 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c9a449b │
│ 1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000780b9ac │
│ 2. DB::(anonymous namespace)::QueryAnalyzer::resolveExpressionNode(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&, bool, bool) @ 0x0000000010a87c5a │
│ 3. DB::(anonymous namespace)::QueryAnalyzer::resolveLambda(std::shared_ptr<DB::IQueryTreeNode> const&, std::shared_ptr<DB::IQueryTreeNode> const&, std::vector<std::shared_ptr<DB::IQueryTreeNode>, std::allocator<std::shared_ptr<DB::IQueryTreeNode>>> const&, DB::(anonymous nam │
│ 4. DB::(anonymous namespace)::QueryAnalyzer::resolveFunction(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x0000000010aa1245 │
│ 5. DB::(anonymous namespace)::QueryAnalyzer::resolveExpressionNode(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&, bool, bool) @ 0x0000000010a841bc │
│ 6. DB::(anonymous namespace)::QueryAnalyzer::resolveExpressionNodeList(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&, bool, bool) @ 0x0000000010a831ed │
│ 7. DB::(anonymous namespace)::QueryAnalyzer::resolveProjectionExpressionNodeList(std::shared_ptr<DB::IQueryTreeNode>&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x0000000010a8e4e7 │
│ 8. DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr<DB::IQueryTreeNode> const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x0000000010a7a5f8 │
│ 9. DB::QueryAnalysisPass::run(std::shared_ptr<DB::IQueryTreeNode>&, std::shared_ptr<DB::Context const>) @ 0x0000000010a780c5 │
│ 10. DB::QueryTreePassManager::run(std::shared_ptr<DB::IQueryTreeNode>) @ 0x0000000010a76983 │
│ 11. DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr<DB::IAST> const&, DB::SelectQueryOptions const&, std::shared_ptr<DB::Context const> const&, std::shared_ptr<DB::IStorage> const&) (.llvm.9862110563685019565) @ 0x0000000010d0aafd │
│ 12. DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const> const&, DB::SelectQueryOptions const&) @ 0x0000000010d09899 │
│ 13. std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> std::__function::__policy_invoker<std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> (DB::InterpreterFactory::Arguments const&)>::__call_impl<std::__function::__default_alloc_f │
│ 14. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x0000000010c9ec79 │
│ 15. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001111a030 │
│ 16. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x00000000111169ba │
│ 17. DB::TCPHandler::runImpl() @ 0x00000000122a59c4 │
│ 18. DB::TCPHandler::run() @ 0x00000000122c1fb9 │
│ 19. Poco::Net::TCPServerConnection::start() @ 0x0000000014c105b2 │
│ 20. Poco::Net::TCPServerDispatcher::run() @ 0x0000000014c113f9 │
│ 21. Poco::PooledThread::run() @ 0x0000000014d09a61 │
│ 22. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000014d07ffd │
│ 23. start_thread @ 0x0000000000007fa3 │
│ 24. ? @ 0x00000000000f8fef │
│ │
│ 13:27:16 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_details.OrganizationReplayDetailsEndpoint' response=500 user_id='19' is_app='False' token_type='None' is_frontend_request='True' organization_id='4507452958310400' │
│ 13:27:16 [ERROR] django.request: Internal Server Error: /api/0/organizations/apix/replays/f87576532e624105a0e2ccc9878c5253/ (status_code=500 request=<WSGIRequest: GET '/api/0/organizations/apix/replays/f87576532e624105a0e2ccc9878c5253/'>)

@serrrios (Author)

@qkfrksvl, maybe you have an answer?

@dromadaire54 (Contributor)

I have the same issue.

@dromadaire54 (Contributor)

@serrrios do you get this error in the snuba-api deployment when you try to display a replay?

@dromadaire54 (Contributor)

(screenshot of the replay error in the Sentry UI)

@dromadaire54 (Contributor)

@Mokto or @TartanLeGrand do you have any idea about this issue ?

@serrrios (Author) commented Aug 19, 2024

@dromadaire54, yes, I see the same error in the Snuba API, and also in the interface, as shown in the screenshot. I looked into the problem in more detail: the error occurs on a query like the following to the ClickHouse database:

```sql
SELECT (replay_id AS _snuba_replay_id), _snuba_replay_id,
  (anyIf((project_id AS _snuba_project_id), equals((segment_id AS _snuba_segment_id), 0)) AS _snuba_agg_project_id),
  (arrayMap(trace_id -> (replaceAll(toString(trace_id), '-', '') AS _snuba_trace_id), groupUniqArrayArray((trace_ids AS _snuba_trace_ids))) AS _snuba_traceIds),
  (min((replay_start_timestamp AS _snuba_replay_start_timestamp)) AS _snuba_started_at),
  (max((timestamp AS _snuba_timestamp)) AS _snuba_finished_at),
  (dateDiff('second', _snuba_started_at, _snuba_finished_at) AS _snuba_duration),
  (arrayFlatten(arraySort(urls, sequence_id -> identity(sequence_id), arrayMap(url_tuple -> tupleElement(url_tuple, 2), (groupArray((_snuba_segment_id, (urls AS _snuba_urls))) AS _snuba_agg_urls)), arrayMap(url_tuple -> tupleElement(url_tuple, 1), _snuba_agg_urls))) AS _snuba_urls_sorted),
  _snuba_agg_urls,
  (count(_snuba_segment_id) AS _snuba_count_segments),
  (sum(length(_snuba_urls)) AS _snuba_count_urls),
  (sumIf((click_is_dead AS _snuba_click_is_dead), greaterOrEquals(_snuba_timestamp, toDateTime('2023-07-24T00:00:00', 'Universal'))) AS _snuba_count_dead_clicks),
  (sumIf((click_is_rage AS _snuba_click_is_rage), greaterOrEquals(_snuba_timestamp, toDateTime('2023-07-24T00:00:00', 'Universal'))) AS _snuba_count_rage_clicks),
  (ifNull(max((is_archived AS _snuba_is_archived)), 0) AS _snuba_isArchived),
  (floor(greatest(1, least(10, intDivOrZero(plus(multiply((sum((count_error_events AS _snuba_count_error_events)) AS _snuba_count_errors), 25), multiply(_snuba_count_urls, 5)), 10)))) AS _snuba_activity),
  (groupUniqArrayIf((release AS _snuba_release), notEmpty(_snuba_release)) AS _snuba_releases),
  (anyIf(replay_type, notEmpty(replay_type)) AS _snuba_replay_type),
  (anyIf(platform, notEmpty(platform)) AS _snuba_platform),
  (anyIf((environment AS _snuba_environment), notEmpty(_snuba_environment)) AS _snuba_agg_environment),
  (anyIf(dist, notEmpty(dist)) AS _snuba_dist),
  (anyIf(user_id, notEmpty(user_id)) AS _snuba_user_id),
  (anyIf(user_email, notEmpty(user_email)) AS _snuba_user_email),
  (anyIf((user_name AS _snuba_user_name), notEmpty(_snuba_user_name)) AS _snuba_user_username),
  (IPv4NumToString(anyIf((ip_address_v4 AS _snuba_ip_address_v4), greater(_snuba_ip_address_v4, 0))) AS _snuba_user_ip),
  (anyIf(os_name, notEmpty(os_name)) AS _snuba_os_name),
  (anyIf(os_version, notEmpty(os_version)) AS _snuba_os_version),
  (anyIf(browser_name, notEmpty(browser_name)) AS _snuba_browser_name),
  (anyIf(browser_version, notEmpty(browser_version)) AS _snuba_browser_version),
  (anyIf(device_name, notEmpty(device_name)) AS _snuba_device_name),
  (anyIf(device_brand, notEmpty(device_brand)) AS _snuba_device_brand),
  (anyIf(device_family, notEmpty(device_family)) AS _snuba_device_family),
  (anyIf(device_model, notEmpty(device_model)) AS _snuba_device_model),
  (anyIf(sdk_name, notEmpty(sdk_name)) AS _snuba_sdk_name),
  (anyIf(sdk_version, notEmpty(sdk_version)) AS _snuba_sdk_version),
  (groupArrayArray((tags.key AS `_snuba_tags.key`)) AS _snuba_tk),
  (groupArrayArray((tags.value AS `_snuba_tags.value`)) AS _snuba_tv),
  (groupArray(click_alt) AS _snuba_click_alt),
  (groupArray(click_aria_label) AS _snuba_click_aria_label),
  (groupArrayArray((click_class AS _snuba_click_class)) AS _snuba_clickClass),
  (groupArray(_snuba_click_class) AS _snuba_click_classes),
  (groupArray(click_id) AS _snuba_click_id),
  (groupArray(click_role) AS _snuba_click_role),
  (groupArray(click_tag) AS _snuba_click_tag),
  (groupArray(click_testid) AS _snuba_click_testid),
  (groupArray(click_text) AS _snuba_click_text),
  (groupArray(click_title) AS _snuba_click_title),
  (groupArray(click_component_name) AS _snuba_click_component_name),
  (arrayMap(error_id_no_dashes -> (replaceAll(toString(error_id_no_dashes), '-', '') AS _snuba_error_id_no_dashes), arrayDistinct(flatten([(groupArrayArray((error_ids AS _snuba_error_ids)) AS _snuba_old_err_ids_for_new_query), arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((error_id AS _snuba_error_id))), arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((fatal_id AS _snuba_fatal_id)))]))) AS _snuba_errorIds),
  (arrayMap(error_id_no_dashes -> _snuba_error_id_no_dashes, flatten([arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((warning_id AS _snuba_warning_id)))])) AS _snuba_warning_ids),
  (arrayMap(error_id_no_dashes -> _snuba_error_id_no_dashes, flatten([arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((info_id AS _snuba_info_id))), arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((debug_id AS _snuba_debug_id)))])) AS _snuba_info_ids),
  _snuba_count_errors,
  (sum((count_warning_events AS _snuba_count_warning_events)) AS _snuba_count_warnings),
  (sum((count_info_events AS _snuba_count_info_events)) AS _snuba_count_infos),
  (groupUniqArrayIf((viewed_by_id AS _snuba_viewed_by_id), greater(_snuba_viewed_by_id, 0)) AS _snuba_viewed_by_ids),
  (greater(sum(equals(_snuba_viewed_by_id, 1)), 0) AS _snuba_has_viewed)
FROM replays_dist
WHERE in(_snuba_project_id, [27, 30, 16, 33, 26, 34])
  AND in(_snuba_replay_id, ['657bb65d-1c55-4925-a6a8-3f819ee8e6f2', '23853344-bd96-4f96-90f6-ead3362bbbaf', '9cada016-3575-47e2-9e79-15b5ba383074', '593ecf4d-f583-41b7-b659-2abf910f0c1b', '90e4ca0f-5582-48a8-ab50-11272992b03b', '057b10f5-54e7-476f-9161-df83c8dd4566', '708e13fb-ecda-47d8-bf59-164160636acc', '743437fb-9766-4483-a6bb-3ae89e81503f', 'bd80b568-a49e-4e3b-87e7-64a57a20fcf3', '01de4e6a-68c5-4acf-b2fe-4d79832298cd', '331e7652-f917-4ac3-829a-61b3b57fc852', '4df8182c-5649-4360-865c-22afa07b9a2a', '94c0145e-d867-4885-9be4-f19dda1aeffb', 'c9f153e1-df59-454a-902a-9723e8bc4a07', '7ddb09e6-3e14-4011-aff4-d5d0bce330cc', 'b986e39a-d9ae-4bce-b906-89e85ba277f7', '7b854146-e5ee-4bf7-a31a-8ba47360111b', '55ce563f-1346-4bae-ac82-c96810bc9831', '96e37bc8-d082-450e-bc11-da1f9416d315', '921d2c1c-069b-4416-a8e0-bbe69e504f55', '219afaf1-90da-4ecc-b0cf-6a02d497ef4c', '38373410-a854-4ede-8f45-b6a977a2f02d', 'b683f610-c9e1-47c6-a64c-1a259f66c5e3', '69a3eff5-a501-45bf-a221-7862e3606cdc', 'da9e14f7-8e35-4905-a14b-a3be4df972fb', '6be6040c-0813-4d0f-9b6a-6ea37e138953', '62dbb673-3162-40b2-8c9d-9b41bc781184', '5e5d38d5-5327-40b0-934c-e0b7ced3db0e', 'f3024bfc-c211-43e7-bdcb-8c0506fce487', 'd55fed68-177f-4924-bce4-6cc40f56e303', '16bfb269-5ac8-4a79-8efa-292dc999af56', '279ee96d-bc84-424a-9554-b4affa1a3354', '9fd66fab-fbc6-41d6-ad8a-bbd0935a9872', 'f49ea234-b526-4b11-bcf5-674ce12236b9', '44039ca4-11f8-4401-9d3d-9d3b357fe5af', 'fe8bb508-9b4e-45df-b2de-f8efa82abccd', 'bf364f49-fc5e-42cb-88c4-5414efb1d10e', 'f47d4abf-bd01-4496-8399-a82464d217cf', 'c4553297-7c43-4e82-9e68-3871dbab8f61', '9e1ec31c-9bcc-4f93-98e0-7a9d90f24855', '05c58d9c-a511-4d38-bad3-003a987a6336', 'b8edba3e-b767-4db0-be84-2a3ecd975d9f', '9d7f8fe5-a11d-4a40-8836-1446c957ffd2', '32436c16-18c3-46a0-82d0-0b8acead7f2b', '0fa5627e-aba7-465e-9199-c21ef778ada3', '9c6f603b-14cd-4d76-b8bc-9f9bb8680c9a', '40be7bda-7614-4887-98d3-2cca3dc1d68e', '497e4690-2cca-44e2-a010-653dceccac3a', '31dff71a-cc5d-43fc-b9bc-fa4198bd9779', '778eabe8-e187-41a1-b862-572ed25ab943'])
  AND greaterOrEquals(_snuba_timestamp, toDateTime('2024-08-18T18:04:28', 'Universal'))
  AND less(_snuba_timestamp, toDateTime('2024-08-19T20:04:28', 'Universal'))
GROUP BY _snuba_replay_id
HAVING equals(min(_snuba_segment_id), 0)
LIMIT 1000 OFFSET 0
```

As we can see, `error_id_no_dashes` occurs multiple times:

```sql
(arrayMap(error_id_no_dashes -> (replaceAll(toString(error_id_no_dashes), '-', '') AS _snuba_error_id_no_dashes), arrayDistinct(flatten([(groupArrayArray((error_ids AS _snuba_error_ids)) AS _snuba_old_err_ids_for_new_query), arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((error_id AS _snuba_error_id))), arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((fatal_id AS _snuba_fatal_id)))]))) AS _snuba_errorIds),
(arrayMap(error_id_no_dashes -> _snuba_error_id_no_dashes, flatten([arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((warning_id AS _snuba_warning_id)))])) AS _snuba_warning_ids),
(arrayMap(error_id_no_dashes -> _snuba_error_id_no_dashes, flatten([arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((info_id AS _snuba_info_id))), arrayFilter(id -> notEquals(id, '00000000-0000-0000-0000-000000000000'), groupArray((debug_id AS _snuba_debug_id)))])) AS _snuba_info_ids)
```

This part of the query is generated by the following code:
https://github.com/getsentry/sentry/blob/master/src/sentry/replays/query.py
```python
# Excerpt from src/sentry/replays/query.py. Column, Function, Identifier and
# Lambda come from snuba_sdk; _filter_empty_uuids is defined elsewhere in the module.
def _strip_uuid_dashes(
    input_name: str,
    input_value: Expression,
    alias: str | None = None,
    aliased: bool = True,
):
    return Function(
        "replaceAll",
        parameters=[Function("toString", parameters=[input_value]), "-", ""],
        alias=alias or input_name if aliased else None,
    )


def _collect_event_ids(alias, ids_type_list):
    id_types_to_aggregate = []
    for id_type in ids_type_list:
        id_types_to_aggregate.append(_filter_empty_uuids(id_type))

    return Function(
        "arrayMap",
        parameters=[
            Lambda(
                ["error_id_no_dashes"],
                _strip_uuid_dashes("error_id_no_dashes", Identifier("error_id_no_dashes")),
            ),
            Function("flatten", [id_types_to_aggregate]),
        ],
        alias=alias,
    )


def _collect_new_errors():
    def _collect_non_empty_error_and_fatals():
        return [
            _filter_empty_uuids("error_id"),
            _filter_empty_uuids("fatal_id"),
        ]

    return Function(
        "arrayMap",
        parameters=[
            Lambda(
                ["error_id_no_dashes"],
                _strip_uuid_dashes("error_id_no_dashes", Identifier("error_id_no_dashes")),
            ),
            Function(
                "arrayDistinct",
                parameters=[
                    Function(
                        "flatten",
                        [
                            [
                                Function(
                                    "groupArrayArray",
                                    parameters=[Column("error_ids")],
                                    alias="old_err_ids_for_new_query",
                                ),
                                *_collect_non_empty_error_and_fatals(),
                            ]
                        ],
                    ),
                ],
            ),
        ],
        alias="errorIds",
    )
```

The first part of the query, for `errorIds`, is generated by `_collect_new_errors`, which calls `_strip_uuid_dashes` to wrap `error_id_no_dashes` as `error_id_no_dashes -> (replaceAll(toString(error_id_no_dashes), '-', '') AS _snuba_error_id_no_dashes)`. That part of the query works.

The next parts, which handle everything except `errorIds`, are generated by `_collect_event_ids`, which also calls `_strip_uuid_dashes`, but the generated SQL does not wrap `error_id_no_dashes` again: it emits only `error_id_no_dashes -> _snuba_error_id_no_dashes`, which causes the error. If each lambda emitted the full expression, the query would be valid. Unfortunately, my knowledge of Python is not sufficient to fully understand the issue. I would appreciate any ideas.
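To illustrate the suspected mechanism (a sketch, not Snuba's actual translator): if the SQL generator keeps a registry of aliases and emits the full `(expr AS alias)` expression only the first time an alias is seen, then later lambdas that reuse the alias `error_id_no_dashes` collapse to a bare alias reference. The names below (`render_aliased`, `seen`) are hypothetical, purely for illustration:

```python
# Hypothetical, simplified alias registry like the one a SnQL-to-SQL translator
# might keep while rendering a query.
seen: set[str] = set()

def render_aliased(expr: str, alias: str) -> str:
    """First use of an alias emits '(expr AS _snuba_alias)';
    any later use collapses to the bare alias reference."""
    qualified = f"_snuba_{alias}"
    if alias in seen:
        return qualified  # alias already defined: emit only the reference
    seen.add(alias)
    return f"({expr} AS {qualified})"

# First arrayMap lambda (errorIds): the alias is fresh, so the full expression appears.
first = render_aliased("replaceAll(toString(error_id_no_dashes), '-', '')",
                       "error_id_no_dashes")
# Second arrayMap lambda (warning_ids) reuses the same alias name, so only the
# alias remains, producing the invalid body 'error_id_no_dashes -> _snuba_error_id_no_dashes'.
second = render_aliased("replaceAll(toString(error_id_no_dashes), '-', '')",
                        "error_id_no_dashes")
print(first)   # (replaceAll(toString(error_id_no_dashes), '-', '') AS _snuba_error_id_no_dashes)
print(second)  # _snuba_error_id_no_dashes
```

Under the old ClickHouse behavior the bare alias still resolved; the error above suggests the newer analyzer no longer accepts it inside a different lambda scope.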

@TartanLeGrand (Contributor)

Hello 👋,

Which version of the Sentry chart are you running? 😄

@serrrios (Author)

The last chart version I reached was v23.12.1; that is where I discovered the problem (we had no need for replays before). Trying to fix it, I added all the missing deployments to match the self-hosted setup and also upgraded to 24.8.0, but the situation did not change. As silly as it may sound, how likely is it that this depends on the versions of the external databases?

@serrrios (Author)

Also, following @getsentry/self-hosted#3082, I tried to look for errors during the update process, but once again I was unsuccessful. =(

@dromadaire54 (Contributor)

I'm using version 24.5.1 of Sentry.

@serrrios (Author)

@dromadaire54, what versions of the databases (PostgreSQL, ClickHouse) are you using? Are they external or from the chart? Have you tried recreating the ClickHouse database?

@TartanLeGrand (Contributor)

Which version of the chart are you coming from? And which version of the app?

@serrrios (Author)

@TartanLeGrand As I mentioned earlier, I started with chart version 22.3.0 and application version 24.2.0, gradually updating to roughly chart v23.5.2 and application 24.5.1. I did not update the chart further because the later changes did not affect my configuration. It was only on this version that I discovered the bug. After that, as I mentioned earlier, I synchronized with the self-hosted setup, added the missing containers, and updated to the latest version.

@serrrios (Author)

lol, I installed a ClickHouse version similar to the one in the chart on a separate server and recreated the database; replays worked.

@dromadaire54 (Contributor) commented Aug 22, 2024

For me it is an external ClickHouse 24.5, and the PostgreSQL is 15.5, also an external service.

@dromadaire54 (Contributor)

In the ClickHouse prod service it works perfectly, but in the dev service I get this error for replays, even though it is exactly the same database. ClickHouse support doesn't know why it works in the prod service.

@serrrios (Author)

My war with this case is over =) I tested many versions on a separate host: 21.8.13.6 (works), upgraded to 23.8.11.28 (works), upgraded to 24.4.1.2088 (doesn't work, just like in production), upgraded to 24.8.1.2684 (doesn't work), rolled back to 23.8.11.28 and it worked again. I then rolled production back from 24.4.1.2088 to 23.8.11.28 and the replays worked.
It seems the ClickHouse update in the chart will have to be postponed )))
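For what it's worth, the breaking point observed here (23.8.x works, 24.4.x fails) coincides with ClickHouse enabling its new query analyzer by default in 24.3. Treating that as a hypothesis rather than a confirmed diagnosis, a minimal query in the same shape as the generated one, with a lambda alias reused by a later lambda, can be used to probe a server, and the session setting below reverts to the old analyzer on versions where it still exists:

```sql
-- Hypothetical minimal probe (assumption: alias reuse across lambdas is the trigger).
-- The first arrayMap defines the alias; the second refers to it by name only,
-- mirroring the errorIds / warning_ids pattern in the generated replay query.
SELECT
    arrayMap(x -> (toString(x) AS stripped), [1, 2]) AS first_use,
    arrayMap(x -> stripped, [3, 4]) AS second_use;

-- If the new analyzer is responsible, disabling it per session should make the
-- probe (and the replay query) resolve again:
SET allow_experimental_analyzer = 0;
```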

@dromadaire54 (Contributor)

My case is different: I have Sentry with ClickHouse v24.5 and PostgreSQL 15.5 as external services. I want to migrate a dev service to ClickHouse with the same version, and I get this error for the replays.

@boindil mentioned this issue Sep 12, 2024
@patsevanton (Contributor) commented Sep 13, 2024

@serrrios please tell me the version of the helm chart and the values.yaml where the error is not reproduced.

@serrrios (Author)

I'm afraid my example won't help at all, as I've long moved away from that chart and am building my own deployment. If the question stems from a related issue, then I definitely have a different problem.

serrrios closed this as completed Oct 8, 2024