
Sentry stopped accepting transaction data #2876

Open
ingria opened this issue Mar 10, 2024 · 54 comments

Comments

@ingria

ingria commented Mar 10, 2024

Self-Hosted Version

24.3.0.dev0

CPU Architecture

x86_64

Docker Version

24.0.4

Docker Compose Version

24.0.4

Steps to Reproduce

Update to the latest master

Expected Result

Everything works fine

Actual Result

The Performance page shows zeros for the period from the update until now:

[screenshot]

Project page shows the correct info about transactions and errors:

[screenshot]

Stats page shows 49k transactions of which 49k are dropped:

[screenshot]

Same for errors:

[screenshot]

Event ID

No response

UPD

There are a lot of errors in the ClickHouse container:

2024.03.10 23:40:34.789282 [ 46 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):

0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
2. DB::ReadBufferFromPocoSocket::ReadBufferFromPocoSocket(Poco::Net::Socket&, unsigned long) @ 0x101540cd in /usr/bin/clickhouse
3. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6fd5 in /usr/bin/clickhouse
4. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
5. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
6. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
7. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
8. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
 (version 21.8.13.1.altinitystable (altinity build))
@ingria
Author

ingria commented Mar 10, 2024

Also, for some reason Sentry started dropping incoming errors some time ago (as if I were using SaaS Sentry):

[screenshot]

@barisyild

Did you change the port?
I had the same situation when I changed the port.

@ingria
Author

ingria commented Mar 10, 2024

Yes, I have the relay port exposed to the host network. How did you manage to fix the problem?

@barisyild

Yes, I have the relay port exposed to the host network. How did you manage to fix the problem?

When I reverted the port change the problem was resolved.

@ingria
Author

ingria commented Mar 10, 2024

Nope, that didn't help. It doesn't work even with the default config. Thanks for the tip though.

@hubertdeng123
Member

Are there any logs in your web container that can help? Are you sure you are receiving the event envelopes? You should be able to see that activity in your nginx container.
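
A quick way to confirm whether the instance still accepts envelopes at all is a minimal smoke test with the Python SDK; this is only a sketch, and the DSN and transaction name below are placeholders, not values from this thread:

import sentry_sdk

# Placeholder DSN for the self-hosted project under test; replace with a real one.
sentry_sdk.init(
    dsn="https://<public_key>@sentry.example.com/2",
    traces_sample_rate=1.0,  # send every transaction for this test
    debug=True,              # print what the SDK sends and any HTTP errors
)

# Emit one error event and one transaction, then flush before the script exits.
sentry_sdk.capture_message("self-hosted ingestion smoke test")
with sentry_sdk.start_transaction(op="test", name="ingestion-smoke-test"):
    pass
sentry_sdk.flush(timeout=5)

If these show up as 200 responses on the envelope endpoint in the nginx/relay logs but never appear in the UI or the stats, the problem is downstream of ingestion.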

@linxiaowang

Same here: on the browser side there is a request sent with an event type of "transaction", but there is no data displayed under Performance, and the number of transactions in the project is also 0.

@linxiaowang

Same here: on the browser side there is a request sent with an event type of "transaction", but there is no data displayed under Performance, and the number of transactions in the project is also 0.

Problem solved: the server time did not match the SDK time.
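
For anyone chasing the same symptom: a large clock drift on the host running the SDK (or the Sentry containers) can make events fall outside the queried time window. A small sketch for checking the drift against an NTP server -- it uses the third-party ntplib package, which is an assumption here and not part of Sentry:

import datetime
import ntplib  # pip install ntplib

# Ask a public NTP pool for the current time and compare it with the local clock.
response = ntplib.NTPClient().request("pool.ntp.org", version=3)
ntp_time = datetime.datetime.fromtimestamp(response.tx_time, datetime.timezone.utc)
local_time = datetime.datetime.now(datetime.timezone.utc)

print(f"NTP time:   {ntp_time}")
print(f"Local time: {local_time}")
print(f"Offset:     {response.offset:+.3f} s")  # a large offset can skew event timestamps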

@ingria
Author

ingria commented Mar 14, 2024

I can see that there are successful requests to /api/2/envelope:

[screenshot]

Also I can see transaction statistics on the projects page:

The figure of 394k for the last 24 hours is about right.

@hubertdeng123
Member

Are you on a nightly version of self-hosted? What does your sentry.conf.py look like? We've added some feature flags there to support the new performance features.
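
For reference, feature flags in self-hosted are toggled through the SENTRY_FEATURES dictionary in sentry.conf.py. A minimal sketch is below; the flag names are illustrative and change between releases, so copy the exact list from the current sentry.conf.example.py rather than from here:

# sentry.conf.py -- sketch only; flag names are illustrative and release-dependent.
# In the real file SENTRY_FEATURES is already defined by `from sentry.conf.server import *`.
SENTRY_FEATURES = globals().get("SENTRY_FEATURES", {})

for flag in (
    "organizations:performance-view",  # hypothetical example: Performance UI
    "organizations:session-replay",    # hypothetical example: Session Replay
):
    SENTRY_FEATURES[flag] = True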

@ingria
Author

ingria commented Mar 15, 2024

I'm using Docker with the latest commit from this repository. The bottom of the page says Sentry 24.3.0.dev0 unknown, so I guess that's nightly.

I've updated sentry.conf.py to match the most recent version from this repo - now the only differences are in the SENTRY_SINGLE_ORGANIZATION and CSRF_TRUSTED_ORIGINS variables.

After that, errors have also disappeared:

[screenshot]
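
For context, the two settings mentioned above typically look something like this in sentry.conf.py (the values are placeholders, not the reporter's actual configuration):

# sentry.conf.py -- placeholder values, not the configuration from this instance.
SENTRY_SINGLE_ORGANIZATION = False  # allow more than one organization on the instance

# Origins trusted for CSRF checks when Sentry is served behind a proxy on a custom domain.
CSRF_TRUSTED_ORIGINS = ["https://sentry.example.com"]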

@williamdes
Contributor

williamdes commented Mar 16, 2024

I can confirm that the ClickHouse errors are due to the Rust workers: reverting the workers part of #2831 and #2861 made the errors disappear.
But I still see too many transactions being dropped since the upgrade.

Worker code: https://github.com/getsentry/snuba/blob/359878fbe030a63945914ef05e705224680b453c/rust_snuba/src/strategies/clickhouse.rs#L61

Worker logs show that the insert is done (is it?): "timestamp":"2024-03-16T11:40:52.491448Z","level":"INFO","fields":{"message":"Inserted 29 rows"},

@aldy505
Collaborator

aldy505 commented Mar 18, 2024

The error is caused by the connection being closed prematurely. See #2900.

@LvckyAPI
Contributor

Same issue on latest 24.3.0

[screenshot]

[screenshot]

@LvckyAPI
Contributor

Errors are not being logged either.

@aldy505
Collaborator

aldy505 commented Mar 19, 2024

Okay, so I'm able to replicate this issue on my instance (24.3.0). What happens is that Sentry does accept transaction/error/profile/replay/attachment data, but it doesn't record it in the statistics. So your stats of ingested events might be displayed as if there were no events being recorded, but the events are actually there -- they are processed by Snuba and you can view them on the web UI.

Can anyone reading this confirm that that's what happened on your instances as well? (I don't want to ping everybody)

If the answer to that 👆🏻 is "yes", that means something (a module, container, or something else) that ingests the events didn't insert the data correctly for it to be queried as statistics. I don't know for sure whether it's the responsibility of the Snuba consumers (as we moved to rust-consumer just in 24.3.0) or the Sentry workers, but I'd assume it's the Snuba consumers.

A few possible solutions for this (well, not really solutions, but I hope they would get rid of this issue), either:

  1. Fix the issue somewhere and cut a patch release.
  2. If it's caused by the rust-consumers, then we might roll back the usage of rust-consumer and just go back to the old Python ones.

@LvckyAPI
Contributor

I didn't see any errors in the Issues tab. I had to rebuild a Server Snapshot to “fix” this problem. So it wasn't just the statistics that were affected.

@williamdes
Contributor

williamdes commented May 11, 2024

I noticed that restarting all containers does some kind of state reset. But after some time the relay seems unable to handle its own load (envelope buffer capacity exceeded), since the config is missing.
I suspect the root cause to be this error in the relay logs:

can't fetch project states failure_duration="42716 seconds" backoff_attempts=7
relay-1  | 2024-05-11T10:53:22.079788Z ERROR relay_server::services::project_upstream: error fetching project states error=upstream request returned error 500 Internal Server Error

I would be very happy if anyone had a clue about where to look.

The web container reports 200 OK in its logs.
Full relay config above: #2876 (comment)
I changed the http block a bit to try to mitigate this issue:

http:
  timeout: 60
  connection_timeout: 60
  max_retry_interval: 60

In the cron logs I did find:

celery.beat.SchedulingError: Couldn't apply scheduled task deliver-from-outbox-control: Command # 1 (LLEN outbox.control) of pipeline caused error: OOM command not allowed when used memory > 'maxmemory

Seems that the Docker host needs vm.overcommit_memory sysctl for Redis: redis/docker-library-redis#298 (comment)

Plus, the maxmemory of Redis/KeyDB was too low.
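
A quick way to see how close Redis is to that limit is to compare used_memory with maxmemory; a sketch with the redis-py client, where the host and port are assumptions for a default self-hosted setup:

import redis  # pip install redis

# Connect to the Redis container (from inside the compose network the host would be "redis").
r = redis.Redis(host="127.0.0.1", port=6379)

info = r.info("memory")
used = info["used_memory"]
limit = info["maxmemory"]  # 0 means no limit is configured

print(f"used_memory: {used / 1024 / 1024:.1f} MiB")
print(f"maxmemory:   {limit / 1024 / 1024:.1f} MiB" if limit else "maxmemory:   unlimited")
# When used_memory approaches maxmemory, the workers' pipelined commands start
# failing with "OOM command not allowed", as in the cron log above.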

But this error still persists:

ERROR relay_server::services::project_upstream: error fetching project state cbb152173d0b4451b3453b05b58dddee: deadline exceeded errors=0 pending=88 tags.did_error=false tags.was_pending=true tags.project_key="xxxxxx"

Seems like this was previously reported as #1929


With the help of this incredible tcpdump command (https://stackoverflow.com/a/16610385/5155484) I managed to see the reply that web sent:

{"configs":{},"pending":["cbb152173d0b4451b3453b05b58dddee","084e50cc07ad4b9f862a3595260d7aa1"]}

Request: POST /api/0/relays/projectconfigs/?version=3 HTTP/1.1

{"publicKeys":["cbb152173d0b4451b3453b05b58dddee","084e50cc07ad4b9f862a3595260d7aa1"],"fullConfig":true,"noCache":false}

@khassad

khassad commented May 13, 2024

Hi, we have the same kind of issues.

As another data point, it appears that our Sentry instance is correctly ingesting events. However, the Stats page is showing 0 accepted/filtered/dropped since the day that rust consumers were merged into master

Same issue here: I can see transactions ingested, but nothing in stats, be it in projects or other aggregated views.

@fmiqbal

fmiqbal commented May 13, 2024

I encountered a relay problem the other day because of envelope buffer capacity exceeded, and then my disk was full. I usually resort to the nuking option, that is, deleting Kafka and Zookeeper, but it didn't immediately work; the relay was still spewing a bunch of timeout errors. After some digging I realized there was possibly also a problem with Redis, so I checked Redis and the RDB size was 2 GB. I deleted that, restarted everything, and it works again.

I still don't know what the initial problem was, though.

@khassad

khassad commented May 13, 2024

After upgrading to 24.4.2 I got some stats back on the Performance and Profiling pages.
But there is still nothing on the Projects dashboard, individual projects, and related views.

@khassad

khassad commented May 13, 2024

I confirm that the fix mentioned by @combrs works:

%s/rust-consumer/consumer/g on your docker-compose.yaml and the problem goes away

Unfortunately this did not help in our case.
We don't have any transaction or session info in Projects-related views.

@msxdan

msxdan commented May 14, 2024

I confirm that the fix mentioned by @combrs works:
%s/rust-consumer/consumer/g on your docker-compose.yaml and the problem goes away

Unfortunately this did not help in our case. We don't have any transaction or session info in Projects-related views.

That worked in our case, but it took like 1 day before it started showing something in performance.

@williamdes
Contributor

Feel free to join my Discord thread: https://discord.com/channels/621778831602221064/1243470946904445008
Discord join link: https://discord.gg/TvshEMuG
For now I found a solution that works pretty well.
When the node starts complaining about the config I run the following: docker exec -it sentry-self-hosted-redis-1 sh -c 'redis-cli DEL relay_config'

Then do a docker compose down and docker compose up.
Please let me know if this works.

@liukch

liukch commented Jun 19, 2024

I think PR #2908 would fix this issue. I don't know why it still hasn't been merged after such a long time.

@williamdes
Contributor

I think PR #2908 would fix this issue. I don't know why it still hasn't been merged after such a long time.

Reverting to the old Python code is not a solution. Fixing the Rust code is a good one.

@rojinebrahimi

Hey everyone!
I reverted the consumers to Python but unfortunately I am still not able to see my transactions. Besides, I have come across some errors in ClickHouse:

2024.07.07 11:11:58.974263 [ 19750 ] {7760523f29425408e575ceb5fbd61469} <Error> TCPHandler: Code: 46. DB::Exception: Unknown function notHandled: While processing ((client_timestamp AS _snuba_timestamp) >= toDateTime('2024-06-27T11:11:58', 'Universal')) AND (_snuba_timestamp < toDateTime('2024-07-07T11:11:55', 'Universal')) AND ((project_id AS _snuba_project_id) IN tuple(17)) AND (notHandled() = 1) AND ((occurrence_type_id AS _snuba_occurrence_type_id) IN (4001, 4002, 4003)) AND (_snuba_project_id IN tuple(17)) AND ((group_id AS _snuba_group_id) IN (187756,...

Does anyone possibly know the solution or the root cause of this problem?

@hheexx

hheexx commented Sep 10, 2024

I have this problem again after updating to 24.8.0!

@hheexx

hheexx commented Sep 10, 2024

Except that this time reverting to the Python Snuba consumers does not work. No more errors, but it still does not work.

@khassad

khassad commented Sep 10, 2024

I have this problem again after updating to 24.8.0!

Same behavior here: transactions showed up partially (some data in the stats area) but not on the global projects page or individual project pages 😢

@DarkByteZero

I had issues with ingestion stopping, but my problem was that I didn't have COMPOSE_PROFILES=feature-complete in my custom env file.
