Internal error in sentry_sdk #2732
Assigning to @getsentry/support for routing ⏲️
Routing to @getsentry/product-owners-issues for triage ⏲️
Likely to be caused by #2386
I am getting the same issue as above, but I haven't yet tried the custom socket options workaround.
Hey @manuellazzari-cargoone and @kieran-sf! Thanks for reporting this. My first suggestion would've been exactly what you tried, @manuellazzari-cargoone, with the custom socket options from #1198 (comment). Are you seeing at least some improvement? I'm curious whether spacing out the events sent from the SDK makes a difference. Can you try overriding the transport's `_send_request` to add a small delay, like this:

```python
import time

from sentry_sdk.transport import HttpTransport


class KeepAliveHttpTransport(HttpTransport):
    def _send_request(self, *args, **kwargs):
        # Briefly pause before each outgoing request to space out events.
        time.sleep(0.01)
        super()._send_request(*args, **kwargs)
```
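For completeness, here's roughly how such a custom transport would be wired up (a minimal sketch; the DSN below is just a placeholder):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    # Pass the custom transport class so the SDK uses it for all outgoing events.
    transport=KeepAliveHttpTransport,
)
```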
@sentrivana thanks for getting back -- when trying the custom socket settings I wasn't able to see any major differences. Now I'm testing the whole package (custom socket settings and custom send request).
@sentrivana the issue is still there even after adding the custom `_send_request` override.
@manuellazzari-cargoone Thanks for the follow up. It looks like the sleep might've had a tiny effect -- at least from the logs it looks like there are fewer occurrences if you look at a comparable time span before and after -- but this could obviously have to do with traffic etc., so I don't think this is exactly conclusive. Are you seeing any network errors for outgoing requests anywhere else in your system? Just trying to rule out general network instability. Alternatively, I'm wondering if there's anything special about the errors/transactions -- maybe they're unusually big, so it takes long to send each one and the server drops the connection? Fiddling around with the socket options might make a difference, too.
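For reference, a minimal sketch of the socket-options workaround discussed in #1198: it subclasses the transport and adds TCP keepalive to the urllib3 pool options. `_get_pool_options` is an internal method and its signature may change between SDK versions; the keepalive values below are assumptions, and the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` constants are Linux-specific.

```python
import socket

from sentry_sdk.transport import HttpTransport


class KeepAliveSocketTransport(HttpTransport):
    def _get_pool_options(self, *args, **kwargs):
        # Reuse the default urllib3 pool options and add TCP keepalive so that
        # idle connections to Sentry are not silently dropped.
        options = super()._get_pool_options(*args, **kwargs)
        options["socket_options"] = [
            (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
            # Assumed tuning values; Linux-only constants.
            (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 45),
            (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10),
            (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6),
        ]
        return options
```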
I'm observing this issue with just 2 services, both running with
I'm not sure about size, but I know for sure we have services dealing with a lot more traffic and a lot bigger packages. In particular, one of the affected services is dealing with very little traffic and small packages and is still experiencing the problem.
Thanks! I think in general we need to reconsider our transport logic and possibly add retries for more cases. (Currently we only retry if we're rate limited.) But at the same time, with this particular error there's no telling if the server actually processed the event and just crapped out at the very end, so it's not clear cut. |
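As a stopgap on the user's side, one could also wrap the send in a small retry loop, with the caveat mentioned above: if the server already processed the event before the connection dropped, a retry will send a duplicate. A rough sketch, assuming overriding the internal `_send_request` hook is still viable; the attempt count and backoff are arbitrary:

```python
import time

from sentry_sdk.transport import HttpTransport


class RetryingHttpTransport(HttpTransport):
    # Assumed values; tune to taste.
    max_attempts = 3
    backoff_seconds = 0.5

    def _send_request(self, *args, **kwargs):
        for attempt in range(1, self.max_attempts + 1):
            try:
                return super()._send_request(*args, **kwargs)
            except Exception:
                # Note: if the server processed the event before the connection
                # dropped, retrying will result in a duplicate event.
                if attempt == self.max_attempts:
                    raise
                time.sleep(self.backoff_seconds * attempt)
```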
FYI @manuellazzari-cargoone we now have an experimental option for this. Maybe this solves your problem?
Hi, I'm getting the following error:
@andres06-hub the error you are experiencing looks different. You are hitting a max retry error. Perhaps there is some issue with the network connection to Sentry? If not, then please open a separate issue with a full explanation of how to reproduce the behavior you are observing, so that we can help you effectively. |
Environment
SaaS (https://sentry.io/)
Steps to Reproduce
I am randomly getting the following error from some of my Python services. The SDK seems to be sending events correctly to sentry.io; I have no clue whether it is skipping some of them. The error seems to occur more under load.
The service is mainly a Flask Python 3.11 app, run by gunicorn on a Kubernetes cluster in multiple instances on Google Cloud. All dependencies are basically up to date, and I'm using sentry-sdk-1.40.2 and urllib3-2.0.7. In an attempt to fix it, I tried to customize the HttpTransport used by the Sentry SDK as follows, with no luck.

Expected Result
No connection errors
Actual Result
Product Area
Issues
Link
No response
DSN
https://1e425c4937e14585ab35335aa4810004@o155318.ingest.sentry.io/4503976860385280
Version
No response