Description
Package: Sentry.AspNetCore, Sentry.OpenTelemetry
.NET Flavor: .NET Core
.NET Version: 9.0.111
OS: Linux
OS Version: Debian 12 (official aspnetcore docker image)
Development Environment: Rider 2024 (Windows)
SDK Version: 5.16.0
Self-Hosted Sentry Version: No response
Workload Versions: aspire 8.2.2/8.0.100 SDK 9.0.100
UseSentry or SentrySdk.Init call

```csharp
builder.WebHost.UseSentry(options =>
{
    // DSN from appsettings
    options.UseOpenTelemetry();
    options.SendDefaultPii = true;
    options.MaxRequestBodySize = RequestSize.Always;
    options.MinimumBreadcrumbLevel = LogLevel.Debug;
    options.MinimumEventLevel = LogLevel.Error;
    options.AttachStacktrace = true;
    options.TracesSampleRate = 0.0;
    options.Environment = builder.Environment.EnvironmentName;
    options.AddExceptionFilterForType<OperationCanceledException>();
    options.AddExceptionFilterForType<KustoClientRequestCanceledByUserException>();
});
```

Steps to Reproduce
- Create a WebApplicationBuilder
- Add a hosted service (a BackgroundService, in our case)
- Run that service in a loop (while (!stoppingToken.IsCancellationRequested) ...)
- Watch the memory usage for a prolonged time (see the sketch below)
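A minimal sketch of such a hosted service, assuming a periodic polling loop like the one described above. The class name QueuePollingService, the DoWorkAsync stub, and the polling interval are hypothetical placeholders for the actual Azure Queue client check:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical BackgroundService mirroring the setup in this report:
// a loop that runs for the lifetime of the host, polling periodically.
public class QueuePollingService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Stand-in for the actual Azure Queue client check.
            await DoWorkAsync(stoppingToken);

            // Illustrative polling interval; throws OperationCanceledException
            // on shutdown, which the host treats as a normal stop.
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }

    private Task DoWorkAsync(CancellationToken ct) => Task.CompletedTask;
}
```

Registered via builder.Services.AddHostedService&lt;QueuePollingService&gt;(); before builder.Build().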
Expected Result
Memory usage stays stable over a longer period.
Actual Result
tl;dr: memory spikes that eventually result in System.OutOfMemoryException.

Longer version:
A few weeks ago we introduced Sentry in four of our backend services. Right after deploying them to our Azure container backend, we noticed OutOfMemory exceptions in two of the services. Both have BackgroundServices where the exceptions occur; each BackgroundService starts a loop that runs for the entire lifetime of the service, periodically checking an Azure Queue client for updates.
We tried a few things, but in the end we worked around the issue with a manual GC.Collect() after each run of the loop, and the memory level went back to normal (as you can see in the screenshot).
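For reference, the workaround amounts to the following, shown inside the ExecuteAsync of the hypothetical service sketched above. GC.Collect() here is a mitigation, not a fix:

```csharp
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        await DoWorkAsync(stoppingToken);

        // Workaround: force a blocking full collection after each iteration.
        // With this in place, memory usage returned to normal levels for us.
        GC.Collect();

        // Illustrative polling interval.
        await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
    }
}
```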