Reduce the logging #2031

Closed · heaven opened this issue Apr 10, 2023 · 15 comments · Fixed by #2216

@heaven commented Apr 10, 2023

Issue Description

Having this in my config isn't sufficient

config.logger.level = Logger::WARN

We are still getting a lot of spam about rate limits, transport, envelopes, and traces (which our subscription plan doesn't even include), plus other messages that mean nothing to us and that we cannot act on. We simply don't need messages we can't resolve.

Reproduction Steps

Just enable Sentry in the app and observe there's a lot of stuff in the log.

Expected Behavior

The ability to set the log level so that we only see real warnings and problems we can actually resolve (i.e. without upgrading our plan).

Actual Behavior

Changing the log level doesn't reduce the logging

Ruby Version

2.7.6

SDK Version

5.7.0 & 5.8.0

Integration and Its Version

No response

Sentry Config

No response

@krainboltgreene

Yeah, this would be sick to have; a good 40% of our logs are envelope sending.

@sl0thentr0py (Member)

yea this is a very fair point, it's just a result of cruft growing over time in the source.
I'll try to clean up all the logging when I have some spare time.

@st0012 (Collaborator) commented Aug 6, 2023

@heaven @krainboltgreene Can you provide some examples of the logs you're seeing? I agree that some messages' log levels should be lowered, but I couldn't reproduce the behaviour you mentioned.

For example, while the default log does contain these envelope logs:

[Transport] Sending envelope with items [event] 70f1b0d8dee84527aab907c3b1f3c388 to Sentry
[Transport] Sending envelope with items [transaction] aca1bf620f4d4c7c9391ab0fb97df261 to Sentry

Once I added config.logger.level = Logger::WARN to config/initializers/sentry.rb (assuming you're using a Rails app too), they disappeared.

If you still see them with the same config, there could be an initialisation/configuration issue that prevents the logger level from being picked up properly. Having some log examples will help us identify the issue, thanks.
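
For reference, a minimal config/initializers/sentry.rb along those lines (a sketch, not a complete config):

if ENV['SENTRY_DSN']
  Sentry.init do |config|
    config.dsn = ENV['SENTRY_DSN']
    # WARN hides the SDK's info-level lines such as
    # "[Transport] Sending envelope with items [...] to Sentry"
    config.logger.level = Logger::WARN
  end
end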

@heaven (Author) commented Aug 6, 2023

@st0012 That changes the log level for the entire application. And even with the application logs silenced this way, I can still see these:
[screenshot: log output, 2023-08-06 19:05]

Sidekiq's logs are also full of these messages:

2023-08-06T17:03:43.183Z pid=81020 tid=32g4 INFO: [Transport] Envelope item [transaction] not sent: rate limiting

This application I'm working with is on Rails 5.2.

@st0012 (Collaborator) commented Aug 6, 2023

@heaven Thanks for the quick response!

That changes the log level for the entire application.

Yeah I noticed this as well and opened #2086 for it.

Wrt the 429 messages, I think that if the errors come from Sentry itself (e.g. rate limiting), the log should be at info level.

@sl0thentr0py Do you know what the possible Sentry error headers are? Will they be important for users reporting issues or tweaking configs?

if response.code == "429"
  handle_rate_limited_response(response)
else
  error_info += "\nbody: #{response.body}"
  error_info += " Error in headers is: #{response['x-sentry-error']}" if response['x-sentry-error']
end
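
In the meantime, a possible workaround for the logger being shared with Rails (a sketch; it just gives the SDK its own stdlib Logger instead of the Rails one):

Sentry.init do |config|
  config.dsn = ENV['SENTRY_DSN']
  # A dedicated logger: raising its level won't touch Rails.logger
  config.logger = Logger.new($stdout)
  config.logger.level = Logger::WARN
end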

@heaven (Author) commented Aug 6, 2023

Wrt the 429 messages, I think if errors are from Sentry (e.g. rate limited), the log should be at info level.

We can't reduce the rate and we can't upgrade, so for us these messages just blow up our logs and nothing more.

UPD: our plan doesn't include transactions, so the server responds with 429 errors all the time.

@st0012 (Collaborator) commented Aug 6, 2023

Our plan doesn't include transaction logging so the server responds with 429 errors all the time.

Can you provide your SDK config (with sensitive data removed)? I think you may have accidentally turned on tracing.
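
If you don't want transactions at all, note (at least on the 5.x SDKs, if I remember the semantics correctly) that leaving traces_sample_rate unset keeps tracing off entirely, whereas a numeric 0 still enables the tracing subsystem and starts transactions that are then sampled away. A sketch:

Sentry.init do |config|
  config.dsn = ENV['SENTRY_DSN']
  # nil (i.e. never set) disables tracing outright; 0 does not
  config.traces_sample_rate = Rails.env.production? ? nil : 1.0
end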

@sl0thentr0py (Member) commented Aug 9, 2023

@st0012 imo users don't need this much detail in the logs; we record Client Reports for rate limits anyway, and support will check those stats for rate-limited users.

@heaven (Author) commented Aug 9, 2023

Our plan doesn't include transaction logging so the server responds with 429 errors all the time.

Can you provide your SDK config (with sensitive data removed)? I think you may accidentally turned on tracing.

Whoops, sorry, I thought I did. Here it is:

if ENV['SENTRY_DSN']
  Sentry.init do |config|
    # config.environment = 'production'
    # config.logger.level = Logger::WARN
    config.dsn = ENV['SENTRY_DSN']
    config.breadcrumbs_logger = [:active_support_logger, :http_logger]

    # To activate performance monitoring, set one of these options.
    # This feature is unavailable in our plan, disable entirely in production.
    config.traces_sample_rate = Rails.env.production? ? 0 : 1

    config.sample_rate = 1.0  # send 100% of events
    config.send_default_pii = true # send user-sensitive info like IP, cookies, request body, query string in the URL

    # sentry-ruby sends events asynchronously with its own background workers;
    # the default number of workers equals your machine's processor count.
    # You can adjust the number with:
    config.background_worker_threads = 1
    config.context_lines = 3

    filter = ActionDispatch::Http::ParameterFilter.new(Rails.application.config.filter_parameters)

    config.before_send = lambda do |event, hint|
      event_hash = event.to_hash
      production = event.environment == 'production'

      filter.filter(event_hash) if production

      if event.exception
        msg = ""

        event.exception.values.each do |e|
          msg << "\n#{e.type}: #{e.value}"

          # Do not print stacktrace in production
          next if production or not e.stacktrace

          msg << "\n"
          msg << e.stacktrace.frames.map { |f| "#{f.abs_path}:#{f.lineno}:in `#{f.function}'" }.reverse.join("\n")
          msg << "\n"
        end

        Rails.logger.error(msg)
      else
        Rails.logger.warn("#{Sentry::Event.get_log_message(event_hash)}: #{event.extra}")
      end

      event if production # returns nil outside production, so the event is dropped
    end
  end
end

@wengzilla

Are there any suggestions for reducing the 429 error messages? We're having the same problem here and would like to just filter them out, as we have no intention of upgrading further.
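
For now, one workaround we're considering (a sketch that matches on message text, not any official API) is a dedicated SDK logger whose formatter drops the rate-limit lines:

require 'logger'

sentry_logger = Logger.new($stdout)
sentry_logger.formatter = proc do |severity, time, _progname, msg|
  # Drop the noisy rate-limit lines; keep everything else
  msg.to_s.include?('not sent: rate limiting') ? '' : "#{time} #{severity}: #{msg}\n"
end

Sentry.init do |config|
  config.dsn = ENV['SENTRY_DSN']
  config.logger = sentry_logger
end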

@sysk472 commented Dec 18, 2023

Have the same problem.

@salrepe commented May 29, 2024

Hi Sentry Team,

I hope this message finds you well. I know this issue is closed, but before opening a new one I wanted to check with you about a specific log line that shows up frequently for us:

[Transport] Sending envelope with items [sessions] to Sentry

You can find the log line in question here:
https://github.com/getsentry/sentry-ruby/blob/41ab3bab04fdb7d41622799f3260ecff778cfc45/sentry-ruby/lib/sentry/transport.rb#L64

We have several instances of this log entry appearing across our projects. Given its frequency, we were wondering whether it would make sense to change its log level to something other than 'info'. That would help reduce noise in our logs and make it easier to focus on more critical issues.

Could you please let us know your thoughts on this?

@sl0thentr0py (Member)

@salrepe that should already be info; can you send me your sentry initializer config?

@salrepe commented May 29, 2024

@sl0thentr0py,

Apologies for the confusion earlier. What we actually wanted to ask is whether this log line could be assigned a log level other than 'info'.

Would it make sense to change its log level? What value does this log entry add to applications?

We will probably raise the log level for Sentry on our side (to WARN), but we're curious about that particular log line.

Thank you!

@sl0thentr0py (Member)

right, yea, makes sense, but I still think we want to log errors/transactions at info and lower the other, higher-volume ones (metrics/sessions) to debug. I'll change it soon.
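
Roughly the idea (a hypothetical, self-contained sketch; names and item types are illustrative, not the final patch):

require 'logger'

# Item types whose envelope logs would be demoted to debug
HIGH_VOLUME_ITEM_TYPES = %w[sessions].freeze # metrics would join this list

def envelope_log_level(item_types)
  (item_types - HIGH_VOLUME_ITEM_TYPES).empty? ? Logger::DEBUG : Logger::INFO
end

logger = Logger.new($stdout)
logger.add(envelope_log_level(%w[sessions]), '[Transport] Sending envelope with items [sessions] to Sentry')       # debug
logger.add(envelope_log_level(%w[transaction]), '[Transport] Sending envelope with items [transaction] to Sentry') # info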
