Finding recap.email errors with deleted Sentry issues
Sentry seems to keep events for at most 90 days after they happen. So, if we did not check them in time, all we have is the sentry-bot issue created in GitHub, with a few lines of traceback and a creation datetime.

Events are recorded in Sentry with at most a few seconds of latency. They are then reported to GitHub within minutes to hours:
- Took 5 minutes: this Sentry issue was created on Dec 6, 2023, 7:49 PM UTC. The corresponding GitHub issue was created on Dec 6, 2023, 7:54 PM UTC.
- Took 3 hours: this Sentry issue was created on Dec 7, 2023, 2:28:56 PM UTC. The corresponding GitHub issue was created on Dec 7, 2023, 5:43 PM UTC.
So, allow for a window of a few hours when narrowing down which emails could have caused the error.
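The timing reasoning above can be sketched as follows. This is a rough sketch; the five-hour margin is just an assumption based on the delays seen above, not a guarantee:

```python
from datetime import datetime, timedelta, timezone

def sentry_event_window(github_issue_created: datetime,
                        max_delay: timedelta = timedelta(hours=5)) -> tuple[datetime, datetime]:
    """Estimate when the Sentry event (and the erroring email) happened.

    GitHub issues trail the Sentry event by minutes to hours, so look
    back `max_delay` from the issue creation time. The 5-hour default
    is a guess; widen it if nothing shows up.
    """
    return github_issue_created - max_delay, github_issue_created

# Example: the GitHub issue created on Dec 7, 2023, 5:43 PM UTC
start, end = sentry_event_window(datetime(2023, 12, 7, 17, 43, tzinfo=timezone.utc))
```

The resulting `start`/`end` pair can then be fed into the `date_created` filters described below.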
Now, we can use the API to look for emails that may have triggered the error. For example: first, use the `date_created` filters. Then use `status!=2` to get the emails that were not processed successfully. More filters are available; do an `OPTIONS` request to the API endpoint to list them. Note that they use Django filtering syntax, so it is useful to know it.
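A minimal sketch of building such a query. The endpoint URL here is hypothetical, and the exact filter names are assumptions in Django-filter style (`status!=2` in a query string is the key `status!` with value `2`); confirm what the API actually accepts with the `OPTIONS` request mentioned above:

```python
from urllib.parse import urlencode

# Hypothetical endpoint path; check the real one in the API docs / OPTIONS response.
API_URL = "https://www.courtlistener.com/api/rest/v4/recap-email-processing-queue/"

def failed_email_params(start_iso: str, end_iso: str) -> dict:
    """Django-filter style params: a date_created range, plus status!=2
    to exclude successfully processed emails (assumed status code)."""
    return {
        "date_created__gte": start_iso,
        "date_created__lte": end_iso,
        "status!": 2,  # trailing '!' negates the filter
    }

params = failed_email_params("2023-12-07T12:43:00Z", "2023-12-07T17:43:00Z")
query = urlencode(params)

# Then fetch the shortlist with an authenticated GET, e.g. (untested sketch):
# requests.get(API_URL, params=params,
#              headers={"Authorization": "Token <your-token>"})
```

Paginate through `results` in the JSON response to build the shortlist of candidate emails.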
Usually, this is enough to shortlist a few emails; then try re-running the top-level function that should make them error.
Be sure to check the UTC offsets in GitHub and in the recap.email objects; they may differ! The GitHub interface shows timestamps in the UTC offset inferred from your IP address.
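To normalize a timestamp that the GitHub UI displayed in a local offset back to UTC, the standard library suffices. This is a sketch; `America/New_York` is just an illustrative zone, substitute your own:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Suppose GitHub showed "Dec 7, 2023, 12:43 PM" in the viewer's local zone (illustrative)
local = datetime(2023, 12, 7, 12, 43, tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2023-12-07T17:43:00+00:00
```

Compare the UTC value, not the displayed local value, against the `date_created` fields in the API.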
Besides the original RECAP email, we may also process documents attached to or mentioned in the email. These produce their own processing queues. We can follow a similar process to catch them; they just have a different API endpoint.