Report sizes seem to increase over days after a restart #3576
Two things I realized while trying to download some older reports in Weave Cloud with the help of #3581:
With these observations in mind, it seems quite hard and unreliable to analyze historic reports through Scope, as we might hit a bug or two with the historic lookup. @bboreham Am I missing something? Are any of the above issues already known? If not, I'll just file new issues. I imagine it would be hard to work around this in the meantime, as my understanding is that S3 only stores parts of reports, so collecting and merging them manually at a given timestamp would be close to impossible.
For the second point, are you sure you're looking at reports from the same node?
Maybe you are looking at the merged report for the whole instance? What we store in S3 is the exact data that came in from each probe; it's a part of the whole instance, but it's a better place to start because it separates any issues in the probe from issues in the merging process.
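To make the distinction concrete, here is a minimal sketch of how per-probe reports could be folded into an instance-wide view. This is illustrative only: the `Report` type and `mergeReports` function below are simplified stand-ins I made up for this comment, not the real `report.Report` from weaveworks/scope, which carries full topologies rather than a single node map.

```go
package main

import "fmt"

// Report is a drastically simplified stand-in for a Scope report:
// one map from node ID to the timestamp of its last observation.
type Report struct {
	Nodes map[string]int64
}

// mergeReports folds per-probe reports into one instance-wide view,
// keeping the most recent observation for each node (last-writer-wins).
func mergeReports(reports ...Report) Report {
	out := Report{Nodes: map[string]int64{}}
	for _, r := range reports {
		for id, ts := range r.Nodes {
			if ts > out.Nodes[id] {
				out.Nodes[id] = ts
			}
		}
	}
	return out
}

func main() {
	probeA := Report{Nodes: map[string]int64{"db-1": 100, "web-1": 90}}
	probeB := Report{Nodes: map[string]int64{"web-1": 95, "cache-1": 97}}
	merged := mergeReports(probeA, probeB)
	fmt.Println(len(merged.Nodes), merged.Nodes["web-1"]) // 3 95
}
```

The point is that a bug visible only in the merged view (but not in any single probe's S3 object) lives in this merging step, while a bug visible in the raw S3 objects lives in the probe itself.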
Another idea would be to use
We got another drop in report sizes after the release of 1.11, three days ago.
So that didn't fix it. I added a log line to
Here is the normal sequence of events:
But sometimes we miss the
Sometimes we lose both
and here's a normal one again:
Noticed in data from a lot of probes - this is the average report size over the last 30 days:
The two drops are at points where all probes were restarted.
The pattern suggests the probes are accumulating state in data structures which does not come back after a restart, and is therefore probably unnecessary.