I followed the instructions in the Getting Started guide, and it turns out that the Log Analytics cost exceeds the cost of the VMs after running for 24 hours, as shown below:

I am not sure whether this is expected, but having to pay much more for logs than for the compute itself does not seem great. My rough understanding is that the agent reports everything in real time, which produces a huge volume of logs to process. Would it be possible to reduce the logging cost by introducing some delay between reports, e.g. by batching stats before sending them (see the sketch below)?

To me, real-time stats are valuable when we are fuzzing something that needs active monitoring so we can improve our harnesses and corpus. However, there are also cases where we need to run for days, if not weeks, without expecting any crashes, just to use the run as a baseline of sorts. Logging this much creates unnecessary cost.
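For illustration, here is a minimal sketch of the kind of batched reporting I have in mind. It is hypothetical, not the agent's actual interface: the `StatsReporter` class and its methods are made up, and a real agent would send the aggregated counters as a single telemetry event rather than printing them.

```python
import time


class StatsReporter:
    """Accumulate fuzzing stats locally and flush them at a fixed
    interval, instead of emitting one log record per event. This trades
    stat freshness for a much smaller ingestion volume."""

    def __init__(self, flush_interval_secs: float = 60.0) -> None:
        self.flush_interval_secs = flush_interval_secs
        self.last_flush = time.monotonic()
        self.pending: dict[str, int] = {}

    def record(self, name: str, count: int = 1) -> None:
        self.pending[name] = self.pending.get(name, 0) + count
        if time.monotonic() - self.last_flush >= self.flush_interval_secs:
            self.flush()

    def flush(self) -> None:
        # A real agent would make a single telemetry call here;
        # printing stands in for that.
        if self.pending:
            print(f"stats: {self.pending}")
        self.pending.clear()
        self.last_flush = time.monotonic()


reporter = StatsReporter(flush_interval_secs=60.0)
for _ in range(100_000):
    reporter.record("executions")
reporter.flush()  # final flush on shutdown
```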
1.4.0 implemented service-level Application Insights sampling that kicks in above 20 events per second. With that change in place, I consider this issue resolved. If the problem persists for you after upgrading to 1.4.0 or later, please re-open this issue.
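For anyone wondering what rate-based sampling of this sort looks like in practice, below is a minimal, hypothetical sketch that caps emission at 20 records per second using a standard-library `logging` filter. It illustrates the technique only; it is not the service's actual implementation, and the names (`RateLimitFilter`, the "agent" logger) are made up.

```python
import logging
import time


class RateLimitFilter(logging.Filter):
    """Drop log records once more than `max_per_second` have already
    been emitted in the current one-second window, mimicking
    rate-based telemetry sampling."""

    def __init__(self, max_per_second: int = 20) -> None:
        super().__init__()
        self.max_per_second = max_per_second
        self.window_start = time.monotonic()
        self.count = 0

    def filter(self, record: logging.LogRecord) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 1.0:
            # Start a new one-second window.
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.max_per_second


logger = logging.getLogger("agent")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
logger.addFilter(RateLimitFilter(max_per_second=20))

# Only roughly the first 20 of these make it through each second.
for i in range(1000):
    logger.info("event %d", i)
```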