We need to measure how fast unblob as a whole can operate, and which strategy speeds up extraction significantly.
An example question we want to answer: which is faster, matching all YARA patterns in a single pass, or scanning the file multiple times with fewer patterns each pass?
Measure different scenarios:

- one big file with a few smaller files embedded inside
- lots of small files concatenated and embedded inside
- multiple big files concatenated and embedded inside
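To make the measurements repeatable, the scenario inputs above could be generated programmatically. A minimal sketch using only the standard library; the file layouts, sizes, and the use of gzip members as the "embedded files" are my assumptions, not unblob's actual fixtures:

```python
import gzip
import os


def make_big_with_small(path, n_small=5):
    """Scenario 1: one big file with a few smaller (gzip'd) files inside."""
    with open(path, "wb") as out:
        out.write(os.urandom(1 << 20))  # 1 MiB of random filler
        for i in range(n_small):
            out.write(gzip.compress(b"small payload %d" % i))
            out.write(os.urandom(4096))  # filler between embedded files


def make_many_small_concat(path, n=1000):
    """Scenario 2: lots of small files concatenated back to back."""
    with open(path, "wb") as out:
        for i in range(n):
            out.write(gzip.compress(b"file %d" % i))
```

Running each extraction strategy against all three generated inputs should reveal whether the winner depends on file layout or holds across the board.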
Refactor the priority handling by concatenating all YARA rules and handling the match results by priority, instead of scanning a file multiple times. Measure the difference on various files.
There is pytest-benchmark, which we can use to write benchmark tests with a special marker that is ignored by default but can easily be selected to run.