Describe the bug
Bandit's run time grows roughly quadratically with the number of short string literals in a large set (or list) in the scanned file, making scans of such files extremely slow.
To Reproduce
Steps to reproduce the behavior:
Run bandit on any of the files attached in Examples.zip (a sketch of a generator that produces comparable test files follows these steps).
Notice how the run time grows quadratically: user time approximately quadruples as the number of strings in the set doubles.
(This also occurs if the large sequence of short strings is a list rather than a set.)
Python 2 versus Python 3: though the latter runs a little faster overall, the quadratic growth is still evident in both.
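Examples.zip is attached to the original report and is not reproduced here. As a rough stand-in (my own sketch, not taken from the archive), the following script generates Python modules that each contain a single large set literal of short strings, at doubling sizes, so the scaling of bandit's run time can be measured:

```python
# Hypothetical stand-in for the attached Examples.zip: write Python modules
# whose only content is one large set literal of short strings, at doubling
# sizes, so bandit's run time can be measured against input size.

def make_example(path, n_strings):
    """Write a module containing a set of n_strings short string literals."""
    with open(path, "w") as f:
        f.write("WORDS = {\n")
        for i in range(n_strings):
            f.write('    "w%d",\n' % i)
        f.write("}\n")

if __name__ == "__main__":
    for n in (25000, 50000, 100000, 200000):
        make_example("example_%d.py" % n, n)
```

Timing bandit on each of these files individually should show user time roughly quadrupling at each step rather than doubling.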
Expected behavior
Run time scales roughly linearly with input size, even for files containing very long collections of string literals.
Additionally, it would be useful to see exactly which file is currently being processed so that such bottlenecks can be located, rather than only the aggregate progress output (`242 [0.. 50.. 100.. 150..`). Debug output is far too noisy for this purpose when scanning hundreds of files. (A per-file timing workaround is sketched below.)
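In the meantime, a minimal workaround sketch: time bandit on each file separately from a small driver script so that slow files stand out. This assumes `bandit` is on PATH and uses only the standard single-file invocation `bandit <path>`; it is not part of bandit itself.

```python
# Workaround sketch: run bandit on each file separately and print the elapsed
# wall-clock time per file, so unusually slow files stand out.
import subprocess
import sys
import time

def time_bandit(paths):
    for path in paths:
        start = time.time()
        # bandit's own output is discarded; only the timing is reported
        subprocess.run(
            ["bandit", path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        print("%8.2fs  %s" % (time.time() - start, path))

if __name__ == "__main__":
    time_bandit(sys.argv[1:])
```

Usage: `python time_bandit.py path/to/*.py`.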
Bandit version
For Python 2:
bandit 1.5.1
python version = 2.7.15 (default, Jan 12 2019, 21:07:57) [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)]
For Python 3 (slightly faster):
bandit 1.5.1
python version = 3.6.8 (default, Jan 25 2019, 14:34:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)]
Additional context
n/a