[FEATURE REQUEST] Implementation of scan time limits per individual url when fuzzing in parallel #1070
Comments
Howdy! Does --time-limit satisfy this? I'm guessing not, but want to make sure.
Thanks for the prompt response; I do not believe so.
I didn't think it was what you were after, but had to ask, lol. It's a good suggestion; I'll take a look, see how feasible this is and how much effort it would require, and report back.
Also, just for clarification: you're asking for limits placed on an instance of feroxbuster (i.e. one of the parallel processes; at the URL level)? NOT time limits on each individual scan (folder level; each progress bar in a normal scan).
Choose carefully 😅 The URL level looks pretty easy to implement; I haven't explored per-directory yet.
Correct, the former: just limits at the URL level.
Ok, I think that's a pretty simple fix tbh. I'll play with it this evening or tomorrow and see if my thought works out.
Pick up a new build from this pipeline and give it a shot; lmk how it goes: https://github.com/epi052/feroxbuster/actions/runs/7755538303
Just tested it, and I even threw in one of the subdomains that is just a redirect to a login page; it seems to work flawlessly :)
Glad to hear it! Yea, it seemed to work afaict, but I strongly prefer the ticket creator to give fixes a run against their targets, since they're more familiar with what a solution should look like (from a user perspective). Thanks for checking! I'll get this merged in sometime soon.
@all-contributors add @NotoriousRebel for ideas
I've put up a pull request to add @NotoriousRebel! 🎉
Is your feature request related to a problem? Please describe.
For example, let's say I do:
cat urls.txt | feroxbuster --stdin --parallel 4 --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt -o outfolder
In some cases, even though the run is parallel, certain URLs become a bottleneck and end up being fuzzed for 12+ hours. This bogs down scan times tremendously, especially when the URL list contains 100+ URLs and multiple URLs are stuck scanning like this.
Describe the solution you'd like
A new flag, maybe along the lines of --individual-time-limit or --url-time-limit, or whatever name makes the most sense. When running in parallel, this flag would track how long each individual URL has been fuzzed; if that time exceeds the value set by the flag, the scan for that URL is gracefully stopped and the run moves on to the next URL in the file.
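For illustration only, here is a sketch of how the proposed flag might slot into the invocation above; the flag name and the 2h value are hypothetical, and neither exists in feroxbuster yet:

# Hypothetical: cap each URL at 2 hours using the proposed (not yet implemented) flag
cat urls.txt | feroxbuster --stdin --parallel 4 --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt -o outfolder --url-time-limit 2h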
Describe alternatives you've considered
As an alternative, what I have had to do is monitor the scans with ps aux | grep ferox, write down which URLs are currently running, check back throughout the day, and if any are still running after an egregious amount of time, kill -9 the PID. This is extremely inefficient and has led to scans taking days when they should be much shorter.
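Until a built-in option exists, the manual babysitting can be roughly automated from the shell. This is only a sketch: it assumes GNU coreutils timeout and xargs are available, omits the -o output handling from the original command, and uses an arbitrary 2-hour cap.

# Launch one feroxbuster process per URL, four at a time; terminate any
# process still running after 2 hours so the remaining URLs are not blocked.
xargs -a urls.txt -P 4 -I {} \
    timeout 2h \
        feroxbuster -u {} --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt

This avoids the kill -9 routine described above, but it is not a substitute for a graceful per-URL stop inside feroxbuster itself.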
Additional context
For some context, the cases I've seen so far where a URL causes a bottleneck are a redirect to a login page and a URL that serves a 504 timeout page. I have not been keeping track of other cases, but as I discover them I will edit this issue as needed.