
[FEATURE REQUEST] Implementation of scan time limits per individual url when fuzzing in parallel #1070

Closed
NotoriousRebel opened this issue Jan 30, 2024 · 12 comments · Fixed by #1072
Labels
enhancement New feature or request


@NotoriousRebel

Is your feature request related to a problem? Please describe.

For example let's say I do:

```
cat urls.txt | feroxbuster --stdin --parallel 4 --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt -o outfolder
```

In some cases, even though the run is parallel, certain URLs become bottlenecks and end up being fuzzed for 12+ hours. This bogs down total scan time tremendously, especially when the list contains 100+ URLs and several of them get stuck this way.

Describe the solution you'd like
A new flag, perhaps along the lines of --individual-time-limit or --url-time-limit, or whatever name makes the most sense. When running in parallel, the flag would track how long each individual URL has been fuzzed; if that time exceeds the configured limit, feroxbuster would gracefully stop that scan and move on to the next URL in the file.
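For illustration only, an invocation with the proposed flag might look like the sketch below; `--url-time-limit` is one of the names suggested above, not an existing option, and the 2h budget is made up:

```
# Hypothetical; --url-time-limit does not exist yet. Each of the 4 parallel
# feroxbuster processes would abandon its URL after 2 hours and pick up the
# next URL from stdin.
cat urls.txt | feroxbuster --stdin --parallel 4 --url-time-limit 2h \
  --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt -o outfolder
```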

Describe alternatives you've considered

As an alternative, what I have had to do is monitor the scans with `ps aux | grep ferox`, write down which URLs are currently running, check back throughout the day, and `kill -9` any PID that has been running for an egregious amount of time. This is extremely inefficient and has led to scans taking days when they should take much less; a scripted version of this workaround is sketched below.
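A rough way to automate the same idea without any new feroxbuster feature, assuming GNU xargs and coreutils timeout(1) are available (the 4-process cap and 2-hour budget are illustrative):

```
# Interim workaround sketch, not a feroxbuster feature: run up to 4
# feroxbuster processes at once, one URL each, and let timeout(1) stop any
# process that exceeds the per-URL budget. SIGINT is sent first so the
# process can shut down gracefully; --kill-after escalates to SIGKILL.
# Per-URL output handling is omitted for brevity.
xargs -a urls.txt -P 4 -I {} \
  timeout --signal=INT --kill-after=30s 2h \
  feroxbuster -u {} --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt
```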

Additional context
For some context, the bottleneck cases I've seen so far are a redirect to a login page and a URL that serves a 504 timeout page. I have not been keeping track of other cases, but as I discover them I will edit this issue as needed.

@epi052 (Owner) commented Jan 30, 2024

Howdy!

Does `--time-limit` satisfy this? I'm guessing not, but want to make sure.

@NotoriousRebel (Author)

Thanks for the prompt response. I do not believe so, as `--time-limit` would stop the entire scan when a bottleneck occurs, which wouldn't be ideal.

@epi052 (Owner) commented Feb 1, 2024

I didn't think it was what you were after, but had to ask, lol.

it's a good suggestion; I'll take a look at how feasible it is and how much effort it requires, and report back

@epi052 (Owner) commented Feb 1, 2024

also, just for clarification, you're asking for limits placed on an instance of feroxbuster (i.e. one of the parallel processes; at the URL level)? NOT time limits on each individual scan (folder level; each progress bar in a normal scan)

@epi052 (Owner) commented Feb 1, 2024

Choose carefully 😅 the URL level looks pretty easy to implement; I haven't explored per-directory yet.

@NotoriousRebel (Author)

Correct, the former: just limits at the URL level.

@epi052 (Owner) commented Feb 1, 2024

Ok, I think that's a pretty simple fix tbh, I'll play with it this evening or tomorrow and see if my thought works out

@epi052 (Owner) commented Feb 2, 2024

pick up a new build from this pipeline and give it a shot; lmk how it goes

https://github.com/epi052/feroxbuster/actions/runs/7755538303

@NotoriousRebel (Author)

Just tested it, and even threw in one of the subdomains that is just a redirect to a login page; it seems to work flawlessly :)

```
cat test.txt | ./testferoxbuster --time-limit 5m --parallel 4 --stdin --threads 6 -L 4 -w wordlist.txt -o test_feroxbuster_limit --json
```

Have you had similar results in your testing?

@epi052 (Owner) commented Feb 2, 2024

Glad to hear it!

Yea, it seemed to work afaict, but I strongly prefer the ticket creator to give fixes a run against their targets, since they're more familiar with what a solution should look like (from a user perspective).

Thanks for checking! I'll get this merged in sometime soon

@epi052 (Owner) commented Feb 28, 2024

@all-contributors add @NotoriousRebel for ideas

@allcontributors[bot] (Contributor)
I've put up a pull request to add @NotoriousRebel! 🎉
