..... #16
hi Brendon!
thanks for making this!
what is the best way to make a crawl that is in progress pause, and then (when the user decides) continue from exactly where it left off?
cheers!!

I would recommend using the RedisUrlList and running the crawler in a separate process that you can kill/resume as and when necessary. With the RedisUrlList/DbUrlList, Supercrawler is designed to work in a distributed way, using Redis to store the crawl state (and locks when a new page crawl is initiated). Hence, you can simply kill/start processes as necessary and Supercrawler will cope with this.

Brendon, once again you are a rock-star. thanks!
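A minimal sketch of the pattern Brendon describes above: the crawl queue and locks live in Redis rather than in the process, so "pausing" is just stopping (or killing) the process and "resuming" is starting it again. This assumes a Redis instance on localhost:6379; the start URL and tuning values (interval, concurrency) are placeholders.

```js
// Sketch: pause/resume a Supercrawler crawl by stopping/restarting the process.
// Assumes Redis is running locally on the default port; example.com is a placeholder.
var supercrawler = require("supercrawler");

var crawler = new supercrawler.Crawler({
  // Crawl state is stored in Redis, not in this process, so killing and
  // restarting the process does not lose progress.
  urlList: new supercrawler.RedisUrlList({
    redis: {
      host: "127.0.0.1",
      port: 6379
    }
  }),
  interval: 1000,
  concurrentRequestsLimit: 5
});

// Discover links in crawled HTML pages.
crawler.addHandler("text/html", supercrawler.handlers.htmlLinkParser());

// insertIfNotExists is safe to call on every restart: if the URL is already
// in the Redis queue, it is left untouched.
crawler.getUrlList().insertIfNotExists(new supercrawler.Url("https://example.com/"));

crawler.start();

// To "pause": call crawler.stop() (or simply kill the process).
// To "resume": run this script again; Supercrawler picks up from the
// state stored in Redis.
```

On a restart, the same script reattaches to the existing Redis queue, which is why the kill/start approach works without any explicit pause API.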