Check retry option #178
Greetings! linkinator doesn't support this today, but adding it would not be terribly difficult. The underlying HTTP request library we use is gaxios, which natively supports retries! How would you imagine something like this working? A few questions!
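For reference, here is a minimal sketch of what gaxios' built-in retry support looks like when used directly. The field names follow gaxios' RetryConfig, but treat the exact shape and values as assumptions, not linkinator's API:

```ts
// Sketch: enabling gaxios' native retries directly (shape of RetryConfig
// is assumed from gaxios' documented options, not from linkinator).
import { request } from 'gaxios';

async function fetchWithRetries(url: string) {
  return request({
    url,
    retry: true,
    retryConfig: {
      retry: 3, // total retry attempts
      // status ranges to retry: 429 (rate limited) and all 5xx server errors
      statusCodesToRetry: [[429, 429], [500, 599]],
    },
  });
}
```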
Hello! If it's O.K., I'd also like to share my preferences on the questions above.
Of course! I'd love to learn more about what folks would like here. The implementation isn't hard; there's just a lot of wiggle room in exactly what we do.
Thank you. IMHO, I would suggest the following strategy for this issue.
Retry 5xx status codes, plus 429 as the one 4xx exception, for simplicity. Regarding 5xx status codes: in addition to 503, the 500, 502, and 504 codes are often temporary errors. For example, some sites respond with a 500 status code when their middleware stack is overloaded. To avoid specifying and managing status codes one by one, I think we can treat all 5xx status codes as the same server-side problem. Among 4xx status codes, 429 is unique in that it depends on when and how often we make requests (cf. #179). This condition does include some false positives, where some deterministically broken links are retried. However, I think we can prioritize simplicity and exhaustiveness, assuming that the ratio of broken links to total links is very small and that we have to handle real-world sites. A sketch of this classification follows below.
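As a sketch of the rule above (a hypothetical helper, not linkinator code):

```ts
// Hypothetical helper: retry 429 plus every 5xx status, and treat all
// other statuses as deterministic failures that should not be retried.
function isRetryable(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600);
}
```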
Yes, exponential backoff; more specifically, I think binary exponential backoff with jitter provides a good solution. The jitter disperses the retries along the temporal axis, which helps mitigate a surge of retried requests under high concurrency.
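A minimal sketch of binary exponential backoff with full jitter (names and constants are illustrative):

```ts
// Delay for the nth attempt is drawn uniformly from [0, base * 2^attempt],
// capped at maxMs; the randomness spreads concurrent retries over time.
function backoffDelayMs(attempt: number, baseMs = 100, maxMs = 30_000): number {
  const ceiling = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}
```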
Yes.
FYI, there seems to be a new feature in a recent release that addresses this problem: #354
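If that feature is a retry option on the check API, usage might look roughly like this. The `retry` option name is an assumption inferred from this thread, not confirmed here:

```ts
// Hedged sketch: assumes a recent linkinator release exposes a boolean
// `retry` option on check() (the option name is an assumption).
import { LinkChecker } from 'linkinator';

async function main() {
  const checker = new LinkChecker();
  const results = await checker.check({
    path: 'https://example.com',
    retry: true, // retry transiently failing requests per the linked feature
  });
  console.log(results.passed ? 'all links passed' : 'some links failed');
}

main();
```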
When checking a massive number of links, the chance that some link will sporadically fail is high.
Does linkinator provide an option for re-checking failing links (e.g., some sort of retry with random waiting)?