[🚀 Feature]: Automatically drain node after n-failed session attempts #13865
@krishtoautomate, thank you for creating this issue. We will troubleshoot it as soon as we can.

Info for maintainers: triage this issue by using labels.

- If information is missing, add a helpful comment and then the appropriate label.
- If the issue is a question, add the appropriate label.
- If the issue is valid but there is no time to troubleshoot it, consider adding the appropriate label.
- If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable label.
- After troubleshooting the issue, please add the appropriate label.

Thank you!
Why is the Docker node failing? How are you monitoring that? Can you share more details? It sounds like we want to fix something with a workaround instead of a proper fix.
Failure reasons can vary: a proxy or network issue, or Chrome crashing or failing to launch due to a driver mismatch. That was just an example to explain the use case; the node could be an Appium node as well.
But that is a Node misconfiguration. Infrastructure needs to be tested before being made available for use. Implementing something like this would hide issues. It is an incomplete workaround.
If we have many nodes, some of them might fall behind for different reasons, and we don't want our tests to be impacted by nodes that are not working. Is there any way this feature could be implemented behind a CLI argument that activates it, with the default being off?
If you have that many nodes, how are you monitoring them? Why would there be a driver mismatch? Are you not testing the changes before sharing the modified infrastructure with the rest of the users? If a Node is not working, testing the changes done to it should alert you even before you run any regular tests. Suppose this feature request is implemented, and you have a driver mismatch or network issues with your nodes. What ends up happening is that all of them get shut down. How would you diagnose the actual problem? The result is that the infrastructure went down, and you need to figure out why.
This issue was closed because we did not receive any additional information after 14 days.

This issue has been automatically locked since there has not been any recent activity since it was closed. Please open a new issue for related bugs.
Feature and motivation
A user might have n nodes, and if sessions are randomly allocated during a test run, they may keep failing on a few nodes due to an issue with a Docker node. It would be great if this feature were implemented: automatically drain a node after n failed session attempts. An external sketch of the idea follows.
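Grid 4 already exposes a drain endpoint on each node, so behavior close to this request can be approximated today with an external watchdog, without changes to the Grid itself. Below is a minimal Python sketch along those lines: it polls a node's /status endpoint and posts to /se/grid/node/drain after a run of consecutive failures. Health checks here are a stand-in for failed session attempts (which are not directly observable from outside the Grid without log or GraphQL scraping), and NODE_URL, REGISTRATION_SECRET, MAX_FAILURES, and POLL_SECONDS are assumed deployment-specific values; verify both endpoints against the Grid version you run.

```python
import time

import requests

NODE_URL = "http://localhost:5555"   # hypothetical node address
REGISTRATION_SECRET = "my-secret"    # must match the Grid's registration secret
MAX_FAILURES = 5                     # the "n" in "n failed attempts"
POLL_SECONDS = 30


def node_is_healthy() -> bool:
    """Treat a non-200 response, an exception, or an unready node as a failure."""
    try:
        resp = requests.get(f"{NODE_URL}/status", timeout=10)
        return resp.status_code == 200 and resp.json()["value"]["ready"]
    except (requests.RequestException, KeyError, ValueError):
        return False


def drain_node() -> None:
    """Ask the node to drain: finish in-flight sessions, then stop accepting new ones."""
    requests.post(
        f"{NODE_URL}/se/grid/node/drain",
        headers={"X-REGISTRATION-SECRET": REGISTRATION_SECRET},
        timeout=10,
    )


failures = 0
while failures < MAX_FAILURES:
    failures = 0 if node_is_healthy() else failures + 1
    time.sleep(POLL_SECONDS)

drain_node()
```

Draining (rather than killing) the node lets any in-flight sessions finish, so only new session requests stop landing on the bad node.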
Usage example
When new RemoteWebDriver sessions are being created across n nodes, with 100-200 test cases running in parallel, any Docker nodes that have issues would drain automatically after n (5-10) failed session attempts. The total impact on the test execution is small, and I end up re-executing only the few tests that failed initially.
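Until something like this exists in the Grid itself, a client-side retry around session creation can reduce the blast radius, since the Hub may route each new attempt to a different node. A minimal sketch, assuming a Python test suite; HUB_URL and MAX_ATTEMPTS are placeholder values:

```python
from selenium import webdriver
from selenium.common.exceptions import WebDriverException

HUB_URL = "http://localhost:4444"  # hypothetical Hub address
MAX_ATTEMPTS = 3                   # failed attempts tolerated before giving up


def create_session() -> webdriver.Remote:
    """Request a new session, tolerating a few failed attempts on bad nodes."""
    options = webdriver.ChromeOptions()
    last_error = None
    for _ in range(MAX_ATTEMPTS):
        try:
            # Each attempt is a fresh session request; the Hub can route it
            # to a different node than the one that failed previously.
            return webdriver.Remote(command_executor=HUB_URL, options=options)
        except WebDriverException as exc:
            last_error = exc
    raise last_error


driver = create_session()
driver.quit()
```

Retrying only masks a bad node per test; it complements, rather than replaces, the requested drain behavior, which would take the node out of rotation entirely.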