Support thread group #51
Comments
@BusyJay nice! A bunch of high-level questions:
I'm not sure. I find it more convenient to just register the current thread instead of others. For example, a thread pool generally provides
When evaluating, it uses the current thread ID to find out which thread group it belongs to, and then checks the rules of the group.
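A minimal sketch of that lookup, assuming a two-level map (thread ID to group name, group name to rules); the `THREAD_GROUPS`/`GROUP_RULES` statics and the `rule_for` helper are hypothetical names, not fail-rs items:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};
use std::thread::{self, ThreadId};

// Hypothetical global state: which group each registered thread belongs to,
// and the fail point rules configured for each group (name -> actions).
static THREAD_GROUPS: OnceLock<Mutex<HashMap<ThreadId, String>>> = OnceLock::new();
static GROUP_RULES: OnceLock<Mutex<HashMap<String, HashMap<String, String>>>> = OnceLock::new();

fn thread_groups() -> &'static Mutex<HashMap<ThreadId, String>> {
    THREAD_GROUPS.get_or_init(Default::default)
}

fn group_rules() -> &'static Mutex<HashMap<String, HashMap<String, String>>> {
    GROUP_RULES.get_or_init(Default::default)
}

// Evaluating a fail point: current thread ID -> group -> that group's rule.
// Threads that were never registered fall back to the default global group.
fn rule_for(fail_point: &str) -> Option<String> {
    let group = thread_groups()
        .lock()
        .unwrap()
        .get(&thread::current().id())
        .cloned()
        .unwrap_or_else(|| "default".to_string());
    group_rules()
        .lock()
        .unwrap()
        .get(&group)
        .and_then(|rules| rules.get(fail_point).cloned())
}
```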
It seems complex. Why can't we launch multiple processes, with each process running some tests in a single thread?
Processes aren't compatible with the default test runner.
Is your feature request related to a problem? Please describe.
fail-rs uses a global registry to expose simple APIs and convenient FailPoint definitions. But it also means all parallel tests have to run one by one, with cleanup between each run, so that configurations don't affect each other.
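For instance, two tests that exercise the same fail point can interfere when run in parallel. A sketch using fail-rs's `fail::cfg`/`fail::remove` and the `fail_point!` macro (assuming the `failpoints` feature is enabled; the `read_value` fail point is made up for illustration):

```rust
use fail::fail_point;

fn read_value() -> u64 {
    // This consults fail-rs's single global registry.
    fail_point!("read_value", |_| 0);
    42
}

#[test]
fn test_a() {
    // Configures the *global* "read_value" fail point.
    fail::cfg("read_value", "return").unwrap();
    assert_eq!(read_value(), 0);
    fail::remove("read_value");
}

#[test]
fn test_b() {
    // Run in parallel with test_a, this can observe test_a's
    // configuration and fail spuriously.
    assert_eq!(read_value(), 42);
}
```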
Describe the solution you'd like
This issue proposes to use thread groups. Each test case defines a unique thread group, and all configuration is bound to exactly one thread group. Every time a new thread is spawned, it needs to be registered to a thread group so that FailPoint reads the right configuration. If a thread is not registered to any group, it belongs to a default global group.
New public APIs include:
Note that it doesn't require users to be able to spawn threads themselves; registering the thread before using FailPoint is enough.
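The concrete API list isn't captured above; one hypothetical shape consistent with the description (all names illustrative, none of this is part of fail-rs today) might be:

```rust
#[derive(Clone)]
pub struct ThreadGroup {
    // group id, shared rule table, ...
}

impl ThreadGroup {
    /// Create a fresh group, typically one per test case.
    pub fn new(name: &str) -> ThreadGroup {
        todo!()
    }

    /// Bind fail point configuration to this group only, instead of to
    /// the global registry.
    pub fn cfg(&self, fail_point: &str, actions: &str) -> Result<(), String> {
        todo!()
    }
}

/// Register the *current* thread to a group, so fail points evaluated on
/// this thread use that group's rules. Unregistered threads keep using
/// the default global group.
pub fn register_current_thread(group: &ThreadGroup) {
    todo!()
}
```

A test case would then create one group, configure its fail points on that group, and have any worker threads it controls call `register_current_thread` before they hit a fail point.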
Describe alternatives you've considered
One solution is to pass the global registry to struct constructors, but that would interfere with the general code heavily: the registry would need to be passed everywhere FailPoints are defined.
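To illustrate why that would be invasive, here is a hypothetical sketch (the `FailRegistry` type is made up): every struct and constructor on the path to a fail point ends up carrying the registry handle.

```rust
use std::sync::Arc;

// Stand-in for a per-test fail point registry; not a real fail-rs type.
struct FailRegistry;

struct Storage {
    // Every component that contains a fail point now needs this field...
    fail: Arc<FailRegistry>,
}

struct Server {
    // ...and every layer above it has to accept and forward the handle,
    // even in production builds where fail points are compiled out.
    storage: Storage,
    fail: Arc<FailRegistry>,
}

impl Server {
    fn new(fail: Arc<FailRegistry>) -> Server {
        Server {
            storage: Storage { fail: fail.clone() },
            fail,
        }
    }
}
```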
Another solution is #24, but it lacks support for threaded cases.