Remove the core.Engine 🎉 #2815
Conversation
// We call waitOutputsFlushed() below because the threshold calculations
// need all of the metrics to be sent to the MetricsEngine before we can
// calculate them one last time. We need the thresholds calculated here,
// since they may change the run status for the outputs.
Instead of doing this in the output manager, could we do an explicit call to the ingester? I think it would simplify the logic a bit.
hmm, maybe, not sure it would be simpler though 🤔 Here in the defer statements, we just have the following sequence after the test run finishes:
- close the `samples` channel
- wait for all outputs to finish flushing their metrics
- finalize the thresholds (unless `--no-thresholds` was used)
- stop all outputs

To me, this makes sense even if we completely ignore thresholds 🤔 and I am not sure how we can simplify it, when the piping of the metrics also happens through the `output.Manager` 🤔
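For reference, here is a minimal Go sketch of that shutdown sequence. All of the names (`runShutdownSequence`, the `samples` channel type, the callback parameters) are illustrative placeholders, not the actual identifiers in k6's `cmd` package; only the ordering of the four steps is taken from the discussion above.

```go
package main

import "fmt"

// runShutdownSequence mirrors the four steps listed above. The names are
// placeholders; in k6 the real logic lives in deferred functions of the
// Run method and uses k6's own types.
func runShutdownSequence(
	samples chan struct{}, noThresholds bool,
	waitOutputsFlushed, finalizeThresholds, stopOutputs func(),
) {
	// 1. Close the samples channel so no new metric samples are produced.
	close(samples)

	// 2. Wait for all outputs to finish flushing their buffered metrics.
	waitOutputsFlushed()

	// 3. Finalize the thresholds one last time, since they may change the
	//    run status reported to the outputs (skipped with --no-thresholds).
	if !noThresholds {
		finalizeThresholds()
	}

	// 4. Stop all outputs now that everything has been flushed and evaluated.
	stopOutputs()
}

func main() {
	samples := make(chan struct{})
	runShutdownSequence(samples, false,
		func() { fmt.Println("outputs flushed") },
		func() { fmt.Println("thresholds finalized") },
		func() { fmt.Println("outputs stopped") },
	)
}
```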
Not a blocking point: I think it could be helpful to add these 4 points to the comment, so we have one central place where we can read the flow without going through the entire code of the Run method. It would also help when we work on #2430.
👍, but let's add this in the PR that converts the few remaining tests for the old Engine, I'll merge this one shortly
Force-pushed from 195c2ab to 818cee6
Force-pushed from 3dbeb3b to 8324c42
Codecov Report
@@             Coverage Diff             @@
##           master    #2815      +/-   ##
==========================================
+ Coverage   76.81%   76.84%   +0.03%
==========================================
  Files         225      222       -3
  Lines       16867    16846      -21
==========================================
+ Hits        12956    12945      -11
+ Misses       3078     3065      -13
- Partials      833      836       +3
Force-pushed from 818cee6 to d24bcc2
Force-pushed from 4dd18e5 to 2e4f071
Force-pushed from 30908a1 to cb17842
Force-pushed from 3c00d58 to 6f6b105
LGTM 🎉
Force-pushed from c234955 to d0eb9d3
Force-pushed from 9fd527b to 63ae6b0
Force-pushed from 16c82bc to ee62441
This is the first commit from #2815, plus an extra test that checks the exit code when aborting a `k6 run --paused` test with Ctrl+C.

Unfortunately, it turned out that 19dd505 from #2813 introduced a regression. Interrupting the k6 process with a signal should result in a `105` exit code (`ExternalAbort`), but with that commit k6 exited with `103` if it was `--paused`. The good news is that the fix was already done in the first commit of #2815, which was entirely self-sufficient and could be moved to this separate PR. As a bonus, this makes reviewing everything even easier...

So, some details about the commit itself... In short, there is no need to have 2 separate `Run()` and `Init()` methods in the `execution.Scheduler`. Instead, `Run()` can start the VU initialization itself. This immediately makes the error handling around init errors much more in line with the rest of the error handling, allowing us to resolve the bug, but also to respect `--linger` and to try to execute `handleSummary()` if there were problems. Except for the first init that is used to get the exported options, of course; that one is still special.
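A rough sketch of that change, with invented types and method bodies (the real `execution.Scheduler` has a different signature and does much more); the only point illustrated is that VU initialization now happens inside `Run()`, so init errors come back through the same return path as run errors.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// scheduler is a stand-in for execution.Scheduler; its fields and logic
// are made up for illustration.
type scheduler struct {
	vusToInit int
}

// Run starts the VU initialization itself instead of relying on a separate
// Init() call, so the caller handles init and run errors the same way
// (e.g. it can still respect --linger and run handleSummary()).
func (s *scheduler) Run(ctx context.Context) error {
	if err := s.initVUs(ctx); err != nil {
		return fmt.Errorf("initializing VUs: %w", err)
	}
	// ... the actual test execution would happen here ...
	return nil
}

func (s *scheduler) initVUs(ctx context.Context) error {
	if s.vusToInit <= 0 {
		return errors.New("nothing to initialize")
	}
	return nil
}

func main() {
	err := (&scheduler{vusToInit: 0}).Run(context.Background())
	fmt.Println(err) // one error path for both init and run failures
}
```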
Force-pushed from ee62441 to 943b1e7
Force-pushed from e87ee32 to 27b1a4d
This reverts a big part of #2885 because with all of the logic in Run(), k6 didn't know if it needed to wait when --linger was specified. In general, if a script error or test.abort() occurs during init, --linger should not apply and k6 should exit immediately. But if test.abort() is called during the test run itself, or the test was stopped in some other way besides Ctrl+C, --linger means that k6 should not exit immediately after stopping the test.
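A tiny sketch of that decision, using an invented helper (the real logic in k6 tracks the run state differently): --linger is honored only once the test run itself has started, never for init-time failures.

```go
package main

import "fmt"

// shouldLinger is an illustrative helper, not a real k6 function. It only
// captures the rule described above: errors or test.abort() during init
// make k6 exit immediately, while a test stopped during the run itself
// (other than via Ctrl+C) still waits when --linger is set.
func shouldLinger(lingerFlag, initFailed bool) bool {
	return lingerFlag && !initFailed
}

func main() {
	fmt.Println(shouldLinger(true, true))  // false: init error, exit immediately
	fmt.Println(shouldLinger(true, false)) // true: run started, keep lingering
}
```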
LGTM
This PR builds on top of #2813 and closes #1889, finally 🎉 🎊 It also leaves us in a much better position to implement #1342, #140, #2804, #2430, and a myriad of other issues 🎉
Some of the less important tests still need to be ported, so it is still technically a draft PR that's not quite ready to merge as-is. Still, given that the new integration tests all work fine, I am fairly confident there aren't any major problems 🤞 I also don't intend to make any big changes to the code here, so it's more than ready for review.