Add random scaling tests, original #997 scaling algorithm #1339
Conversation
It has some new handy assertions like Greater(), and the previous version was 2 years old.
This frequently fails on the current implementation.
Codecov Report
@@ Coverage Diff @@
## new-schedulers #1339 +/- ##
==================================================
- Coverage 76.78% 76.69% -0.09%
==================================================
Files 160 160
Lines 12511 12523 +12
==================================================
- Hits 9606 9604 -2
- Misses 2396 2410 +14
Partials 509 509
Continue to review full report at Codecov.
I really don't like the upgrade of the require lib in this PR just to get a small helper function which can be emulated with require.True.
I also happened to copy paste your tests in my #1323 and intended on doing all of the refactorings I propose here , so if we agree on those maybe just drop the testing and benchmarks?
Or maybe wait until we are done with the other PR and then we will add this on top as I don't think we actually will use it anywhere ...
var i, lastResult int64
for i = 1; i < 1000; i++ {
	result := es.Scale(i)
	require.GreaterOrEqual(t, result, lastResult)
I don't think upgrading the lib for this is worth it ... especially in this commit
Well, the previous version is 2 years old at this point, and while I agree that we could use require.True for these cases, it makes sense to use a native API if it's already implemented in a version we should upgrade to anyway. :)
especially in this commit
In this PR, you mean? I needed it here, that's why, though let me know if you'd prefer a separate PR for it.
Why are you against upgrading stretchr/testify?
@na--: thoughts?
I don't really mind, especially considering that it's only used in our tests. I think we upgrade dependencies far too rarely anyway...
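For reference, the two assertion styles discussed in this thread check the same condition. A minimal standalone sketch (the greaterOrEqual helper below is hypothetical, not part of testify or k6) of the boolean both forms reduce to: with the upgraded testify one writes require.GreaterOrEqual(t, result, lastResult), while the older version can express the same check as require.True(t, result >= lastResult).

```go
package main

import "fmt"

// greaterOrEqual is a hypothetical helper illustrating the condition that
// both require.GreaterOrEqual(t, a, b) and require.True(t, a >= b) assert.
// The main practical difference in testify is the failure message, not the check.
func greaterOrEqual(a, b int64) bool {
	return a >= b
}

func main() {
	fmt.Println(greaterOrEqual(5, 5)) // prints "true"
	fmt.Println(greaterOrEqual(4, 5)) // prints "false"
}
```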
}

func TestMain(m *testing.M) {
	rand.Seed(time.Now().UnixNano())
I'd prefer it if we at least log the seed somehow, so that when a run produces an error we can see which seed caused it, even if in general it doesn't.
I would also really prefer if we actually don't use TestMain, but instead each test has its own source, as I've done in https://github.com/loadimpact/k6/blob/af4321099e8f65f9279e92c8ec46a6f7d3c1cc29/lib/execution_segment_test.go#L493-L494. Maybe we can make it use time.Now() for tests and keep it using a constant for the benchmarks?
I'd prefer it if we at least log the seed somehow, so that when a run produces an error we can see which seed caused it, even if in general it doesn't.
You mean log the chosen seed? Sure, I'll add it.
I would also really prefer if we actually don't use TestMain, but instead each test has its own source, as I've done in
What would be the reason for that? In this case neither test "cares" about its random source, it just needs it to use a different seed on each run, so I don't see what we would gain by initializing a new source separately. After all, tests that need a constant seed can initialize their own and not use the global source, like your example.
maybe we can make it use time.Now() for tests and keep it using a constant for the benchmarks?
The benchmark here doesn't depend on rand, but yeah, benchmarks that do should use a constant seed.
Yeah, I guess there's not much value in merging this then. We can close the PR and leave the branch for reference if you want. Let me know.
I think merging
Closing this since most of it is irrelevant now, except for
This adds the original scaling algorithm described in #997 as a separate ExecutionSegment.ScaleRemainder function, and adds a couple of random scaling tests, one of which is disabled until the implementation is fixed. It also updates stretchr/testify in order to use the new Greater* assertions. This effort is done to determine whether the previous algorithm behaves better when scaling segments, in order to fix #1296. Result: no, it doesn't, but we decided to leave it in for future reference, and will probably delete it later.