tests for #1007 #1117
Conversation
Codecov Report
@@ Coverage Diff @@
## new-schedulers #1117 +/- ##
==================================================
+ Coverage 72.04% 74.81% +2.77%
==================================================
Files 158 158
Lines 12232 12241 +9
==================================================
+ Hits 8812 9158 +346
+ Misses 2924 2591 -333
+ Partials 496 492 -4
Continue to review full report at Codecov.
Force-pushed from c42e218 to 93ff4da
var size int
for i, stage := range varc.Stages {
	if stage.Duration.Duration == 0 &&
		(i == 0 ||
			(i >= 2 && varc.Stages[i-1].Target.Int64 != stage.Target.Int64 &&
				varc.Stages[i-1].Target.Int64 == varc.Stages[i-2].Target.Int64)) {
		size++
	} else if i == 0 || varc.Stages[i-1].Target.Int64 != stage.Target.Int64 {
		size += int(time.Duration(stage.Duration.Duration) / minIntervalBetweenRateAdjustments)
	}
}

rateChanges := make([]rateChange, 0, size)
This seems a hugely complicated piece of code for what (I think?) is essentially a very minor and ultra premature optimization...
Yeah, I vote 👍 for dropping this. Now that I look at it again, I think it doesn't have a significant performance benefit, since you'd probably be saving some memory, but at the expense of actually calculating things almost twice - once here and once below...
I have added the benchmark results ;) 93ff4da
Didn't investigate the benchmark very thoroughly, but I find it a bit strange that this function takes up to a megabyte of RAM 😕 to execute...
btw I am wondering if the types.NullDuration can't just be a time.Duration ... but I will need to look at the code ... as this makes everything around it funkier to write ...
IIRC, I intentionally used a NullDuration to make it apparent that the value could be 0 (given that we can't pass that zero value to time.NewTicker()) - see the comments in type rateChange struct {}.
comparing the memory profile using the current tip as the base
(also some changes so that the lines stay the same):
flat flat% sum% cum cum%
-195.48MB 12.19% 12.19% -198.98MB 12.41% github.com/loadimpact/k6/lib/executor.VariableArrivalRateConfig.getPlannedRateChanges
-17MB 1.06% 13.25% -17MB 1.06% math/big.nat.make
13.50MB 0.84% 12.41% -2MB 0.12% math/big.scaleDenom
1.07MB 0.067% 12.34% 1.07MB 0.067% archive/zip.readDirectoryEnd
0 0% 12.34% 1.07MB 0.067% archive/zip.(*Reader).init
0 0% 12.34% 1.07MB 0.067% archive/zip.NewReader
(pprof) list getPlannedRateChanges
Total: 1.57GB
ROUTINE ======================== github.com/loadimpact/k6/lib/executor.VariableArrivalRateConfig.getPlannedRateChanges in /home/mstoykov/.gvm/pkgsets/go1.12.9/global/src/github.com/loadimpact/k6/lib/executor/variable_arrival_rate.go
-195.48MB -198.98MB (flat, cum) 12.41% of Total
. . 178: // numbers for scaling, then the instance executing the first segment won't
. . 179: // ever do even a single request, since scale(20%, 1) would be 0, whereas
. . 180: // the rational value for scale(20%, 1/sec) is 0.2/sec, or rather 1/5sec...
. . 181: currentRate := getScaledArrivalRate(segment, varc.StartRate.Int64, timeUnit)
. . 182:
69.15MB 69.15MB 183: rateChanges := make([]rateChange, 0, varc.calcSize())
. . 184: timeFromStart := time.Duration(0)
. . 185:
. . 186: var tArrivalRate = new(big.Rat)
. . 187: var tArrivalRateStep = new(big.Rat)
. . 188: var stepCoef = new(big.Rat)
. . 189: for _, stage := range varc.Stages {
. . 190: stageTargetRate := getScaledArrivalRate(segment, stage.Target.Int64, timeUnit)
. . 191: stageDuration := time.Duration(stage.Duration.Duration)
. . 192:
. 1MB 193: if currentRate.Cmp(stageTargetRate) == 0 {
. . 194: // We don't have to do anything but update the time offset
. . 195: // if the rate wasn't changed in this stage
. . 196: timeFromStart += stageDuration
. . 197: continue
/// a lot of nothing
. . 218: stepNumber := (stageDuration / minIntervalBetweenRateAdjustments)
. . 219: if stepNumber > 1 {
. -8B 220: rateDiff := new(big.Rat).Sub(stageTargetRate, currentRate)
. . 221: stepInterval := stageDuration / stepNumber
. . 222: for t := stepInterval; ; t += stepInterval {
. . 223: if stageDuration-t < minIntervalBetweenRateAdjustments {
. . 224: break
. . 225: }
. . 226:
. -16.50MB 227: tArrivalRate.Add(
. . 228: currentRate,
. 1MB 229: tArrivalRateStep.Mul(
. . 230: rateDiff,
. 8MB 231: stepCoef.SetFrac64(int64(t), int64(stageDuration)),
. . 232: ),
. . 233: )
. . 234:
-264.62MB -264.62MB 235: rateChanges = append(rateChanges, rateChange{
. . 236: timeOffset: timeFromStart + t,
. 3MB 237: tickerPeriod: getTickerPeriod(tArrivalRate),
. . 238: })
. . 239: }
. . 240: }
. . 241: timeFromStart += stageDuration
. . 242: rateChanges = append(rateChanges, rateChange{
both benchmarks ran 1000 times, so all the numbers should be divided by 1000 ... around 260KB saved, probably by not having to grow the underlying slice a bunch of times and then throw it away as it still needs to grow ...
> IIRC, I intentionally used a NullDuration, to make it apparent that the value could be 0 (given that we can't pass that zero value to time.NewTicker()) - see the comments in type rateChange struct {}
I saw the comment; it's just that NullDuration adds a boolean, and in the same vein as this whole thread - this literally means that if we stop using it we will save 33.3% of the memory used for the rateChange struct (time.Duration is an int64, while NullDuration is a time.Duration plus a bool).
Not a huge win, but a similar thing can be accomplished with either a pointer to time.Duration or ... just, you know, knowing that it can't be 0 and checking for it where appropriate ...
require.Equal(t, big.NewRat(3, 1), nilEs.InPlaceScaleRat(threeRat))
require.Equal(t, big.NewRat(3, 1), threeRat)
}
I think all of the tests here should be run with t.Parallel().
Also, one of the tests I planned to add here would have been the ultimate validation of #997. Basically, in a loop of, say, 100 times, generate a random number of random fractions with a sum of 1. Then turn these into execution segments (checking for errors), and use the resulting segments to Scale() another bunch of random integer numbers. Then we check that the sum of the scaled numbers is equal to the starting number.
I also wanted to have all the tests (including all the executors) run through a list of executionSegments, maybe not random ones (although that sounds like a good idea). But I decided to go for ... well ... writing some amount of tests for most of the code, and hopefully getting some functions in place, ready for doing bigger and more complicated tests. Also, in the case of the executors, the math started to get too hairy for me ;)
I would prefer if we merge this HUGE PR in the other HUGEST PR and then possibly add smaller and more targeted tests ... specifically so that reviewing those changes doesn't take a day ... or three ;)
Besides the super minor nits, this LGTM.
A few questions and minor issues.
This doesn't cover a single line that I am not sure is reachable at all. More tests, with better corner cases, can be added in the future.
Also fix it to not emit two rate changes when there is a 0-duration stage.
The results are:

benchmark                           old ns/op    new ns/op    delta
BenchmarkGetPlannedRateChanges-4    4140227      2710011      -34.54%
BenchmarkGetPlannedRateChanges-4    3531805      2397757      -32.11%
BenchmarkGetPlannedRateChanges-4    3337196      2277776      -31.75%
BenchmarkGetPlannedRateChanges-4    2984674      2192676      -26.54%
BenchmarkGetPlannedRateChanges-4    2900849      2492187      -14.09%

benchmark                           old allocs   new allocs   delta
BenchmarkGetPlannedRateChanges-4    72977        56205        -22.98%
BenchmarkGetPlannedRateChanges-4    72977        56205        -22.98%
BenchmarkGetPlannedRateChanges-4    72977        56205        -22.98%
BenchmarkGetPlannedRateChanges-4    72977        56205        -22.98%
BenchmarkGetPlannedRateChanges-4    72977        56205        -22.98%

benchmark                           old bytes    new bytes    delta
BenchmarkGetPlannedRateChanges-4    1503784      1254588      -16.57%
BenchmarkGetPlannedRateChanges-4    1503780      1254585      -16.57%
BenchmarkGetPlannedRateChanges-4    1503780      1254588      -16.57%
BenchmarkGetPlannedRateChanges-4    1503771      1254584      -16.57%
BenchmarkGetPlannedRateChanges-4    1503775      1254588      -16.57%
This also gives some performance improvements:

name                       old time/op    new time/op    delta
GetPlannedRateChanges-4    1.72ms ± 6%    1.66ms ± 7%    ~ (p=0.222 n=5+5)

name                       old alloc/op   new alloc/op   delta
GetPlannedRateChanges-4    1.25MB ± 0%    1.22MB ± 0%    -2.93% (p=0.008 n=5+5)

name                       old allocs/op  new allocs/op  delta
GetPlannedRateChanges-4    56.2k ± 0%     51.6k ± 0%     -8.16% (p=0.008 n=5+5)
name                       old time/op    new time/op    delta
GetPlannedRateChanges-4    1.66ms ± 7%    1.34ms ± 2%    -19.42% (p=0.008 n=5+5)

name                       old alloc/op   new alloc/op   delta
GetPlannedRateChanges-4    1.22MB ± 0%    0.86MB ± 0%    -29.13% (p=0.008 n=5+5)

name                       old allocs/op  new allocs/op  delta
GetPlannedRateChanges-4    51.6k ± 0%     37.2k ± 0%     -27.86% (p=0.008 n=5+5)
LGTM
git(hub) gripe: we have to figure out some saner way of changing PRs in response to comments, so that the changes are easy to review... With the current force-pushes you did, both the force-push diff and the "Viewed" file feature in the "Files changed" tab were absolutely useless to see what had actually changed 😞 ...
This needs some more thinking and investigation, but I'd be fine if we had a super-messy git history, including multiple nasty merge commits, until the very last moment, if that would make it easier to review how the code is changed over time. Then, after everyone has given 👍 for the PR contents, the PR author can spend some time and clean up the commits, rebase everything, etc. - I think in that situation the state of "Viewed" files won't reset... maybe...
I am pretty sure here the problem was that I rebased on top of the new 1007, and there ... well, there are way too many changes ;)
Add tests to 1007 and fix some minor issues here and there found while testing + a small performance increase