
tests for #1007 #1117

Merged

merged 11 commits into new-schedulers from pr1007 on Nov 21, 2019

Conversation

@mstoykov (Contributor) commented Aug 22, 2019

Add tests for #1007 and fix some minor issues found here and there while testing, plus a small performance increase.

@mstoykov mstoykov added the tests label Aug 22, 2019
@codecov-io commented Aug 22, 2019

Codecov Report

Merging #1117 into new-schedulers will increase coverage by 2.77%.
The diff coverage is 100%.


@@                Coverage Diff                 @@
##           new-schedulers    #1117      +/-   ##
==================================================
+ Coverage           72.04%   74.81%   +2.77%     
==================================================
  Files                 158      158              
  Lines               12232    12241       +9     
==================================================
+ Hits                 8812     9158     +346     
+ Misses               2924     2591     -333     
+ Partials              496      492       -4
Impacted Files Coverage Δ
lib/execution.go 91.26% <100%> (+21.42%) ⬆️
lib/executor/helpers.go 96% <100%> (+6.66%) ⬆️
lib/executor/variable_arrival_rate.go 97.01% <100%> (+71.49%) ⬆️
lib/executor/vu_handle.go 92.95% <0%> (-4.23%) ⬇️
stats/cloud/collector.go 79.62% <0%> (+0.62%) ⬆️
lib/helpers.go 100% <0%> (+4.16%) ⬆️
core/local/local.go 69.3% <0%> (+6.93%) ⬆️
api/v1/group.go 90.9% <0%> (+19.48%) ⬆️
lib/executor/constant_arrival_rate.go 96.85% <0%> (+50.39%) ⬆️
... and 2 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6106924...ba5f50b.

@mstoykov mstoykov force-pushed the pr1007 branch 2 times, most recently from c42e218 to 93ff4da on October 11, 2019 07:08
@mstoykov mstoykov changed the title from "[WIP] tests for #1007" to "tests for #1007" on Oct 11, 2019
@mstoykov mstoykov requested review from na--, cuonglm and imiric October 11, 2019 08:30
@mstoykov mstoykov marked this pull request as ready for review October 11, 2019 08:30
Comment on lines 181 to 193
var size int
for i, stage := range varc.Stages {
if stage.Duration.Duration == 0 &&
(i == 0 ||
(i >= 2 && varc.Stages[i-1].Target.Int64 != stage.Target.Int64 &&
varc.Stages[i-1].Target.Int64 == varc.Stages[i-2].Target.Int64)) {
size++
} else if i == 0 || varc.Stages[i-1].Target.Int64 != stage.Target.Int64 {
size += int(time.Duration(stage.Duration.Duration) / minIntervalBetweenRateAdjustments)
}
}

rateChanges := make([]rateChange, 0, size)
Member

This seems a hugely complicated piece of code for what (I think?) is essentially a very minor and ultra premature optimization...

Contributor Author

I do agree that it grew much hairier than it was originally (I think I've squashed those commits) and the performance benefit turned out not to be as big as expected ;(
Do you want me to drop it?
cc @imiric , @cuonglm

Member

Yeah, I vote 👍 for dropping this. Now that I look at it again, I think it doesn't have a significant performance benefit, since you'd probably be saving some memory, but at the expense of actually calculating things almost twice - once here and once below...

Contributor Author

I have added the benchmark results ;) 93ff4da

Member

Didn't investigate the benchmark very thoroughly, but I find it a bit strange that this function takes up to a megabyte of RAM 😕 to execute...

Contributor Author

btw I am wondering if the types.NullDuration can't be just time.Duration ... but I will need to look at the code ... as this makes everything around this funkier to write ...

Member

IIRC, I intentionally used a NullDuration, to make it apparent that the value could be 0 (given that we can't pass that zero value to time.NewTicker() - see the comments in type rateChange struct {}).
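
For context, a minimal sketch of the constraint being referenced, with an illustrative stand-in nullDuration type rather than the actual types.NullDuration: time.NewTicker panics on non-positive durations, so the "period could be 0" case has to be handled before the ticker is created.

package main

import (
	"fmt"
	"time"
)

// nullDuration is an illustrative stand-in for types.NullDuration:
// a time.Duration plus a flag saying whether it holds a usable value.
type nullDuration struct {
	Duration time.Duration
	Valid    bool
}

// startTicker returns nil when there is no positive period, because
// time.NewTicker panics on non-positive durations.
func startTicker(p nullDuration) *time.Ticker {
	if !p.Valid || p.Duration <= 0 {
		return nil
	}
	return time.NewTicker(p.Duration)
}

func main() {
	fmt.Println(startTicker(nullDuration{}))                                          // <nil>
	fmt.Println(startTicker(nullDuration{Duration: time.Second, Valid: true}) != nil) // true
}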

Contributor Author

Comparing the memory profile, using the current tip as the base (with some changes so that the line numbers stay the same):

      flat  flat%   sum%        cum   cum%
 -195.48MB 12.19% 12.19%  -198.98MB 12.41%  github.com/loadimpact/k6/lib/executor.VariableArrivalRateConfig.getPlannedRateChanges
     -17MB  1.06% 13.25%      -17MB  1.06%  math/big.nat.make
   13.50MB  0.84% 12.41%       -2MB  0.12%  math/big.scaleDenom
    1.07MB 0.067% 12.34%     1.07MB 0.067%  archive/zip.readDirectoryEnd
         0     0% 12.34%     1.07MB 0.067%  archive/zip.(*Reader).init
         0     0% 12.34%     1.07MB 0.067%  archive/zip.NewReader
(pprof) list getPlannedRateChanges
Total: 1.57GB
ROUTINE ======================== github.com/loadimpact/k6/lib/executor.VariableArrivalRateConfig.getPlannedRateChanges in /home/mstoykov/.gvm/pkgsets/go1.12.9/global/src/github.com/loadimpact/k6/lib/executor/variable_arrival_rate.go
 -195.48MB  -198.98MB (flat, cum) 12.41% of Total
         .          .    178:   // numbers for scaling, then the instance executing the first segment won't
         .          .    179:   // ever do even a single request, since scale(20%, 1) would be 0, whereas
         .          .    180:   // the rational value for scale(20%, 1/sec) is 0.2/sec, or rather 1/5sec...
         .          .    181:   currentRate := getScaledArrivalRate(segment, varc.StartRate.Int64, timeUnit)
         .          .    182:
   69.15MB    69.15MB    183:   rateChanges := make([]rateChange, 0, varc.calcSize())
         .          .    184:   timeFromStart := time.Duration(0)
         .          .    185:
         .          .    186:   var tArrivalRate = new(big.Rat)
         .          .    187:   var tArrivalRateStep = new(big.Rat)
         .          .    188:   var stepCoef = new(big.Rat)
         .          .    189:   for _, stage := range varc.Stages {
         .          .    190:           stageTargetRate := getScaledArrivalRate(segment, stage.Target.Int64, timeUnit)
         .          .    191:           stageDuration := time.Duration(stage.Duration.Duration)
         .          .    192:
         .        1MB    193:           if currentRate.Cmp(stageTargetRate) == 0 {
         .          .    194:                   // We don't have to do anything but update the time offset
         .          .    195:                   // if the rate wasn't changed in this stage
         .          .    196:                   timeFromStart += stageDuration
         .          .    197:                   continue
/// a lot of nothing 
         .          .    218:           stepNumber := (stageDuration / minIntervalBetweenRateAdjustments)
         .          .    219:           if stepNumber > 1 {
         .        -8B    220:                   rateDiff := new(big.Rat).Sub(stageTargetRate, currentRate)
         .          .    221:                   stepInterval := stageDuration / stepNumber
         .          .    222:                   for t := stepInterval; ; t += stepInterval {
         .          .    223:                           if stageDuration-t < minIntervalBetweenRateAdjustments {
         .          .    224:                                   break
         .          .    225:                           }
         .          .    226:
         .   -16.50MB    227:                           tArrivalRate.Add(
         .          .    228:                                   currentRate,
         .        1MB    229:                                   tArrivalRateStep.Mul(
         .          .    230:                                           rateDiff,
         .        8MB    231:                                           stepCoef.SetFrac64(int64(t), int64(stageDuration)),
         .          .    232:                                   ),
         .          .    233:                           )
         .          .    234:
 -264.62MB  -264.62MB    235:                           rateChanges = append(rateChanges, rateChange{
         .          .    236:                                   timeOffset:   timeFromStart + t,
         .        3MB    237:                                   tickerPeriod: getTickerPeriod(tArrivalRate),
         .          .    238:                           })
         .          .    239:                   }
         .          .    240:           }
         .          .    241:           timeFromStart += stageDuration
         .          .    242:           rateChanges = append(rateChanges, rateChange{

Contributor Author

both benchmarks ran 1000 times so ... all the numbers should be divided by 1000 ... around 260KB saved, probably from not having to grow the underlying slice a bunch of times and then throwing it away as it still needs to grow ...
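
To make the mechanism concrete, a rough, self-contained sketch with made-up types (this is not the PR's actual code): appending without a capacity hint re-allocates and copies the backing array as the slice grows, while pre-computing the final size allocates once.

package main

import "fmt"

// rateChangeStub stands in for the real rateChange struct.
type rateChangeStub struct {
	timeOffset   int64
	tickerPeriod int64
}

// withoutHint lets append grow the backing array: it re-allocates and copies
// whenever the capacity is exceeded, and the intermediate arrays become garbage.
func withoutHint(n int) []rateChangeStub {
	var changes []rateChangeStub
	for i := 0; i < n; i++ {
		changes = append(changes, rateChangeStub{timeOffset: int64(i)})
	}
	return changes
}

// withHint pre-computes the final size, so there is a single allocation.
func withHint(n int) []rateChangeStub {
	changes := make([]rateChangeStub, 0, n)
	for i := 0; i < n; i++ {
		changes = append(changes, rateChangeStub{timeOffset: int64(i)})
	}
	return changes
}

func main() {
	// The un-hinted slice typically ends up with excess capacity after
	// several growth steps; the hinted one has capacity exactly n.
	fmt.Println(cap(withoutHint(1000)), cap(withHint(1000)))
}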

Contributor Author

IIRC, I intentionally used a NullDuration, to make it apparent that the value could be 0 (given that we can't pass that zero value to time.NewTicker() - see the comments in type rateChange struct {}).

I saw the comment; it's just that NullDuration adds a boolean, and in the same vein as this whole thread - this literally means that if we are not using it we will save 33.3% of the memory used for the rateChange struct (time.Duration is an int64 and null.Duration is a time.Duration with a bool).
Not a huge win, but a similar thing can be accomplished with either a pointer to time.Duration or ... just, you know, knowing that it can't be 0 and checking for it where appropriate ...
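
To put rough numbers on that claim, a back-of-the-envelope sketch using illustrative mirror structs rather than the actual k6 types (the field layout is assumed from the description above):

package main

import (
	"fmt"
	"time"
	"unsafe"
)

// nullDuration mirrors the description above: a time.Duration plus a bool.
type nullDuration struct {
	Duration time.Duration
	Valid    bool
}

// rateChangeWithNull roughly mirrors the current rateChange layout.
type rateChangeWithNull struct {
	timeOffset   time.Duration
	tickerPeriod nullDuration
}

// rateChangeWithPlain is the alternative with a bare time.Duration.
type rateChangeWithPlain struct {
	timeOffset   time.Duration
	tickerPeriod time.Duration
}

func main() {
	// On 64-bit platforms this prints 24 and 16: the bool pads the inner
	// struct out to 16 bytes, which is where the ~33% per-element figure
	// in the comment above comes from.
	fmt.Println(unsafe.Sizeof(rateChangeWithNull{}))
	fmt.Println(unsafe.Sizeof(rateChangeWithPlain{}))
}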

require.Equal(t, big.NewRat(3, 1), nilEs.InPlaceScaleRat(threeRat))
require.Equal(t, big.NewRat(3, 1), threeRat)
}

Member

I think all of the tests here should be run with t.Parallel().

Also, one of the tests I planned to add here would have been the ultimate validation of #997. Basically, in a loop of, say, 100 times, generate a random number of random fractions with a sum of 1. Then, turn these into execution segments (checking for errors), and use the resulting segments to Scale() another bunch of random integer numbers. Then we check if the sum of the scaled numbers is equal to the starting number.
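
A minimal sketch of the test described above, assuming a lib.NewExecutionSegment(from, to *big.Rat) constructor and an (*ExecutionSegment).Scale(int64) int64 method; these names and signatures are assumptions for illustration, not the verified k6 API at this revision.

package lib_test

import (
	"math/big"
	"math/rand"
	"testing"
	"time"

	"github.com/loadimpact/k6/lib"
	"github.com/stretchr/testify/require"
)

func TestRandomSegmentsScaleBackToWhole(t *testing.T) {
	t.Parallel()
	rng := rand.New(rand.NewSource(time.Now().UnixNano()))

	for i := 0; i < 100; i++ {
		// Random positive weights; their cumulative sums over the total give
		// a strictly increasing sequence of fractions that ends at exactly 1.
		numSegments := 2 + rng.Intn(8)
		weights := make([]int64, numSegments)
		var total int64
		for j := range weights {
			weights[j] = 1 + rng.Int63n(100)
			total += weights[j]
		}

		// Turn the fractions into execution segments, checking for errors.
		segments := make([]*lib.ExecutionSegment, 0, numSegments)
		from, cumulative := big.NewRat(0, 1), int64(0)
		for _, w := range weights {
			cumulative += w
			to := big.NewRat(cumulative, total)
			es, err := lib.NewExecutionSegment(from, to)
			require.NoError(t, err)
			segments = append(segments, es)
			from = to
		}

		// Scale a random integer across all segments and check that the
		// scaled parts add back up to the original number.
		value := rng.Int63n(10000)
		var sum int64
		for _, es := range segments {
			sum += es.Scale(value)
		}
		require.Equal(t, value, sum)
	}
}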

Member
@na-- Nov 18, 2019

Though maybe we should merge this PR and simply add that test directly in #1007? Or maybe in another PR to #1007 with more tests?

Contributor Author

I also wanted to have all the tests (including all the executors) run through a list of executionSegments, maybe not random (although that sounds like a good idea). But I decided to go for ... well ... writing some amount of tests for most of the code and hopefully getting some functions in place, ready for doing bigger and more complicated tests. Also, in the case of the executors, the math started to get too hairy for me ;)

I would prefer if we merge this HUGE PR into the other, HUGEST PR and then possibly add smaller and more targeted tests ... specifically so that reviewing those changes doesn't take a day ... or three ;)

@na-- (Member) left a comment

Besides the super minor nits, this LGTM.

@imiric (Contributor) left a comment

A few questions and minor issues.

This doesn't include a single line that I am not sure is reachable at all.

More tests can be added with better corner cases in the future
Also fix it to not emit two rate changes when there is a 0-duration stage
The results are:
benchmark                            old ns/op     new ns/op     delta
BenchmarkGetPlannedRateChanges-4     4140227       2710011       -34.54%
BenchmarkGetPlannedRateChanges-4     3531805       2397757       -32.11%
BenchmarkGetPlannedRateChanges-4     3337196       2277776       -31.75%
BenchmarkGetPlannedRateChanges-4     2984674       2192676       -26.54%
BenchmarkGetPlannedRateChanges-4     2900849       2492187       -14.09%

benchmark                            old allocs     new allocs   delta
BenchmarkGetPlannedRateChanges-4     72977          56205        -22.98%
BenchmarkGetPlannedRateChanges-4     72977          56205        -22.98%
BenchmarkGetPlannedRateChanges-4     72977          56205        -22.98%
BenchmarkGetPlannedRateChanges-4     72977          56205        -22.98%
BenchmarkGetPlannedRateChanges-4     72977          56205        -22.98%

benchmark                            old bytes     new bytes     delta
BenchmarkGetPlannedRateChanges-4     1503784       1254588       -16.57%
BenchmarkGetPlannedRateChanges-4     1503780       1254585       -16.57%
BenchmarkGetPlannedRateChanges-4     1503780       1254588       -16.57%
BenchmarkGetPlannedRateChanges-4     1503771       1254584       -16.57%
BenchmarkGetPlannedRateChanges-4     1503775       1254588       -16.57%
This also gives some performance improvements:

name                     old time/op    new time/op    delta
GetPlannedRateChanges-4    1.72ms ± 6%    1.66ms ± 7%    ~     (p=0.222 n=5+5)

name                     old alloc/op   new alloc/op   delta
GetPlannedRateChanges-4    1.25MB ± 0%    1.22MB ± 0%  -2.93%  (p=0.008 n=5+5)

name                     old allocs/op  new allocs/op  delta
GetPlannedRateChanges-4     56.2k ± 0%     51.6k ± 0%  -8.16%  (p=0.008 n=5+5)
name                     old time/op    new time/op    delta
GetPlannedRateChanges-4    1.66ms ± 7%    1.34ms ± 2%  -19.42%  (p=0.008 n=5+5)

name                     old alloc/op   new alloc/op   delta
GetPlannedRateChanges-4    1.22MB ± 0%    0.86MB ± 0%  -29.13%  (p=0.008 n=5+5)

name                     old allocs/op  new allocs/op  delta
GetPlannedRateChanges-4     51.6k ± 0%     37.2k ± 0%  -27.86%  (p=0.008 n=5+5)
@na-- (Member) left a comment

LGTM

git(hub) gripe: we have to figure out some saner way of changing PRs in response to comments, so that the changes are easy to review... With the current force-pushes you did, both the force-push diff and the "Viewed" file feature in the "Files changed" tab were absolutely useless for seeing what had actually changed 😞 ...

This needs some more thinking and investigation, but I'd be fine if we had a super-messy git history, including multiple nasty merge commits, until the very last moment, if that would make it easier to review how the code is changed over time. Then, after everyone has given 👍 for the PR contents, the PR author can spend some time and clean up the commits, rebase everything, etc. - I think in that situation the state of "Viewed" files won't reset... maybe...

@mstoykov (Contributor Author)

I am pretty sure the problem here was that I rebased on top of the new #1007, and there ... well, there are way too many changes ;)
So maybe I should've waited with that in particular until the end ...

@mstoykov mstoykov merged commit d77c5dd into new-schedulers Nov 21, 2019
@na-- na-- deleted the pr1007 branch November 21, 2019 08:21
@mstoykov mstoykov modified the milestones: v1.0.0, v0.27.0 Jul 13, 2020