
Don't panic when an execution segment is too small #1295

Closed
na-- opened this issue Jan 7, 2020 · 7 comments

na-- (Member) commented Jan 7, 2020

Running something like this: k6 run --execution-segment=2/4:3/4 -u 3 -i 3 github.com/loadimpact/k6/samples/http_get.js

would cause the following panic:

  execution: (25.00%) 1 executors, 0 max VUs, 0s max duration (incl. graceful stop):
           * default: 0 iterations shared among 0 VUs (maxDuration: 10m0s, gracefulStop: 30s)

panic: strings: negative Repeat count

goroutine 21 [running]:
strings.Repeat(0xebf494, 0x1, 0x8000000000000026, 0xc000860750, 0x22)
	strings/strings.go:533 +0x5aa
github.com/loadimpact/k6/ui/pb.(*ProgressBar).String(0xc000f98c40, 0x0, 0x0)
	github.com/loadimpact/k6/ui/pb/progressbar.go:138 +0x370
github.com/loadimpact/k6/cmd.renderMultipleBars(0xc000f90001, 0xc00286e000, 0x2, 0x2, 0xc0004bf190, 0xc0004bf118)
	github.com/loadimpact/k6/cmd/ui.go:91 +0xf4
github.com/loadimpact/k6/cmd.showProgress.func2(0x10efb40, 0xc000f98c00, 0xc00286e000, 0x2, 0x2, 0xc001092000, 0xc00286e0c0)
	github.com/loadimpact/k6/cmd/ui.go:138 +0xf7
github.com/loadimpact/k6/cmd.showProgress(0x10efb40, 0xc000f98c00, 0x0, 0x3, 0x1, 0x0, 0x0, 0x3, 0x1, 0x0, ...)
	github.com/loadimpact/k6/cmd/ui.go:156 +0x46c
github.com/loadimpact/k6/cmd.glob..func11.1(0x10efb40, 0xc000f98c00, 0xc00249ee00, 0xc00271b400, 0xc002819710)
	github.com/loadimpact/k6/cmd/run.go:161 +0xa9
created by github.com/loadimpact/k6/cmd.glob..func11
	github.com/loadimpact/k6/cmd/run.go:160 +0xc95
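
The first frame shows the root cause: strings.Repeat panics on any negative count, and for a 0-VU/0-iteration executor the progress bar ends up computing exactly such a count. A minimal reproduction of just that stdlib behavior:

package main

import "strings"

func main() {
	// strings.Repeat panics with "strings: negative Repeat count"
	// for any count < 0 -- the same panic as in the stack trace above.
	strings.Repeat("=", -1)
}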

As is obvious from the description, we have an empty segment (0 max VUs, 0 iterations). So we should have a check somewhere that would prevent things like that - if a particular executor doesn't have any work to do with a certain execution segment, we shouldn't bother with it for the run.

It's important to note, though, that when there is a small execution segment and multiple executors, some may be without work, but others may still have to be run. For example:

export let options = {
    executionSegment: "2/4:3/4",
    execution: {
        shared_iters1: {
            type: "shared-iterations",
            vus: 3,
            iterations: 3,
        },
        shared_iters2: {
            type: "shared-iterations",
            vus: 4,
            iterations: 4,
        },
        constant_arr_rate: {
            type: "constant-arrival-rate",
            rate: 3,
            timeUnit: "1s",
            duration: "20s",
            preAllocatedVUs: 4,
            maxVUs: 4,
        },
    },
};

The first executor can be safely ignored, but the other two have work to do.

na-- changed the title Panic when execution segment is too small → Don't panic when an execution segment is too small Jan 7, 2020
cuonglm self-assigned this Jan 9, 2020
cuonglm added a commit that referenced this issue Jan 9, 2020
For an empty segment, we currently do nothing. But the goroutine spawned
to track the progress bar still runs, with invalid progress information
passed to ProgressBar.

This causes the calculation in ProgressBar.String to become invalid due
to overflow.

To fix this, we only do the calculation when we have a valid filled value.

Fixes #1295
cuonglm (Contributor) commented Jan 9, 2020

@mstoykov https://github.com/loadimpact/k6/pull/1304/files#r364664374

I am not sure what is being masked here; it has the same functionality as before. The proposal to return a message from ProgressBar.String is weird. Should a progress bar return a message without the progress bar? If you are concerned about the empty progress bar, we can add an else if filled < 0 branch that renders the progress bar as full, so it will look something like:

shared_iters1 [======================================] 0/0 shared iters among 0 VUs done!
shared_iters2 [======================================] 1/1 shared iters among 1 VUs done!
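
For reference, the guard being discussed amounts to clamping the computed fill width before it ever reaches strings.Repeat. A minimal sketch of that idea, using illustrative names (renderBar, progress, width) rather than the actual ui/pb/progressbar.go code:

package main

import (
	"fmt"
	"strings"
)

// renderBar clamps the fill width to [0, width] before calling
// strings.Repeat, so an invalid progress value (e.g. from an executor
// with no work) can never trigger the negative-count panic.
func renderBar(progress float64, width int) string {
	filled := int(progress * float64(width))
	if filled < 0 {
		filled = 0
	}
	if filled > width {
		filled = width
	}
	return "[" + strings.Repeat("=", filled) + strings.Repeat("-", width-filled) + "]"
}

func main() {
	fmt.Println(renderBar(-1.5, 38)) // previously a panic; now an empty bar
	fmt.Println(renderBar(1.0, 38))  // a full bar
}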

cuonglm (Contributor) commented Jan 9, 2020

@mstoykov My concerns about adding validation to not run (well, we don't actually run anything) if the segment is empty:

  • It adds complexity to the already complex code of each executor without any benefit; the executors already won't run anything if the segment is empty.

    • The segment does not know whether it is empty or not; that needs input from the executor.
    • The segment info, numVus and numIters, is only known inside Executor.Run, so you can't generalize the logic. Even with a helper function, a bare if numVus == 0 || numIters == 0 check does not scale and would be repeated across executors. We currently have 7 executors (will there be more in the future?).
  • The panic in this issue belongs to the pb/ui package. Even if we add validation to the executors, one can still pass an illegal progressFn to ProgressBar and cause the panic to happen again.

So instead, we should fix the actual overflow in the current ui/pb package. The behavior stays the same, we just don't see the panic.

@na-- @imiric what are your thoughts?

na-- (Member, Author) commented Jan 9, 2020

@cuonglm, your current PR, with the example executor config I gave in this issue, produces something like this:

  execution: (25.00%) 3 executors, 2 max VUs, 10m30s max duration (incl. graceful stop):
           * constant_arr_rate: 0.75 iterations/s for 20s (maxVUs: 1, gracefulStop: 30s)
           * shared_iters1: 0 iterations shared among 0 VUs (maxDuration: 10m0s, gracefulStop: 30s)
           * shared_iters2: 1 iterations shared among 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)

running (00m07.7s), 1/2 VUs, 6 complete and 0 interrupted iterations
constant_arr_rate [=============>------------------------] 0.75 iters/s, 0 out of 1 VUs active
shared_iters1 [] 0/0 shared iters among 0 VUs done!
shared_iters2 [======================================] 1/1 shared iters among 1 VUs done!

This is obviously not OK... As I mentioned in the issue:

if a particular executor doesn't have any work to do with a certain execution segment, we shouldn't bother with it for the run

For this run, there's no need to show any progressbar for shared_iters1, since there's no work for that executor to do. So we either need to hide that row, or show some explanation instead of a progressbar in it...

And we should still mention it in the list of executors on top, so it's clear that k6 still knows about that execution, it just doesn't have any work to do for this particular run, with this particular execution segment. And if it's not clear enough from shared_iters1: 0 iterations shared among 0 VUs (maxDuration: 10m0s, gracefulStop: 30s) that it doesn't have any work to do, we can always add some extra explanation, like [disabled] in front of it, or something like that.

Regarding how this should be implemented... Maybe we need a new method in the ExecutorConfig interface that returns a boolean if the executor has any work to do for a given ExecutionSegment?

The current method GetExecutionRequirements() would almost be enough, since if an executor doesn't need any VUs, then it can't do any work... But the externally-controlled executor breaks that logic, since it can initialize its own VUs... So, a separate method seems like a reasonable solution.
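
A minimal sketch of what that could look like, assuming illustrative types throughout (the simple floor-based Scale below is a stand-in; k6's real segment scaling is more sophisticated):

package main

import "fmt"

// ExecutionSegment is a stand-in for k6's segment type: a [from, to]
// fraction of the total load.
type ExecutionSegment struct{ From, To float64 }

// Scale naively computes how many of n units fall into this segment.
func (es ExecutionSegment) Scale(n int64) int64 {
	return int64(float64(n)*es.To) - int64(float64(n)*es.From)
}

// HasWork is the kind of method proposed above for the ExecutorConfig
// interface: does this executor do anything at all for a given segment?
type ExecutorConfig interface {
	HasWork(es ExecutionSegment) bool
}

type SharedIterationsConfig struct{ VUs, Iterations int64 }

// A shared-iterations executor has work only if the segment leaves it
// both VUs and iterations to run.
func (c SharedIterationsConfig) HasWork(es ExecutionSegment) bool {
	return es.Scale(c.VUs) > 0 && es.Scale(c.Iterations) > 0
}

func main() {
	tiny := ExecutionSegment{From: 0, To: 0.1}
	cfg := SharedIterationsConfig{VUs: 3, Iterations: 3}
	fmt.Println(cfg.HasWork(tiny)) // false: 3 VUs/iterations scale down to 0
}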

cuonglm (Contributor) commented Jan 9, 2020

@na-- So will we fix the bug in ui/pb?

Regarding how this should be implemented... Maybe we need a new method in the ExecutorConfig interface that returns a boolean if the executor has any work to do for a given ExecutionSegment?

Let's do that.

cuonglm (Contributor) commented Jan 9, 2020

@na-- It seems better to add the method to Executor instead of ExecutorConfig. In https://github.com/loadimpact/k6/blob/f6be35cfa006c56954d6821e0f94f4863f063e45/core/local/local.go#L292

we make the progress bar print "started", but we can't know there whether the executor has any work.

na-- (Member, Author) commented Jan 9, 2020

I don't understand your point, sorry. If we add this method, let's call it HasWork(*ExecutionSegment), to ExecutorConfig, then we don't even need to make an Executor for the ones that return false...
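
In code terms, the filtering described here could happen once, when the scheduler assembles its executor list; a hypothetical sketch, reusing the illustrative ExecutorConfig and ExecutionSegment types from the sketch above (not the actual local.go code):

// filterConfigs drops every executor config that has no work for this
// segment, so no Executor (and no progress bar) is ever created for it.
func filterConfigs(configs []ExecutorConfig, es ExecutionSegment) []ExecutorConfig {
	active := make([]ExecutorConfig, 0, len(configs))
	for _, cfg := range configs {
		if !cfg.HasWork(es) {
			continue // nothing to run in this segment
		}
		active = append(active, cfg)
	}
	return active
}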

cuonglm added a commit that referenced this issue Jan 10, 2020
For an executor which has nothing to do in a given segment, we should
not include it in the list of executors.

To do that, add a new method HasWork to ExecutorConfig. By filtering out
no-work executors when creating a new scheduler, we can prevent any
unnecessary work and provide better UX.

Fixes #1295
cuonglm removed their assignment Jan 11, 2020
na-- pushed a commit that referenced this issue Jan 14, 2020
This fixes #1295 by not initializing any of the executors that would have no work for the particular execution segment used in the run.
na-- (Member, Author) commented Jan 14, 2020

Closed by #1307 ... seems like the keywords don't apply when we're not merging to master

na-- closed this as completed Jan 14, 2020