Allow combining the -f and -l flags in kubectl logs #67573
Conversation
/ok-to-test
/assign @liggitt
pkg/kubectl/cmd/logs.go
Outdated
}
go func(request *rest.Request) {
	if err := o.ConsumeRequestFn(request, pipew); err != nil {
		pipew.CloseWithError(err)
this closes all streams on first error. is that what we want?
Yes, I think this is the closest to what we currently have. At the moment, it's possible to get multiple log sources using a command like kubectl logs -l app=logging_test --all-containers, but this code will fail on the first error:
kubernetes/pkg/kubectl/cmd/logs.go
Lines 296 to 300 in 30e4f52
for _, request := range requests {
	if err := o.ConsumeRequestFn(request, o.Out); err != nil {
		return err
	}
}
So I think the concurrent version should behave in a similar way: fail on the first error, at least for now. We can do something like retries as a separate improvement. But I'm open to suggestions.
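Just for illustration (not the PR's actual code): a minimal, self-contained sketch of why io.Pipe gives the same fail-on-first-error behaviour on the reader side; all names below are made up for the example.

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
)

func main() {
	reader, writer := io.Pipe()

	// A producer goroutine writes one line and then hits an error.
	go func() {
		fmt.Fprintln(writer, "some log line")
		writer.CloseWithError(errors.New("stream failed"))
	}()

	// The consumer sees the line, then io.Copy returns the error.
	// The first CloseWithError wins; later closes don't overwrite it.
	if _, err := io.Copy(os.Stdout, reader); err != nil {
		fmt.Println("copy stopped with:", err)
	}
}
```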
@liggitt, thanks for the review. I've updated the PR with changes (see the fixup commits since your review) and replied to your comments.
Please take another look.
@liggitt tests are passing now. Could you please take another look after the weekend? Changes since your previous review: ba7cb09bcc36c1eb6acf398e9c65d4afefd1df40...e5549129c0db54659711e71317b3b3ed480ae65d In these commits:
I'll squash the commits before merging this.
one comment on error detection/handling on failed writes.
I'm still on the fence about whether we want to support fan-out of long-running requests like this. @kubernetes/sig-cli-pr-reviews @kubernetes/sig-scalability-pr-reviews, any thoughts on that?
/retest
Hi @liggitt, is there anything I can do to move this forward? I'm open to further feedback and ready to discuss alternative approaches to implementing this.
I'd recommend raising the question with the CLI and scalability teams (either in Slack, on their mailing lists, or in one of their sig meetings) and seeing what feedback you get.
@m1kola please put this on the agenda for the next sig-cli meeting (1 week from Wednesday).
Let's discuss this in depth at the next sig meeting on Oct 10th.
Hi. Yesterday, after the sig-cli meeting I:
We also discussed the need to add the
/retest
@soltysh and @seans3 I think I've addressed the feedback you gave during the meeting. Could you please take another look at this PR and let me know if you have any further comments? I'm now looking into prefixing log lines with the source (pod & container name) as we discussed, but it will be a separate PR to keep things simpler for reviewers and for me.
I left you some comments
pkg/kubectl/cmd/logs/logs.go
Outdated
@@ -151,6 +158,7 @@ func NewCmdLogs(f cmdutil.Factory, streams genericclioptions.IOStreams) *cobra.C
	cmd.Flags().StringVarP(&o.Container, "container", "c", o.Container, "Print the logs of this container")
	cmdutil.AddPodRunningTimeoutFlag(cmd, defaultPodLogsTimeout)
	cmd.Flags().StringVarP(&o.Selector, "selector", "l", o.Selector, "Selector (label query) to filter on.")
	cmd.Flags().IntVar(&o.MaxFollowConcurency, "max-follow-concurency", o.MaxFollowConcurency, "Specify maximum number of concurrent logs to follow when using by a selector. Defaults to 5.")
I'd propose naming this max-requests
Maximum number of parallel log requests when using selector. Defaults to 5.
We could settle for a name in the middle, maybe something like max-log-requests
sgtm
@soltysh @juanvallejo I'm not sure about max-log-requests: it sounds too vague to me. When you do something like kubectl logs deployment/some-name, it will basically create N requests, but kubectl will be reading them sequentially. EDIT: Fixed call example. Also a clarification: N is the number of sources from the deployment.
Probably it's ok for end-users, because they don't care about the number of sequential requests, right?
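(For reference, a sketch of what the registration quoted above might look like after such a rename; the field name is kept from the quoted diff and the whole snippet is illustrative, not the PR's final code.)

```go
// Illustrative only: same flag, registered under the proposed name.
cmd.Flags().IntVar(&o.MaxFollowConcurency, "max-log-requests", o.MaxFollowConcurency,
	"Specify maximum number of concurrent logs to follow when using a selector. Defaults to 5.")
```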
pkg/kubectl/cmd/logs/logs.go
Outdated
if o.Follow && len(requests) > 1 {
	if len(requests) > o.MaxFollowConcurency {
		return fmt.Errorf(
			"you are attempting to follow %d log streams, but maximum allowed concurency is %d. Use --max-follow-concurency to increase the limit",
Update when renaming the flag and use comma instead of a dot in that sentence.
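One possible wording after the rename (illustrative; the format arguments are assumed to be len(requests) and the option field):

```go
return fmt.Errorf(
	"you are attempting to follow %d log streams, but maximum allowed concurrency is %d, use --max-log-requests to increase the limit",
	len(requests), o.MaxFollowConcurency)
```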
pkg/kubectl/cmd/logs/logs.go
Outdated
}

func (o LogsOptions) concurrentConsumeRequest(requests []rest.ResponseWrapper) error {
	piper, pipew := io.Pipe()
reader and writer are better variable names
pkg/kubectl/cmd/logs/logs.go
Outdated
	return o.sequentialConsumeRequest(requests)
}

func (o LogsOptions) concurrentConsumeRequest(requests []rest.ResponseWrapper) error {
parallelConsumeRequest
	}(request)
}

go func() {
I think you want to block the main flow until you hear back from the wait group, iow. this code should not be part of any goroutine.
io.Copy below blocks the main flow. If we do not close the pipe writer (pipew currently), it will block the main flow forever (even if the server closed the connection). So we wait for all requests to finish in a separate goroutine and then close the writer (as a result, io.Copy stops blocking the main flow).
Or am I missing something?
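For reference, a self-contained sketch of the fan-in pattern being discussed here, using plain io.Readers instead of rest.ResponseWrapper; fanIn and the variable names are made up for the example, this is not the PR's code.

```go
package main

import (
	"io"
	"os"
	"strings"
	"sync"
)

// fanIn merges several sources into one writer via io.Pipe.
func fanIn(out io.Writer, sources []io.Reader) error {
	reader, writer := io.Pipe()

	wg := &sync.WaitGroup{}
	wg.Add(len(sources))
	for _, src := range sources {
		go func(src io.Reader) {
			defer wg.Done()
			if _, err := io.Copy(writer, src); err != nil {
				// Propagate the first error to the reader side; this
				// unblocks the io.Copy in the main flow below.
				writer.CloseWithError(err)
			}
		}(src)
	}

	// Close the writer only after every producer has finished, so that
	// the consuming io.Copy can return. This wait has to live in its own
	// goroutine: the main flow must already be blocked on io.Copy.
	go func() {
		wg.Wait()
		writer.Close()
	}()

	_, err := io.Copy(out, reader)
	return err
}

func main() {
	sources := []io.Reader{
		strings.NewReader("stream one\n"),
		strings.NewReader("stream two\n"),
	}
	if err := fanIn(os.Stdout, sources); err != nil {
		panic(err)
	}
}
```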
		return err
	}

	if err != nil {
if err != nil && err != io.EOF {
return err
}
return nil
@soltysh return nil will stop the for loop after the first iteration. We want to return from the function when the first error appears, but we don't treat io.EOF as an error (because it is what we are waiting for). If there is no error, we want to continue the loop.
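A sketch of the shape the two comments seem to converge on (the names come from the quoted code, the rest is illustrative): skip io.EOF per stream, abort on the first real error, and only return nil after the loop.

```go
for _, request := range requests {
	err := o.ConsumeRequestFn(request, o.Out)
	if err != nil && err != io.EOF {
		// Fail on the first real error, as in the sequential version.
		return err
	}
	// io.EOF only means this stream is finished; continue with the next one.
}
return nil
```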
/test pull-kubernetes-integration
Which makes it possible to combine the `-f` and `-l` flags in kubectl logs
/retest
@soltysh please take another look. I addressed your feedback in the latest changes and in comments. And thanks for the review!
/lgtm
/approve
@m1kola thanks for your patience 👍
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: m1kola, soltysh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Test failures seem to be unrelated, so let's try to rerun them.
What this PR does / why we need it:
This PR allows combining the `-f` and `-l` flags in kubectl logs by reading logs from multiple sources simultaneously. It is a part of what was requested in #52218 (without the part about `-c`).
Example:
Release note:
/sig cli
/kind feature