Consistent Logging #3152
@pydctw: This issue is currently awaiting triage. If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Oh nice! I have been working on a POC to replace that stuff. It's a bit difficult to figure out, though, which log was supposed to do what. And it will be a rather large change, I'm afraid.
@Skarlso, I would love to hear what you are thinking. Let's sync up.
@pydctw Sure, will write up a detailed POC and link a WIP PR tomorrow. :)
We should also consider adding JSON support during this refactoring, which is being discussed in CAPI.
@sedefsavas I would rather do a single thing / PR. This will be a bit of a pain as it is anyway. :D I worked out a clean API for logging, but I have to replace a LOT of log lines. :))))) We can do it in a follow-up PR? :)
Hmm, I read the whole thing. I guess I can use a different logger, like zerolog?
Thanks for making progress on this, @Skarlso. Since we are refactoring anyway, it might be a good time to think about consistency with cluster-api so that users won't have to grep differently to collect logs from different managers in the same management cluster. It might be worth discussing this further with other providers to see if a common approach could be adopted instead of having custom logic here.
For sure, I'm open to suggestions as to which logger to adopt. :) I can use whatever we want as long as it's consistent with everything else, I guess.
To be clear, the reason for the clean interface is, of course, abstraction. I think it's sane to keep some kind of common interface and use whatever we want in the background. I would like to avoid having to change the logger again in so many places, if that's okay. We can easily change the implementation underneath it if we keep a sane common interface.
Hey, so we don't really have our own log interface/implementation in core CAPI like CAPA has (as far as I know). Core CAPI currently uses klog as the underlying implementation of that interface (we set klogr as the controller-runtime logger). We are currently thinking about changing that underlying implementation to component-base/logs (which is used in kube-apiserver, kube-scheduler, ...). The effect would be that the implementation behind logr.Logger still logs via klog, but when the JSON logging format is enabled we would use the JSON logger implementation from component-base/logs. We would also inherit the log flags from component-base/logs. I think the best summary from a core CAPI perspective is this comment: kubernetes-sigs/cluster-api#5571 (comment). To be honest, I don't know why CAPA currently has its own log interface, or what requirements (probably) make it necessary.
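Roughly, the wiring looks like this (a minimal sketch, not CAPI's actual main.go; it assumes klog/v2's klogr and controller-runtime, and omits the component-base/logs flag handling and the JSON switch):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
	"k8s.io/klog/v2/klogr"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Register klog's verbosity flags (-v, --vmodule, ...).
	klog.InitFlags(nil)
	flag.Parse()

	// Use a logr.Logger backed by klog as the controller-runtime logger,
	// so controller-runtime, client-go, and our own code all funnel
	// through the same implementation.
	ctrl.SetLogger(klogr.New())

	log := ctrl.Log.WithName("setup")
	log.V(2).Info("manager starting") // V(n) is logr's only leveling mechanism
}
```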
@sbueringer Hey Stefan. We didn't have one until now. :) CAPA is also using klog, the context logger, and logr. However, it's not very user friendly: it's difficult to grep for keywords, and sprinkling numeric V(i) calls everywhere isn't intuitive. I was in the process of making a unified interface which still uses logr in the background, but from a logging perspective we'd use things like log.Debug, log.Info, log.Warning, and then just add a grep-able key/value pair to each entry.
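For illustration, a minimal sketch of what I mean (the names here are hypothetical, not the actual PoC): it keeps logr underneath, maps named methods to fixed V(i) levels, and attaches a grep-able "level" key/value pair to every entry. The level-to-verbosity mapping is an assumption.

```go
package logger

import "github.com/go-logr/logr"

// Logger is a thin veneer over logr that exposes named levels
// instead of raw V(i) calls.
type Logger struct {
	log logr.Logger
}

func New(log logr.Logger) Logger { return Logger{log: log} }

// Info logs at verbosity 0 and tags the entry with level=info so it
// can be grepped or queried in structured output.
func (l Logger) Info(msg string, kv ...interface{}) {
	l.log.V(0).Info(msg, append(kv, "level", "info")...)
}

// Debug logs at a higher verbosity with a level=debug pair.
func (l Logger) Debug(msg string, kv ...interface{}) {
	l.log.V(4).Info(msg, append(kv, "level", "debug")...)
}

// Warning has no native logr counterpart, so it is a verbosity-0
// entry tagged as a warning.
func (l Logger) Warning(msg string, kv ...interface{}) {
	l.log.V(0).Info(msg, append(kv, "level", "warning")...)
}
```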
Ah, got your point. So it's about having a thin layer on top of go-logr which essentially provides the usual info/debug/... log levels instead of V(i) (in the code, as well as an additional "level": "x" k/v pair). I think I need more time to think this over. :) We were kind of happy that we could potentially get rid of our own thin layer on top of go-logr that we've used in the ClusterClass implementation. P.S. I think this is orthogonal to replacing the logger "below" the interface, but it's relevant to some other log improvements we're thinking about making in core CAPI.
I don't have strong opinions about logging interfaces, because all of them have both pros and cons.
Regarding this bit:
I'm struggling to see the benefit of this. :) You trade readability for what, exactly? Having switches like V(2) scattered around doesn't tell the reader much. And I've read the other point, why klog disagrees with Dave, but I don't actually like that direction because it's not very human friendly, I believe. In any case, I bow my head to conform, but I felt I had to share my opinion on the matter and why I wrote a wrapper interface in the first place.
@Skarlso I personally appreciate and am truly thankful for your opinion, and I have no problem saying there are a couple of points I agree with. My intent above was just to provide some context on how we are tackling this problem in CAPI. The final call here is clearly up to the CAPA community, not to me.
@Skarlso I have some concerns about diverging from the cluster-api ecosystem, but I think it'd be worth discussing this during our community meeting on Feb 21st at 9 am PT.
Thanks all for chiming in and sharing your opinions on this.
Cool, and thanks to all! 😊👍
I'm with Fabrizio on that one. I also see some benefit in other log levels, but I prefer to stay consistent with the Kubernetes ecosystem. We also have to take into account that we are using a lot of upstream libraries like controller-runtime and client-go; they all log with the logr V(n) verbosity levels. But I also agree with Fabrizio that it's a good discussion to have. It's just my opinion, and I think it's something that the CAPA community should decide.
@Skarlso, we wanted to discuss this issue during yesterday's CAPA office hours, but since you were not there, I'm following up here. To confirm my understanding of the comments above: are you OK with CAPA following the k8s ecosystem's logging convention of V(0)-V(10) messages instead of the logging interface from the PoC?
Thanks, sorry, I was out on vacation. :) Will take a look.
A related issue:
@sedefsavas Okay, so... I'm okay with following this convention, but that will result in log entries that are not very searchable, which is exactly what the initial issue concluded. So I will have to wrap it in something that produces searchable entries anyway. :)
And even if we follow the verbosity levels, we still need guidance and constants in the code that help people know which verbosity level to use in a given situation.
Absolutely.
@Skarlso - will ping you on slack.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. /lifecycle rotten
/remove-lifecycle rotten

Okay, so now that we conform more with CAPI and use proper logging formats and initialization, I think we can refactor some internal things and have a unified logging interface. Whether that's just a bunch of constants like

```go
const (
	logLevelInfo = iota
	logLevelError
	logLevelWarning
	logLevelDebug
	logLevelTrace
)
```

or a complete interface is still an open question.
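Either way, call sites could then read something like this (a sketch; the messages and key/value names are placeholders, and the constants above map straight to logr verbosity):

```go
// The numeric verbosity stays visible, but call sites document intent.
log.V(logLevelInfo).Info("reconciling network", "cluster", clusterName)
log.V(logLevelDebug).Info("deleted security group", "security-group", sgID)
```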
I like that idea.
I second that.
/assign
I've found that log formats and logging levels are used inconsistently in the repo.

A few examples: logging `Deleted association between route table and subnet` at one verbosity while using V(2) for `Deleted security group` is not clear to me. See #3143.
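For illustration, a consistent version might look like this (a sketch; choosing V(2) for both and the key/value names are assumptions):

```go
// Both are routine teardown events, so they arguably belong at the
// same verbosity level:
log.V(2).Info("Deleted association between route table and subnet", "route-table", routeTableID, "subnet", subnetID)
log.V(2).Info("Deleted security group", "security-group", securityGroupID)
```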
What do you expect?
Related Issues