odo logs too slow for non-local clusters #5872

Closed
dharmit opened this issue Jun 24, 2022 · 4 comments · Fixed by #5973
Labels
  • area/log: Issues or PRs related to `odo logs`
  • kind/bug: Categorizes issue or PR as related to a bug.
  • priority/High: Important issue; should be worked on before any other issues (except priority/Critical issue(s)).

Comments


dharmit commented Jun 24, 2022

What versions of software are you using?

Operating System: all

Output of odo version: main

How did you run odo exactly?

odo logs against a cluster that's not minikube/CRC, for example one spun up using ClusterBot.

Actual behavior

It takes many seconds to show the logs.

Expected behavior

It should take less time.

Any logs, error output, etc.?

Originally discussed at #5846 (comment).

/kind bug
/area log

@openshift-ci openshift-ci bot added kind/bug Categorizes issue or PR as related to a bug. area/log Issues or PRs related to `odo logs` labels Jun 24, 2022
@kadel kadel added the priority/High Important issue; should be worked on before any other issues (except priority/Critical issue(s)). label Jun 24, 2022

rm3l commented Jun 28, 2022

Expected behavior

It should take less time.

To ensure we won't have regressions in the future, I think it would be great to include some kind of performance testing for this, with an acceptable threshold defining what we mean by "less time". Relates to #5830, but for odo logs.
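
A minimal sketch of what such a timing guard could look like, written as a plain Go test rather than odo's actual Ginkgo suite; the 10-second budget and the assumption that a component is already deployed in the current namespace are placeholders, not agreed-upon numbers.

    // timing_guard_test.go: hypothetical latency regression guard for `odo logs`.
    package logs_test

    import (
        "os/exec"
        "testing"
        "time"
    )

    func TestOdoLogsLatency(t *testing.T) {
        const budget = 10 * time.Second // placeholder threshold, to be agreed upon

        start := time.Now()
        out, err := exec.Command("odo", "logs").CombinedOutput()
        elapsed := time.Since(start)

        if err != nil {
            t.Fatalf("odo logs failed: %v\n%s", err, out)
        }
        if elapsed > budget {
            t.Errorf("odo logs took %v, expected under %v", elapsed, budget)
        }
    }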


dharmit commented Jun 29, 2022

Self-note:

  • Look at client-go caching mentioned by Tomas (a discovery-cache sketch follows this list)
  • By threshold, Armel was referring to how long odo should wait before odo logs exits if it doesn't get the response it's looking for.
  • Check with Dev Sandbox cluster
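
A minimal sketch of the client-go caching idea, assuming the slow part is API discovery (listing every API group and resource on each run): client-go provides a disk-backed cached discovery client that can serve those lists from a local cache. The cache paths and the 10-minute TTL below are illustrative, not necessarily what odo would use.

    package main

    import (
        "path/filepath"
        "time"

        "k8s.io/client-go/discovery/cached/disk"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    // cachedDiscovery returns a discovery client that caches ServerGroups/ServerResources
    // responses on disk, so repeated invocations skip most of the discovery round-trips.
    func cachedDiscovery() (*disk.CachedDiscoveryClient, error) {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // Illustrative cache locations and TTL.
        cacheDir := filepath.Join(homedir.HomeDir(), ".kube", "cache", "discovery")
        httpCacheDir := filepath.Join(homedir.HomeDir(), ".kube", "cache", "http")
        return disk.NewCachedDiscoveryClientForConfig(config, cacheDir, httpCacheDir, 10*time.Minute)
    }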


dharmit commented Jul 20, 2022

Tasks pending on this issue:

  • Address odo logs --follow issue explained in Grab pod logs concurrently #5942 (comment).
  • In a follow-up PR, modify the code to first fetch the resources most likely to own the Pods odo is interested in: Deployments, ReplicaSets and Pods themselves. Check other resources after that.
  • Use channels instead of sharing a slice+mutex across the goroutines, as is the case here (a channel-based sketch follows this list):

    odo/pkg/kclient/all.go

    Lines 29 to 57 in e7588e3

    var mu sync.Mutex
    var wg sync.WaitGroup
    var out []unstructured.Unstructured
    start := time.Now()
    klog.V(2).Infof("starting to query %d APIs in concurrently", len(apis))
    var errResult error
    for _, api := range apis {
        if !api.r.Namespaced {
            klog.V(4).Infof("[query api] api (%s) is non-namespaced, skipping", api.r.Name)
            continue
        }
        wg.Add(1)
        go func(a apiResource) {
            defer wg.Done()
            klog.V(4).Infof("[query api] start: %s", a.GroupVersionResource())
            v, err := queryAPI(client, a, ns, selector)
            if err != nil {
                klog.V(4).Infof("[query api] error querying: %s, error=%v", a.GroupVersionResource(), err)
                errResult = err
                return
            }
            mu.Lock()
            out = append(out, v...)
            mu.Unlock()
            klog.V(4).Infof("[query api] done: %s, found %d apis", a.GroupVersionResource(), len(v))
        }(api)
    }
  • Anything else that we might be able to think of while working on the above.
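
A minimal sketch of the channel-based rewrite, with a hypothetical ordering that dispatches Deployments, ReplicaSets and Pods first. It reuses the declarations from the snippet above (apis, client, ns, selector, apiResource, queryAPI), would additionally need the sort import, and is not the actual change made for this issue.

    // Buffered channels sized to the number of workers, so senders never block.
    results := make(chan []unstructured.Unstructured, len(apis))
    errs := make(chan error, len(apis))
    var wg sync.WaitGroup

    // Hypothetical prioritization: start the queries most likely to own odo's Pods first.
    priority := map[string]bool{"deployments": true, "replicasets": true, "pods": true}
    sort.SliceStable(apis, func(i, j int) bool {
        return priority[apis[i].r.Name] && !priority[apis[j].r.Name]
    })

    for _, api := range apis {
        if !api.r.Namespaced {
            continue
        }
        wg.Add(1)
        go func(a apiResource) {
            defer wg.Done()
            v, err := queryAPI(client, a, ns, selector)
            if err != nil {
                errs <- err
                return
            }
            results <- v
        }(api)
    }

    // Close both channels once all workers are done, then drain them on the caller's
    // goroutine; no mutex is needed because only this goroutine appends to out.
    go func() {
        wg.Wait()
        close(results)
        close(errs)
    }()

    var out []unstructured.Unstructured
    for v := range results {
        out = append(out, v...)
    }
    var errResult error
    for err := range errs {
        errResult = err // keep the last error, mirroring the current code
    }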

We have discussed adding tests that would help us track the performance of odo logs so that we know if it slows down in the future. The goal is to avoid regressions, or at least to catch them ourselves before they become a problem for users. I'd prefer that this be done in a separate issue.

@dharmit dharmit moved this from In Progress to For review in odo v3-beta2 Aug 2, 2022
Repository owner moved this from For review to Done in odo v3-beta2 Aug 5, 2022

rm3l commented Aug 8, 2022

We have discussed adding tests that would help us track the performance of odo logs so that we know if it slows down in the future. The goal is to avoid regressions, or at least to catch them ourselves before they become a problem for users. I'd prefer that this be done in a separate issue.

@dharmit Just so we don't forget, I've created this user story to track the performance of odo logs: #6014
Feel free to add any additional info you might have.

@rm3l rm3l added the v3 label Oct 7, 2022