
odo 'hangs' after using a cluster that's no longer accessible #4046

Closed
deboer-tim opened this issue Sep 25, 2020 · 5 comments · Fixed by #4307
Labels
  • area/UX: Issues or PRs related to User Experience
  • kind/bug: Categorizes issue or PR as related to a bug.
  • priority/Medium: Nice to have issue. Getting it done before priority changes would be great.

Comments

deboer-tim commented Sep 25, 2020

/kind bug

What versions of software are you using?

Operating System:
MacBook Pro with Catalina

Output of odo version:
odo v2.0.0 (6fbb9d9)

How did you run odo exactly?

I was using odo successfully, then installed CRC, used it for a bit, and then stopped the CRC cluster.

Actual behavior

Tried to use odo again and every command (even ones I would have thought are 'local') takes >1 min to respond. There are no errors, but each time it looks like it has hung and is never going to return, and then it finally does. Even things that should fail fast (e.g. the current dir is not a component) hang for a while.

Expected behavior

I realize something underlying is likely still trying to connect to my stopped cluster, but I never intentionally connected odo to it, and there is no indication that this is the problem: it simply looks like odo's performance is awful, and it's odo that I associate with the problem.

There should be:

  • a shorter timeout
  • when there is a long delay, there should be some indication that the command is trying to connect (so I don't think it's hung)
  • when there's a timeout, there should be something that tells me how to stop trying to connect to that cluster
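
A minimal sketch of the first suggestion, assuming odo builds its Kubernetes clients from the kubeconfig via client-go; the helper name and the 5-second value below are illustrative, not odo's actual code:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientWithTimeout is a hypothetical helper: it builds a clientset whose
// requests give up after `timeout` instead of waiting on the default dial and
// response timeouts when the cluster recorded in kubeconfig is unreachable.
func newClientWithTimeout(kubeconfig string, timeout time.Duration) (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// client-go honours rest.Config.Timeout as the overall per-request limit.
	config.Timeout = timeout
	return kubernetes.NewForConfig(config)
}

func main() {
	client, err := newClientWithTimeout(clientcmd.RecommendedHomeFile, 5*time.Second)
	if err != nil {
		fmt.Println("could not build client:", err)
		return
	}
	_ = client // server calls made through this clientset now fail fast
}
```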
openshift-ci-robot added the kind/bug label on Sep 25, 2020
girishramnani added the area/UX and priority/Medium labels on Oct 5, 2020
adisky self-assigned this on Nov 19, 2020
adisky (Contributor) commented Dec 10, 2020

@deboer-tim mentioned that odo seems to hang when the cluster is unreachable, even for error cases like The current directory does not represent an odo component. I took the odo push command as a base case:

  • case-1: odo push, cluster is reachable
    It works fine, no hang

  • case-2: odo push, cluster not reachable, no local config data/devfile present in the folder
    odo hangs for quite a while and gives error The current directory does not represent an odo component. Use 'odo create' to create component here or switch to directory with a component

Ideally, in this case odo should return the error instantly, with no hang or delay.

  • case-3: odo push, cluster not reachable, devfile.yaml/env.yaml present in the folder

In this case a little delay is justified since odo is trying to reach the cluster, but when I run with verbose logging enabled, it hangs for a while even before it starts hitting the OpenShift APIs.

In a nutshell, odo hangs for a while when the cluster is not reachable. I found that the reason is that a few commands call IsCSVSupported during command initialization; that call tries to connect to the cluster, and odo appears to hang.
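
A simplified sketch of the pattern described above, not odo's actual code: the isCSVSupported and checkLocalComponent helpers below are placeholders for the real cluster check and the local devfile validation. The point is the ordering: purely local failures surface immediately, and the cluster is only contacted once the command genuinely needs it, rather than while the command is being initialized.

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// isCSVSupported stands in for the real check that queries the cluster for
// the ClusterServiceVersion API; against an unreachable cluster it blocks
// until the client gives up, which is the hang described above.
func isCSVSupported() (bool, error) { return false, errors.New("cluster unreachable") }

// checkLocalComponent stands in for the purely local validation that a
// devfile exists in the current directory.
func checkLocalComponent() error {
	if _, err := os.Stat("devfile.yaml"); err != nil {
		return errors.New("the current directory does not represent an odo component")
	}
	return nil
}

func newPushCmd() *cobra.Command {
	// Calling isCSVSupported() here, while the command tree is being built,
	// would hit the cluster before any local check has a chance to fail fast.
	return &cobra.Command{
		Use: "push",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Local validation first: no network access needed to fail.
			if err := checkLocalComponent(); err != nil {
				return err
			}
			// Only now talk to the cluster.
			if ok, err := isCSVSupported(); err == nil && ok {
				fmt.Println("cluster supports Operators (CSV)")
			}
			return nil
		},
	}
}

func main() {
	if err := newPushCmd().Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```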

adisky (Contributor) commented Dec 10, 2020

On my system I got around a 12-second delay from the IsCSVSupported calls:

[adisky@localhost odo]$ odo push --v 5
In IsCSVSupported  2020-12-10 15:11:31.759876225 +0530 IST m=+0.043293748
In IsCSVSupported  2020-12-10 15:11:34.817076826 +0530 IST m=+3.100494440
In IsCSVSupported  2020-12-10 15:11:37.889947183 +0530 IST m=+6.173364822
In IsCSVSupported  2020-12-10 15:11:40.961679592 +0530 IST m=+9.245097191
In IsCSVSupported  2020-12-10 15:11:44.034027999 +0530 IST m=+12.317445615
I1210 15:11:47.122253     819 util.go:432] path devfile.yaml doesn't exist, skipping it
I1210 15:11:47.122309     819 util.go:737] HTTPGetRequest: https://raw.githubusercontent.com/openshift/odo/master/build/VERSION
I1210 15:11:47.122541     819 util.go:758] Response will be cached in /tmp/odohttpcache for 1h0m0s
 ✗  The current directory does not represent an odo component. Use 'odo create' to create component here or switch to directory with a component

adisky (Contributor) commented Dec 10, 2020

The odo link, unlink, and odo service commands use IsCSVSupported during command initialization. It is easier to move the call for the link/unlink commands, but in the odo service commands we use it to differentiate the help text and flags.
https://github.com/openshift/odo/blob/master/pkg/odo/cli/service/create.go#L524
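
A rough, self-contained sketch of the shape of the problem at the linked line; all names and help strings are placeholders rather than odo's code. The long description is chosen by a cluster round-trip while the command tree is being built, so even `odo service create --help` waits on the network:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/spf13/cobra"
)

// isCSVSupported stands in for the cluster query; with an unreachable
// cluster it blocks until the client gives up.
func isCSVSupported() (bool, error) { return false, errors.New("cluster unreachable") }

func newServiceCreateCmd() *cobra.Command {
	// Help text depends on a cluster call made during command construction.
	long := "Create a service from the Service Catalog."
	if ok, _ := isCSVSupported(); ok {
		long = "Create an Operator-backed service from a ClusterServiceVersion."
	}
	return &cobra.Command{
		Use:  "create",
		Long: long,
		RunE: func(cmd *cobra.Command, args []string) error { return nil },
	}
}

func main() {
	fmt.Println(newServiceCreateCmd().Long)
}
```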

dharmit (Member) commented Dec 10, 2020

> The odo link, unlink, and odo service commands use IsCSVSupported during command initialization. It is easier to move the call for the link/unlink commands, but in the odo service commands we use it to differentiate the help text and flags.
> https://github.com/openshift/odo/blob/master/pkg/odo/cli/service/create.go#L524

This sucks. And it's a piece that I added. 😞

We faced a similar issue earlier in #3825 (comment). This check for CSV support is my mistake. We were working under the assumption that if the user is on a cluster that supports Operators, we would not show anything related to the Service Catalog in the help output. That assumption is no longer valid, and we have to fix this.

adisky (Contributor) commented Dec 10, 2020

We decided to remove the IsCSVSupported call during command initialization; we will instead update the help message to list all the available options.
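
A minimal sketch of what that could look like; the wording and names are illustrative, not the text odo ships. The help becomes static and documents both options, and any cluster capability detection moves into the command's run phase:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func newServiceCreateCmd() *cobra.Command {
	return &cobra.Command{
		Use: "create",
		// Static help: lists every way of creating a service, with no
		// cluster call while the command tree is being built.
		Long: "Create a new service.\n\n" +
			"On clusters with Operators installed, services are created from a ClusterServiceVersion (CSV).\n" +
			"On clusters with the Service Catalog enabled, services are created from the catalog.",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Any capability detection (e.g. the CSV check) happens here,
			// at run time, once the user actually invokes the command.
			return nil
		},
	}
}

func main() {
	fmt.Println(newServiceCreateCmd().Long)
}
```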
