
odo logs --follow should not break if no containers are running #6058

Closed
valaparthvi opened this issue Aug 25, 2022 · 6 comments · Fixed by #6914
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/High Important issue; should be worked on before any other issues (except priority/Critical issue(s)).

@valaparthvi
Contributor

valaparthvi commented Aug 25, 2022

/kind bug

What versions of software are you using?

Operating System: Fedora 32

Output of odo version: odo v3.0.0-beta3 (66fbfe0bf)

How did you run odo exactly?

  1. mkcd /tmp/10
  2. odo init
  3. odo logs --follow

Actual behavior

$ odo logs --follow
no containers running in the specified mode for the component "devfile-nodejs-deploy" 

Expected behavior

If the component is not running on the cluster, odo logs --follow should wait until there is some data to print, and keep following until the user exits.

Related to #5716
cc: @feloy

Any logs, error output, etc?

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 25, 2022
@valaparthvi
Contributor Author

valaparthvi commented Aug 26, 2022

I think odo logs --follow was implemented the way kubectl logs is implemented; that's why it fails.
But to enhance it, we could make it function the way odo dev does, as a long-running process. In case of error, we could wait until good logs are received, or ask the user to hit Ctrl+C to exit.

@kadel kadel added the priority/Medium Nice to have issue. Getting it done before priority changes would be great. label Aug 29, 2022
@valaparthvi valaparthvi self-assigned this Sep 1, 2022
@valaparthvi
Contributor Author

Working on it as a part of #5716

@rm3l rm3l added this to odo v3.0.0 Sep 1, 2022
@dharmit
Member

dharmit commented Sep 1, 2022

Actual behavior

$ odo logs --follow
no containers running in the specified mode for the component "devfile-nodejs-deploy" 

This behaviour is intentional. A user using odo logs or odo logs --follow wants to see the logs for containers running in both Dev and Deploy modes. But until odo dev or odo deploy are executed, there are no containers running in either of the modes. Hence, the message you see there.

IMO, just because user specified --follow doesn't mean odo should behave smartly and wait till the user executes odo dev or odo deploy in a different terminal.

Expected behavior

If the component is not running on the cluster, odo logs --follow should wait until there is some data to print, and keep following until the user exits.

IMO, no. That's just trying to fix a problem that doesn't exist, and it makes odo unnecessarily smart.

Probably I'm biased because I worked on this. But trying to make odo too smart has always seemed unappealing to me. My bias aside, is this something we discussed as a problem that needs to be changed before starting to work on it as a part of the mentioned issue/PR?

I think odo logs --follow was implemented the way kubectl logs has been implemented, that's why it fails.

odo logs is using the same underlying libs and functions as kubectl logs. Plus, the part "that's why it fails" is subjective. As I said, the behaviour is intentional.

But to enhance it, we could make it function the way odo dev does, as a long running process. In case of error, we can wait until good logs are received or ask the user to hit Ctrl+c and exit.

I'm going to blow the same old trumpet here - why do we want to enhance it? The way I see it, it's not even an enhancement, but added smartness and maintenance overhead. Has any user suggested that this is a flawed behaviour/approach? If yes, I'll shut up. If not, I don't understand why, as engineers, we want to make odo so smart about various things instead of keeping things simple.

Probably my biases are screaming here, but I'm strongly opinionated about consciously keeping things simpler rather than making a tool too smart!

@feloy
Contributor

feloy commented Sep 1, 2022

(quoting @dharmit's comment above in full)

We want this enhancement to be able to use it for odo dev --logs.

The problem we encounter while implementing odo dev --logs is that the logs module works only when the pods are already running.

I can see different solutions to be able to use this module for odo dev --logs:

  • either wait for all pods to be running before starting the logs module
  • or make the logs module reactive, so it can wait for and detect new running pods

We have chosen to explore the second solution first, as it seems more robust in an asynchronous environment.
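The reactive option described above can be sketched with a simplified watch loop. This is an illustration only: PodEvent and streamNewRunningPods are hypothetical names, and a channel of events stands in for what would, in a real implementation, come from client-go's watch interface against the cluster.

```go
package main

import "fmt"

// PodEvent is a simplified stand-in for a Kubernetes watch event
// (a real implementation would consume events from client-go).
type PodEvent struct {
	Name  string
	Phase string // "Pending", "Running", ...
}

// streamNewRunningPods implements the "reactive" idea: rather than
// requiring pods to be running up front, it consumes watch events and
// starts following logs for each pod the first time it becomes Running.
func streamNewRunningPods(events <-chan PodEvent, follow func(pod string)) {
	seen := map[string]bool{}
	for ev := range events {
		if ev.Phase == "Running" && !seen[ev.Name] {
			seen[ev.Name] = true
			follow(ev.Name)
		}
	}
}

func main() {
	events := make(chan PodEvent, 4)
	// Simulated cluster activity: a pod goes Pending, then Running,
	// then a second pod appears later (e.g. started by odo dev).
	events <- PodEvent{"web-1", "Pending"}
	events <- PodEvent{"web-1", "Running"}
	events <- PodEvent{"web-1", "Running"} // duplicate event: ignored
	events <- PodEvent{"job-1", "Running"}
	close(events)

	streamNewRunningPods(events, func(pod string) {
		fmt.Println("following logs of", pod)
	})
}
```

With this shape, the logs module never needs pods to exist before it starts; it simply reacts as they appear, which is why it composes naturally with a long-running command like odo dev --logs.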

@dharmit
Member

dharmit commented Sep 7, 2022

I can see different solutions to be able to use this module for odo dev --logs:

  • either wait for all pods to be running before starting the logs module

  • or make the logs module reactive, so it can wait for and detect new running pods

We have chosen to explore the second solution first, as it seems more robust in an asynchronous environment.

OK. I don't have a strong preference for either. Let's do whichever makes sense. But the side effect this is going to have on the current behaviour of odo logs --follow should be well communicated, and we should make it a conscious choice. Going through this issue, it didn't seem like the current behaviour of odo logs --follow was really an issue in itself, except that we couldn't reuse the code for odo dev --logs. Not being able to reuse code is OK.

Anyway, apologies if I seem to have deviated from the topic.

@rm3l rm3l added this to odo Project Sep 29, 2022
@rm3l rm3l added the backlog label Sep 29, 2022
@kadel kadel added priority/High Important issue; should be worked on before any other issues (except priority/Critical issue(s)). and removed priority/Medium Nice to have issue. Getting it done before priority changes would be great. labels Feb 2, 2023
@kadel
Copy link
Member

kadel commented Feb 2, 2023

bumping priority because this is a blocker for #5716

@rm3l rm3l removed the triage/ready label Jun 19, 2023
@feloy feloy self-assigned this Jun 19, 2023
@feloy feloy moved this to In Progress 🚧 in odo Project Jun 19, 2023
@feloy feloy moved this from In Progress 🚧 to In Review 👀 in odo Project Jun 27, 2023
@rm3l rm3l added this to the v3.13.0 🚀 milestone Jun 27, 2023
@github-project-automation github-project-automation bot moved this from In Review 👀 to Done ✅ in odo Project Jun 28, 2023