
Pods are terminated and new pods started on initial odo link and every subsequent odo push #3803

Closed
Tracked by #4242
ajm01 opened this issue Aug 20, 2020 · 11 comments
Labels
kind/bug, priority/Medium

Comments

ajm01 commented Aug 20, 2020

/kind bug

What versions of software are you using?

Operating System:
Red Hat Enterprise Linux Server 7.6 (Maipo)
Kernel: Linux 3.10.0-1062.12.1.el7.x86_64

Output of odo version:
[root@slobbed-inf jpa]# odo version
odo v1.2.5 (8b6a698)

Server: https://api.slobbed.os.fyre.ibm.com:6443
Kubernetes: v1.16.2

How did you run odo exactly?

odo link ServiceBindingRequest/example-servicebindingrequest

Actual behavior

The link is established; the pod is terminated and a new pod is started.
Every subsequent odo push results in the current pod being terminated and a new pod being initialized.

Expected behavior

The link is established; the pod is terminated and a new pod is started.
Every subsequent odo push should not cause the current pod to be terminated; the current pod should keep running.

Any logs, error output, etc?

I have the output of requesting the pod yaml at each step - pre-link, post-link, post-link-push, post-link-app-update-push:
podyamls.zip
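
For anyone wanting to reproduce this, a minimal sketch of how such snapshots can be captured (assuming the oc CLI; the pod name and output file names are placeholders):

  oc get pods                                     # find the component's current pod
  oc get pod <pod-name> -o yaml > pre-link.yaml   # repeat after each link/push step, adjusting the file name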

openshift-ci-robot added the kind/bug label Aug 20, 2020
openshift-bot commented Nov 19, 2020

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label Nov 19, 2020
dharmit (Member) commented Nov 19, 2020

We're making a bunch of changes (#4159, #4160 and #4242) under the hood of odo service and odo link as far as Operator Hub integration is concerned. I'm marking this as a low priority as a result.

/priority low
/remove-lifecycle stale

openshift-ci-robot added the priority/Low label and removed the lifecycle/stale label Nov 19, 2020
scottkurz (Contributor) commented Feb 2, 2021

On odo v2.0.4, I'm still seeing this. That is, I do:

  1. odo push # initial
  2. odo link # To a service created via this operator: https://operatorhub.io/operator/postgresql-operator-dev4devs-com
  3. odo push # Expect a new pod here
  4. Make trivial source change
  5. odo push # I see a whole new pod, which I shouldn't
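
One way to confirm the pod is really being replaced, as a sketch (assuming the oc CLI; run the watch in a second terminal):

  oc get pods --watch   # leave this running; a replaced pod shows Terminating, then a new pod name
  odo push              # in the first terminal; the watch should stay quiet if the pod is kept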

I can gather some more logs if it helps, but since @dharmit had said this was a low priority during refactoring I'll wait for him to ask first. Thanks.

dharmit (Member) commented Feb 3, 2021

5. odo push # I see a whole new pod, which I shouldn't

This is a problem on the odo side but, tbh, I have not dug into it because...

I can gather some more logs if it helps, but since @dharmit had said this was a low priority during refactoring I'll wait for him to ask first. Thanks.

We're working on changing the experience with odo link (#4208) and other service commands like odo service create, odo service delete (#4159 & #4160). I expect this bug to get addressed along with #4208.

/remove-priority low
/priority medium

openshift-ci-robot added the priority/Medium label and removed the priority/Low label Feb 3, 2021
openshift-bot commented May 4, 2021

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label May 4, 2021
dharmit (Member) commented May 4, 2021

/remove-lifecycle stale

openshift-ci-robot removed the lifecycle/stale label May 4, 2021
valaparthvi (Contributor) commented Jul 5, 2021

@ajm01 @scottkurz are you still facing this problem?
Perhaps this will not be a problem with the new changes to odo link. odo link no longer pushes any change to the server; changes are applied only on odo push.
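
In other words, the new flow looks roughly like this (the service kind and name below are illustrative, not from this issue):

  odo link Database/my-db   # only records the link locally; nothing is sent to the cluster
  odo push                  # the link is applied to the cluster only here
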
@feloy?

feloy (Contributor) commented Jul 5, 2021

Perhaps this will not be a problem with the new changes to odo link. odo link no longer pushes any change to the server; changes are applied only on odo push.
@feloy?

I think that this issue is fixed by #4819:

  1. create a component and a service
  2. odo push
  3. odo link service
  4. odo push # the component is restarted, as expected
  5. odo push # the component is not restarted
  6. edit source files
  7. odo push # the component is not restarted, and files are synced on the running component
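
A quick way to check steps 4-7, as a sketch (assuming the oc CLI and a single pod for the component):

  oc get pods -o jsonpath='{.items[0].metadata.name}'   # record the current pod name
  odo push                                              # rerun the command above; an unchanged name means no restart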

dharmit (Member) commented Jul 6, 2021

Closing this issue.

@ajm01 @scottkurz if you folks are still facing problems, feel free to reopen it.

/close

openshift-ci bot closed this as completed Jul 6, 2021
openshift-ci bot commented Jul 6, 2021

@dharmit: Closing this issue.

In response to this:

Closing this issue.

@ajm01 @scottkurz if you folks are still facing problems, feel free to reopen it.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

scottkurz (Contributor) commented Jul 13, 2021

NM... I said I had recreated this, but I may have run with odo v2.2.0. I'll do the test again on a clean env and update this.
