with skaffold dev, the forwarded ports change on code changes #1815

Closed
jiraguha opened this issue Mar 18, 2019 · 13 comments
Labels
area/portforward kind/bug Something isn't working

Comments

@jiraguha

Hi,

Adopting skaffold dev for Java development, I am experiencing an issue similar to #1594.

Each time I make a change in my Java code, every forwarded local port is incremented by 1.

Information

  • Skaffold version: v0.25.0
  • Operating system: Mac OS X 10.14.3 x86_64
  • Contents of skaffold.yaml:
apiVersion: skaffold/v1beta7
kind: Config
build:
  artifacts:
    - image: ijp/color-service
      jibGradle: {}

Steps to reproduce the behavior

  • When I run skaffold dev, I have: 8081 -> 8081
  • On the 1st change, I have: 8081 -> 8082
  • On the 2nd change, I have: 8081 -> 8083
  • ...

Br,
JP
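
For readers on newer skaffold releases: later schema versions added a user-defined portForward stanza that pins the local port explicitly instead of relying on automatic forwarding. A minimal sketch, assuming a schema version that supports it (these fields do not exist in the v1beta7 config above, and the service name is a placeholder):

portForward:
- resourceType: service
  resourceName: color-service   # placeholder; match your Service's metadata.name
  port: 8081                    # port exposed by the Service
  localPort: 8081               # pin the local port so it never drifts

With an explicit localPort, skaffold no longer has to pick a fresh local port on every redeploy.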

@priyawadhwa priyawadhwa added area/portforward kind/bug Something isn't working and removed area/portforward labels Mar 18, 2019
@priyawadhwa
Contributor

Hey @jiraguha, could you provide more information about your project (the contents of your k8s YAMLs would be helpful)? More specifically, are pods recreated upon every code change?

We recently updated the port-forward key to include the pod name, so if a pod is being recreated on every code change, that would explain why this is happening.

@rio

rio commented Mar 21, 2019

I just started using skaffold yesterday and I'm running into the same issue.
Here is my deployment and service as spit out by kustomize, nothing fancy:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: ...
    app.kubernetes.io/name: ...
  name: ...
spec:
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/component: ...
    app.kubernetes.io/name: ...
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: ...
    app.kubernetes.io/name: ...
  name: ...
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: ...
      app.kubernetes.io/name: ...
  template:
    metadata:
      labels:
        app.kubernetes.io/component: ...
        app.kubernetes.io/name: ...
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: secret-248g9bh45m
            optional: false
        image: ...
        name: ...
        ports:
        - containerPort: 3000
          name: http

@rio

rio commented Mar 21, 2019

For completeness, my skaffold.yaml:

apiVersion: skaffold/v1beta7
kind: Config
build:
  artifacts:
  - image: ...
deploy:
  kustomize:
    path: kubernetes/

@jiraguha
Author

jiraguha commented Mar 25, 2019

Hi @priyawadhwa,

You can find my whole project and the k8s manifests here. You will see it's basically something I generated with kompose.

Yes, the pods are recreated:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
color-app-7f58776499-l48wf       1/1     Running   0          15s
color-mongodb-7db994bff4-57wdz   1/1     Running   0          15s

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
color-app-658fb54ff-v7bvd        1/1     Running   0          6s
color-mongodb-7db994bff4-57wdz   1/1     Running   0          1m

So maybe the pod name should not be part of the key, since with Java everything is rebuilt (and the pods recreated) on every change.

Br,
JP

@lsowen

lsowen commented Mar 31, 2019

I was having the same issue: the pod name was changing because I was using a Deployment. I switched to using a Pod directly (so that the pod name was static) and now the port forwarding remains static as well.

However, it would be super useful if I could use a Deployment, as it would make the local environment more closely match the production environment.
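
A minimal sketch of this workaround, using placeholder names and the same label/port scheme as rio's manifests above; running the container as a bare Pod keeps its name, and therefore the port-forward key, stable across rebuilds:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # static name -> stable port-forward key
  labels:
    app.kubernetes.io/component: my-component
    app.kubernetes.io/name: my-app
spec:
  containers:
  - image: my-image                  # rebuilt and substituted by skaffold
    name: my-app
    ports:
    - containerPort: 3000
      name: http

The Service from the earlier manifest still matches this Pod through its labels; the tradeoff, as noted, is that a bare Pod drifts further from the Deployment you would run in production.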

@Renz2018

Renz2018 commented Apr 4, 2019

I was having the same issue.

@botwhytho

Seems related to #1594

I can confirm this behavior exists in v0.26.0. I reverted to v0.23.0 and the problem is indeed fixed on that version. Adding some tests around this would be great so this regression doesn't occur again.

Port forwarding during skaffold dev is a great value-add for developers, but it can't really gain more adoption from developer teams if this feature is broken. Too bad I'm stuck on an older version for this to work at the moment.

@nhartner

Also running into this issue and wondering if it's being considered a bug that will be fixed.

@priyawadhwa
Contributor

I just removed the pod name from the key in #2047 -- could someone please confirm whether this fixes their issue? You can find instructions for installing the latest version of skaffold at HEAD here, under "Latest bleeding edge binary".

@demisx
Contributor

demisx commented May 7, 2019

I just ran into the same issue with v0.28 on macOS. I checked the latest build and this port changing issue seems to be fixed. 👍

One thing I've noticed with the latest build, though unrelated to this issue, is that file sync now results in an error for me:

Syncing 1 files for dl-org-api.dev:3a15582509b7144222b3685129694697a1ca8b7d37b31397801d86f13013e51e
INFO[0396] Copying files: map[test/request/request-test-helper.ts:/usr/src/dl-org/test/request/request-test-helper.ts] 
to dl-org-api.dev:3a15582509b7144222b3685129694697a1ca8b7d37b31397801d86f13013e51e 
WARN[0397] Skipping deploy due to sync error: copying files: 
Running [kubectl exec dl-org-api-7b87dcbcf6-rqnlq --namespace default -c dl-org-api -i 
-- tar xmf - -C / --no-same-owner]: stdout , 
stderr: error: unable to upgrade connection: container not found ("dl-org-api")
, err: exit status 1: exit status 1 

Looks like it's still referencing the original pod dl-org-api-7b87dcbcf6-rqnlq, which is now in "Terminating" status after the update, instead of referencing the latest running pod.


@demisx
Contributor

demisx commented May 7, 2019

Hmm... I'm no longer seeing the sync error reported above. Seems to be working OK after a restart.

@priyawadhwa
Contributor

Thanks @demisx

Since this should be fixed in the next release, I'm going to go ahead and close this issue. If anyone continues to see problems, feel free to comment here and we can reopen it.
