
When using host network, the pod IP is always empty in downward API #24576

Closed
yifan-gu opened this issue Apr 21, 2016 · 18 comments · Fixed by #24633
Labels
area/downward-api lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@yifan-gu
Contributor

yifan-gu commented Apr 21, 2016

Turns out that when running a container in the host network, the podStatus *kubecontainer.PodStatus that is passed to dockertools.SyncPod() always contains an empty IP. As a result, the PodIP environment variable becomes empty.

Maybe we want to use the apiPodStatus instead of podStatus to determine the pod IP in this case, since the apiPodStatus has the valid PodIP, which is populated by the kubelet when the container runs in the host network.

Or just populate the podIP with the hostIP in dockertools.SyncPod when the container runs on the host network.
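
A minimal sketch of that second option, using stand-in types; hostNetwork and hostIP are plain parameters here, not the real kubelet plumbing:

```go
package main

import "fmt"

// Stand-in for kubecontainer.PodStatus; only the IP field is modeled.
type PodStatus struct {
	IP string
}

// podIP sketches the fallback: use the node's own IP when the runtime
// reported no pod IP and the pod shares the host network namespace. In the
// real kubelet, hostNetwork and hostIP would come from the pod spec and the
// node status; here they are plain parameters.
func podIP(status *PodStatus, hostNetwork bool, hostIP string) string {
	if status.IP == "" && hostNetwork {
		return hostIP
	}
	return status.IP
}

func main() {
	s := &PodStatus{} // empty IP, as reported for a host-network container
	fmt.Println(podIP(s, true, "10.0.0.5")) // falls back to the node IP
}
```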

Ref #22666

cc @aaronlevy @kubernetes/sig-node @pmorie

@yifan-gu yifan-gu added kind/bug Categorizes issue or PR as related to a bug. sig/node Categorizes an issue or PR as relevant to SIG Node. labels Apr 21, 2016
@Random-Liu
Member

Random-Liu commented Apr 21, 2016

Why not just set podStatus.IP to the same value as apiPodStatus.IP before SyncPod? Ideally, we want to remove apiPodStatus from SyncPod() in the future if possible, so it may be better to get the IP from podStatus.

@yujuhong WDYT?

@yujuhong
Contributor

> Why not just set the apiPodStatus.IP to podStatus.IP before SyncPod?

Do you mean "set the podStatus.IP to apiPodStatus.IP before SyncPod?"

IMO, we should call the block that determines the IP only once. Why don't we populate kubecontainer.PodStatus.IP before calling generateAPIPodStatus?
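
Roughly the ordering being proposed, as a compilable sketch with stand-in types; the generateAPIPodStatus body is a placeholder, not the real function:

```go
package main

import "fmt"

// Stand-ins for the two status types contrasted in this thread.
type PodStatus struct{ IP string }       // kubecontainer.PodStatus (runtime view)
type APIPodStatus struct{ PodIP string } // api.PodStatus (apiserver view)

// Placeholder: once PodStatus.IP is populated up front, generateAPIPodStatus
// (and SyncPod) can simply read it instead of re-deriving the IP.
func generateAPIPodStatus(s *PodStatus) APIPodStatus {
	return APIPodStatus{PodIP: s.IP}
}

func main() {
	s := &PodStatus{} // runtime reported no IP for a host-network pod
	// Determine the IP exactly once, before any consumer reads it; how the
	// kubelet learns the node IP here is what the rest of the thread debates.
	if s.IP == "" {
		s.IP = "10.0.0.5"
	}
	fmt.Println(generateAPIPodStatus(s).PodIP) // 10.0.0.5
}
```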

@Random-Liu
Member

Random-Liu commented Apr 21, 2016

> IMO, we should call the block that determines the IP only once. Why don't we populate kubecontainer.PodStatus.IP before calling generateAPIPodStatus?

We also use generateAPIPodStatus here. However, because we don't support bandwidth limits for host-network pods, it is fine to leave the IP unset there.

That said, I think it would be good to have the IP always correctly set in generateAPIPodStatus.

@dchen1107 dchen1107 added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Apr 22, 2016
@pmorie
Member

pmorie commented Apr 26, 2016

I must admit, I have been thinking about this in the background since @yifan-gu created this issue, and the discussion hasn't added up for me. I finally sat down and spent some time getting my numerous fixes for podIP via the downward API back into my brain, and now the problem seems very obvious to me.

So, after some thought, I believe that for the docker runtime the problem is that GetPodStatus, which is responsible for generating the kubecontainer.PodStatus used to supply the pod IP, doesn't check whether the host network is being used and return the correct IP address if so. However, generateAPIPodStatus does check, which is why the obvious fix seems to be to use the value from that method. I think the right thing to do instead is to change the following call stack to correctly determine the IP when the host network is in use:

DockerManager.GetPodStatus ->
  DockerManager.inspectContainer ->
    DockerManager.determineContainerIP
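
A hedged sketch of what that change could look like, with stand-in types for the docker inspect result; isHostNetwork and nodeIP are assumed inputs, and plumbing them into GetPodStatus is exactly what the following comments discuss:

```go
package main

import "fmt"

// Stand-ins for the docker inspect result; only the IP-relevant fields of
// the real docker client types are modeled here.
type NetworkSettings struct{ IPAddress string }
type Container struct{ NetworkSettings *NetworkSettings }

// determineContainerIP sketches the proposal: when the container has no IP
// of its own because it shares the host network namespace, report the
// node's IP instead of an empty string.
func determineContainerIP(c *Container, isHostNetwork bool, nodeIP string) string {
	if c.NetworkSettings != nil && c.NetworkSettings.IPAddress != "" {
		return c.NetworkSettings.IPAddress
	}
	if isHostNetwork {
		return nodeIP
	}
	return ""
}

func main() {
	c := &Container{NetworkSettings: &NetworkSettings{}} // no IP under host network
	fmt.Println(determineContainerIP(c, true, "10.0.0.5")) // 10.0.0.5
}
```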

@Random-Liu @yifan-gu @yujuhong

@yifan-gu
Contributor Author

@pmorie Today GetPodStatus isn't passed the pod spec, so it cannot know whether the pod is running on the host network. This could be solved if we serialize the pod spec into docker labels.
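
A hypothetical sketch of that label-based checkpointing; the label key is invented for illustration and is not an actual kubelet label:

```go
package main

import (
	"fmt"
	"strconv"
)

// Hypothetical label key, invented for this sketch.
const hostNetworkLabel = "io.kubernetes.pod.hostNetwork"

// writeLabels shows the container-creation side: record whether the pod
// uses the host network, so the flag survives without the pod spec.
func writeLabels(hostNetwork bool) map[string]string {
	return map[string]string{
		hostNetworkLabel: strconv.FormatBool(hostNetwork),
	}
}

// readHostNetwork shows the GetPodStatus side: recover the flag from the
// labels returned by docker inspect, defaulting to false when absent.
func readHostNetwork(labels map[string]string) bool {
	v, err := strconv.ParseBool(labels[hostNetworkLabel])
	return err == nil && v
}

func main() {
	labels := writeLabels(true)
	fmt.Println(readHostNetwork(labels)) // true
}
```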

BTW, I think we have a similar issue here for rkt.

@Random-Liu
Member

@pmorie As @yifan-gu said, GetPodStatus doesn't have the full pod spec today. And I don't think the docker manager has knowledge of the host IP right now.

@pmorie
Member

pmorie commented Apr 27, 2016

@Random-Liu @yifan-gu I think we should think about adding the plumbing. I have not had a good experience storing this information in multiple places :)

@yujuhong
Contributor

yujuhong commented May 2, 2016

The code was written so that it'd work in the absence of the pod specs (which we don't always have).

> @Random-Liu @yifan-gu I think we should think about adding the plumbing. I have not had a good experience storing this information in multiple places :)

I agree with you, but adding the dependency without some form of checkpointing of the pod specs (e.g., in container annotations) may be problematic today. We can revisit this after we store pod specs in the annotations. What do you think? :)

@pmorie
Member

pmorie commented May 5, 2016

I'm cool with revisiting, @yujuhong, but if we do that we should drop this to P2.

k8s-github-robot pushed a commit that referenced this issue May 16, 2016
Automatic merge from submit-queue

Fix downward api for pod using host network.

Fixes #24576

@yujuhong @yifan-gu 
I still can't find a cleaner way to fix this, so I'm sending this out for discussion.

@yifan-gu yifan-gu added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels May 16, 2016
@yifan-gu
Contributor Author

Reopening the issue to capture #24633 (comment).

@xiang90
Contributor

xiang90 commented Nov 15, 2016

@yifan-gu Is this fixed?

@bassam

bassam commented Jan 3, 2017

Hi, I'm also interested in knowing whether this is fixed.

@yifan-gu
Contributor Author

yifan-gu commented Jan 3, 2017

@bassam @xiang90 I don't think anyone is tackling this one right now, since people are mostly working on the new CRI implementation for the different runtimes. cc @Random-Liu @yujuhong

@bassam

bassam commented Jan 3, 2017

@yifan-gu thanks. It also looks like there might be a specific issue with rkt here. Looking at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/rkt/rkt.go#L804 and
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/rkt/rkt.go#L1304, it looks like rkt does not handle setting status.PodIP when using the host network.

bassam added a commit to bassam/bootkube that referenced this issue Jan 3, 2017
@yifan-gu
Contributor Author

yifan-gu commented Jan 3, 2017

@bassam Thanks for the heads up. I will take a look to see if we can quickly work around this.

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 18, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close

openshift-publish-robot pushed a commit to openshift/kubernetes that referenced this issue Feb 28, 2020
…herry-pick-24564-to-release-4.4

[release-4.4] Bug 1804514: kubelet: Make default journal format precise

Origin-commit: cc614cecc12ec55e2c111d66daa2c130346c9053