This repository has been archived by the owner on Feb 24, 2020. It is now read-only.

stage1: Temporary fix for symlink mount issue. #2290

Merged: 1 commit merged into rkt:master from fix_symlink on Mar 18, 2016

Conversation

yifan-gu (Contributor)

nspawn doesn't handle a mount operation correctly if the container path contains a symlink.
This tries to fix the problem in rkt by evaluating the mount path and resolving any symlinks before passing the mount path to nspawn.

cc @alban @iaguis @jonboulle @sjpotter
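For illustration only, a minimal sketch of this kind of resolution (the package and function names below are made up for the example, not the actual rkt patch): join the container path onto the app's rootfs on the host, resolve symlinks there, and return the result relative to the rootfs so nspawn never sees a symlink component.

package stage1

import (
	"fmt"
	"path/filepath"
)

// evalSymlinksUnderRootfs is an illustrative sketch, not the actual rkt change:
// it resolves symlinks in a container path against the app's rootfs on the host
// and returns the resolved path relative to that rootfs, so the path handed to
// systemd-nspawn contains no symlink components.
func evalSymlinksUnderRootfs(appRootfs, containerPath string) (string, error) {
	hostPath := filepath.Join(appRootfs, containerPath)

	// Naive resolution: this assumes the path already exists and that the
	// symlink targets stay inside the rootfs; a real fix has to be more careful
	// about both, e.g. absolute symlinks that would otherwise escape the rootfs.
	resolved, err := filepath.EvalSymlinks(hostPath)
	if err != nil {
		return "", fmt.Errorf("could not evaluate path %v: %v", containerPath, err)
	}

	// Return the path relative to the rootfs, matching the documented contract
	// of evaluateAppMountPath in the diff below.
	return filepath.Rel(appRootfs, resolved)
}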

@jonboulle (Contributor)

temporal -> temporary

@yifan-gu changed the title from "stage1: Temporal fix for symlink mount issue." to "stage1: Temporary fix for symlink mount issue." on Mar 16, 2016
@@ -565,6 +565,50 @@ func PodToSystemd(p *stage1commontypes.Pod, interactive bool, flavor string, pri
return nil
}

// evaluateAppMountPath tries to resolve symlinks within the path.
// It returns the actual relative path for the given path.
// TODO(yifan): This is a temporal fix for systemd-nspawn not handling symlink mounts well.
Member
temporal -> temporary

@yifan-gu force-pushed the fix_symlink branch 2 times, most recently from 5010f43 to ce5d7d0 on March 16, 2016 at 20:35
@@ -565,6 +565,54 @@ func PodToSystemd(p *stage1commontypes.Pod, interactive bool, flavor string, pri
return nil
}

// evaluateAppMountPath tries to resolve symlinks within the path.
// It returns the actual relative path for the given path.
// TODO(yifan): This is a temporary fix for systemd-nspawn not handling symlink mounts well.
Contributor

Is there an upstream systemd issue we can reference?

Contributor Author

Not yet at this moment? @iaguis @alban

Contributor Author

@alban @jonboulle Added the reference to the issue.

appRootfs := common.AppRootfsPath(absRoot, appName)
mntPath, err := evaluateAppMountPath(appRootfs, m.Path)
if err != nil {
return nil, fmt.Errorf("could not evaluate path %v: %v", m.Path, err)
Member

errwrap?
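Presumably this refers to wrapping the error with rkt's hashicorp/errwrap dependency instead of fmt.Errorf, so callers can still inspect the underlying error. A minimal sketch of that style, with an illustrative helper name:

package stage1

import (
	"fmt"

	"github.com/hashicorp/errwrap"
)

// wrapEvalError is an illustrative helper: wrapping with errwrap keeps the
// underlying error inspectable (via errwrap.Contains and friends) instead of
// flattening it into a string as fmt.Errorf does.
func wrapEvalError(path string, err error) error {
	return errwrap.Wrap(fmt.Errorf("could not evaluate path %v", path), err)
}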

@iaguis (Member)

iaguis commented Mar 18, 2016

How about we fix the errwrap and test issues and leave the rest for later?

@alban (Member)

alban commented Mar 18, 2016

> How about we fix the errwrap and test issues and leave the rest for later?

That's ok for me.

@iaguis (Member)

iaguis commented Mar 18, 2016

OK. The rest LGTM so let's merge this.

iaguis added a commit that referenced this pull request Mar 18, 2016
stage1: Temporary fix for symlink mount issue.
@iaguis merged commit 985606e into rkt:master on Mar 18, 2016
@iaguis (Member)

iaguis commented Mar 18, 2016

Follow-up: #2298

@yifan-gu deleted the fix_symlink branch on March 18, 2016 at 18:23
@sjpotter (Contributor)

sjpotter commented Apr 5, 2016

This is an issue again.

on my host

# ls /var/run/secrets/kubernetes.io/serviceaccount/
#

in the container

spotter-rkt-minion-8m45 spotter # rkt enter --app=l7-lb-controller 1f1457dd
enter: no command specified, assuming "/bin/bash"
groups: cannot find name for group ID 11
root@rkt-1f1457dd-9ced-4b47-ab88-e938c5937ffd:/# ls /var/run/secrets/kubernetes.io/serviceaccount/
ls: cannot access /var/run/secrets/kubernetes.io/serviceaccount/: No such file or directory

investigating what happened

@sjpotter (Contributor)

sjpotter commented Apr 5, 2016

Yeah, something in 1.3.0 (possibly Docker volume support?) broke this: yifan's 1.2.1 build with his fix works fine, but 1.3.0 pulled off GitHub does not.

@alban (Member)

alban commented Apr 6, 2016

@sjpotter what rkt run command did you use and what is the image manifest (rkt image cat-manifest)?

@sjpotter (Contributor)

sjpotter commented Apr 6, 2016

{
    "acVersion": "0.7.4+git",
    "acKind": "PodManifest",
    "apps": [{
        "name": "default-http-backend",
        "image": {
            "id": "sha512-d8bdd604d73976278dd46436602388d6edde3ceb6309e9cf88c28d486ba6617b"
        },
        "app": {
            "exec": ["/server"],
            "user": "0",
            "group": "0",
            "environment": [{
                "name": "HEAPSTER_PORT_80_TCP_PORT",
                "value": "80"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PORT",
                "value": "80"
            }, {
                "name": "PATH",
                "value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            }, {
                "name": "KUBERNETES_PORT_443_TCP_PORT",
                "value": "443"
            }, {
                "name": "HEAPSTER_SERVICE_HOST",
                "value": "10.0.24.212"
            }, {
                "name": "KUBERNETES_PORT",
                "value": "tcp://10.0.0.1:443"
            }, {
                "name": "KUBE_DNS_SERVICE_PORT_DNS_TCP",
                "value": "53"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP_PROTO",
                "value": "udp"
            }, {
                "name": "KUBERNETES_SERVICE_HOST",
                "value": "10.0.0.1"
            }, {
                "name": "KUBERNETES_SERVICE_PORT",
                "value": "443"
            }, {
                "name": "KUBERNETES_PORT_443_TCP_ADDR",
                "value": "10.0.0.1"
            }, {
                "name": "KUBERNETES_DASHBOARD_SERVICE_PORT",
                "value": "80"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP",
                "value": "tcp://10.0.217.18:80"
            }, {
                "name": "KUBE_DNS_SERVICE_PORT",
                "value": "53"
            }, {
                "name": "HEAPSTER_PORT_80_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP_ADDR",
                "value": "10.0.0.10"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_SERVICE_HOST",
                "value": "10.0.107.21"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT",
                "value": "80"
            }, {
                "name": "KUBERNETES_PORT_443_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP_PORT",
                "value": "53"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP_ADDR",
                "value": "10.0.217.18"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP_PORT",
                "value": "53"
            }, {
                "name": "HEAPSTER_PORT_80_TCP_ADDR",
                "value": "10.0.24.212"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT",
                "value": "tcp://10.0.107.21:80"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP",
                "value": "tcp://10.0.107.21:80"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PORT",
                "value": "80"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "HEAPSTER_PORT_80_TCP",
                "value": "tcp://10.0.24.212:80"
            }, {
                "name": "KUBERNETES_SERVICE_PORT_HTTPS",
                "value": "443"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP",
                "value": "tcp://10.0.0.10:53"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "KUBERNETES_PORT_443_TCP",
                "value": "tcp://10.0.0.1:443"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP_ADDR",
                "value": "10.0.0.10"
            }, {
                "name": "KUBERNETES_DASHBOARD_SERVICE_HOST",
                "value": "10.0.217.18"
            }, {
                "name": "KUBE_DNS_SERVICE_HOST",
                "value": "10.0.0.10"
            }, {
                "name": "KUBE_DNS_PORT",
                "value": "udp://10.0.0.10:53"
            }, {
                "name": "HEAPSTER_SERVICE_PORT",
                "value": "80"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP",
                "value": "udp://10.0.0.10:53"
            }, {
                "name": "HEAPSTER_PORT",
                "value": "tcp://10.0.24.212:80"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT",
                "value": "tcp://10.0.217.18:80"
            }, {
                "name": "KUBE_DNS_SERVICE_PORT_DNS",
                "value": "53"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_ADDR",
                "value": "10.0.107.21"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT_HTTP",
                "value": "80"
            }],
            "mountPoints": [{
                "name": "default-token-601vm",
                "path": "/var/run/secrets/kubernetes.io/serviceaccount",
                "readOnly": true
            }, {
                "name": "termination-message-96427d8b-fc29-11e5-9686-42010af00005",
                "path": "/dev/termination-log"
            }],
            "ports": [{
                "name": "default-http-backend-tcp-8080",
                "protocol": "TCP",
                "port": 8080,
                "count": 1,
                "socketActivated": false
            }],
            "isolators": [{
                "name": "resource/cpu",
                "value": {
                    "default": false,
                    "request": "10m",
                    "limit": "10m"
                }
            }, {
                "name": "resource/memory",
                "value": {
                    "default": false,
                    "request": "20Mi",
                    "limit": "20Mi"
                }
            }]
        },
        "annotations": [{
            "name": "io.kubernetes.container.hash",
            "value": "2473234419"
        }, {
            "name": "io.kubernetes.container.termination-message-path",
            "value": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/default-http-backend/96427d8b-fc29-11e5-9686-42010af00005"
        }]
    }, {
        "name": "l7-lb-controller",
        "image": {
            "id": "sha512-598357eb4a7d964760c92043e4daaab8765327d5f9a4b6cfc8f62b692d9dcbcf"
        },
        "app": {
            "exec": ["/glbc", "--default-backend-service=kube-system/default-http-backend", "--sync-period=300s"],
            "user": "0",
            "group": "0",
            "environment": [{
                "name": "KUBE_DNS_SERVICE_PORT",
                "value": "53"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP_ADDR",
                "value": "10.0.0.10"
            }, {
                "name": "KUBERNETES_PORT_443_TCP",
                "value": "tcp://10.0.0.1:443"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT",
                "value": "tcp://10.0.217.18:80"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_SERVICE_HOST",
                "value": "10.0.107.21"
            }, {
                "name": "HEAPSTER_SERVICE_PORT",
                "value": "80"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT",
                "value": "80"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "KUBERNETES_PORT_443_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PORT",
                "value": "80"
            }, {
                "name": "KUBE_DNS_SERVICE_PORT_DNS_TCP",
                "value": "53"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT",
                "value": "tcp://10.0.107.21:80"
            }, {
                "name": "KUBERNETES_SERVICE_HOST",
                "value": "10.0.0.1"
            }, {
                "name": "DEBIAN_FRONTEND",
                "value": "noninteractive"
            }, {
                "name": "KUBERNETES_SERVICE_PORT_HTTPS",
                "value": "443"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_ADDR",
                "value": "10.0.107.21"
            }, {
                "name": "HEAPSTER_PORT",
                "value": "tcp://10.0.24.212:80"
            }, {
                "name": "KUBERNETES_PORT",
                "value": "tcp://10.0.0.1:443"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP",
                "value": "tcp://10.0.217.18:80"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP_PORT",
                "value": "53"
            }, {
                "name": "HEAPSTER_PORT_80_TCP_PORT",
                "value": "80"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "HEAPSTER_PORT_80_TCP_PROTO",
                "value": "tcp"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT_HTTP",
                "value": "80"
            }, {
                "name": "HEAPSTER_PORT_80_TCP_ADDR",
                "value": "10.0.24.212"
            }, {
                "name": "KUBERNETES_PORT_443_TCP_ADDR",
                "value": "10.0.0.1"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP_ADDR",
                "value": "10.0.0.10"
            }, {
                "name": "KUBE_DNS_PORT_53_TCP",
                "value": "tcp://10.0.0.10:53"
            }, {
                "name": "KUBE_DNS_SERVICE_PORT_DNS",
                "value": "53"
            }, {
                "name": "KUBERNETES_DASHBOARD_SERVICE_HOST",
                "value": "10.0.217.18"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP_ADDR",
                "value": "10.0.217.18"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP_PORT",
                "value": "53"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP",
                "value": "udp://10.0.0.10:53"
            }, {
                "name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP",
                "value": "tcp://10.0.107.21:80"
            }, {
                "name": "KUBE_DNS_SERVICE_HOST",
                "value": "10.0.0.10"
            }, {
                "name": "KUBE_DNS_PORT_53_UDP_PROTO",
                "value": "udp"
            }, {
                "name": "PATH",
                "value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            }, {
                "name": "HEAPSTER_PORT_80_TCP",
                "value": "tcp://10.0.24.212:80"
            }, {
                "name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PORT",
                "value": "80"
            }, {
                "name": "KUBERNETES_SERVICE_PORT",
                "value": "443"
            }, {
                "name": "KUBERNETES_DASHBOARD_SERVICE_PORT",
                "value": "80"
            }, {
                "name": "KUBE_DNS_PORT",
                "value": "udp://10.0.0.10:53"
            }, {
                "name": "KUBERNETES_PORT_443_TCP_PORT",
                "value": "443"
            }, {
                "name": "HEAPSTER_SERVICE_HOST",
                "value": "10.0.24.212"
            }],
            "mountPoints": [{
                "name": "default-token-601vm",
                "path": "/var/run/secrets/kubernetes.io/serviceaccount",
                "readOnly": true
            }, {
                "name": "termination-message-96447821-fc29-11e5-9686-42010af00005",
                "path": "/dev/termination-log"
            }],
            "isolators": [{
                "name": "resource/cpu",
                "value": {
                    "default": false,
                    "request": "100m",
                    "limit": "100m"
                }
            }, {
                "name": "resource/memory",
                "value": {
                    "default": false,
                    "request": "50Mi",
                    "limit": "100Mi"
                }
            }]
        },
        "annotations": [{
            "name": "io.kubernetes.container.hash",
            "value": "1429177749"
        }, {
            "name": "io.kubernetes.container.termination-message-path",
            "value": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/l7-lb-controller/96447821-fc29-11e5-9686-42010af00005"
        }]
    }],
    "volumes": [{
        "name": "termination-message-96427d8b-fc29-11e5-9686-42010af00005",
        "kind": "host",
        "source": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/default-http-backend/96427d8b-fc29-11e5-9686-42010af00005"
    }, {
        "name": "termination-message-96447821-fc29-11e5-9686-42010af00005",
        "kind": "host",
        "source": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/l7-lb-controller/96447821-fc29-11e5-9686-42010af00005"
    }, {
        "name": "default-token-601vm",
        "kind": "host",
        "source": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/volumes/kubernetes.io~secret/default-token-601vm"
    }],
    "isolators": null,
    "annotations": [{
        "name": "io.kubernetes.pod.managed-by-kubelet",
        "value": "true"
    }, {
        "name": "io.kubernetes.pod.uid",
        "value": "3c281b63-fc29-11e5-adc0-42010af00002"
    }, {
        "name": "io.kubernetes.pod.name",
        "value": "l7-lb-controller-v0.5.2-i1jjh"
    }, {
        "name": "io.kubernetes.pod.namespace",
        "value": "kube-system"
    }, {
        "name": "io.kubernetes.container.created",
        "value": "1459969124"
    }, {
        "name": "io.kubernetes.container.restart-count",
        "value": "1"
    }],
    "ports": [{
        "name": "default-http-backend-tcp-8080",
        "hostPort": 0
    }]
}

Duplicating it with yifan's patched 1.2.1 rkt, it works:

spotter-rkt-minion-1pwy spotter # /opt/rkt/rkt run --pod-manifest manifest
image: using image from local store for image name coreos.com/rkt/stage1-coreos:1.2.1+gite568957
networking: loading networks from /etc/rkt/net.d
networking: loading network default with type ptp
[  449.509079] glbc[6]: I0406 19:01:37.714957       6 main.go:150] Starting GLBC image: 0.5.2
[  449.509645] glbc[6]: I0406 19:01:37.715403       6 main.go:220] Waiting for kube-system/default-http-backend
[  450.609713] glbc[6]: I0406 19:01:38.815729       6 main.go:229] Node port 32669
[  450.612195] glbc[6]: I0406 19:01:38.818199       6 gce.go:201] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
[  450.612894] glbc[6]: I0406 19:01:38.818935       6 controller.go:158] Created new loadbalancer controller
[  450.613577] glbc[6]: I0406 19:01:38.819618       6 controller.go:190] Starting loadbalancer controller
[  450.801168] glbc[6]: I0406 19:01:39.007124       6 utils.go:85] Syncing spotter-rkt-master
[  450.801718] glbc[6]: I0406 19:01:39.007752       6 utils.go:85] Syncing spotter-rkt-minion-1pwy
[  450.801973] glbc[6]: I0406 19:01:39.007814       6 utils.go:85] Syncing spotter-rkt-minion-5iix
[  450.802256] glbc[6]: I0406 19:01:39.007845       6 utils.go:85] Syncing spotter-rkt-minion-vyhe
^C^[^[^]^]Container rkt-7b597d86-f53d-442f-af15-fbf0dfeba235 terminated by signal KILL.

With 1.3.0 from GitHub, it doesn't work:

spotter-rkt-minion-1pwy spotter # /opt/rkt1/bin/rkt run --pod-manifest manifest
image: using image from file /usr/lib/rkt/stage1-images/stage1-coreos.aci
networking: loading networks from /etc/rkt/net.d
networking: loading network default with type ptp
[  487.205206] glbc[5]: I0406 19:02:15.411107       5 main.go:150] Starting GLBC image: 0.5.2
[  487.205864] glbc[5]: F0406 19:02:15.411899       5 main.go:165] Failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory.

@sjpotter (Contributor)

sjpotter commented Apr 6, 2016

When I duplicate it on my host with 1.3.0 I get the same error, but I don't see it creating dirs in my host's /run/, so it doesn't seem like it's the same thing, but something new.

alban added a commit to kinvolk/rkt that referenced this pull request Apr 7, 2016
I tried to reproduce the issue mentioned in:
rkt#2290 (comment)

However, the tests pass fine for me. Nevertheless, this test is worth
adding.
@alban (Member)

alban commented Apr 7, 2016

@sjpotter I tried to reproduce the issue by adding a test in #2394. However, I haven't figured out the right reproduction steps, because my test passes.

Could you file a new issue for this? This PR is merged so we might forget the discussion here.

@sjpotter (Contributor)

sjpotter commented Apr 7, 2016

I'll reproduce it better once I finish seeing what happens with my current tests.

@euank (Member)

euank commented Apr 7, 2016

I'm looking into this as well; I was only able to reproduce it on a CoreOS alpha image, not my local machine. I'm kicking off a git bisect so I can better understand when this changed.

@euank (Member)

euank commented Apr 7, 2016

My reproduction no longer works; I realized that because I copied one of the templates for @sjpotter's instances to mimic his environment, it was joining his Kubernetes master and running additional pods. I think the dirty state I ended up in is required to reproduce this, and I have no clue how to get back into that state now that I've cleaned it.

@sjpotter (Contributor)

sjpotter commented Apr 7, 2016

I built 1.3 from scratch from the latest git (along with the db locking patch) and can't duplicate it anymore.
