stage1: Temporary fix for symlink mount issue. #2290
Conversation
temporal -> temporary
@@ -565,6 +565,50 @@ func PodToSystemd(p *stage1commontypes.Pod, interactive bool, flavor string, pri
 	return nil
 }

+// evaluateAppMountPath tries to resolve symlinks within the path.
+// It returns the actual relative path for the given path.
+// TODO(yifan): This is a temporal fix for systemd-nspawn not handling symlink mounts well.
Force-pushed from 5010f43 to ce5d7d0
@@ -565,6 +565,54 @@ func PodToSystemd(p *stage1commontypes.Pod, interactive bool, flavor string, pri
 	return nil
 }

+// evaluateAppMountPath tries to resolve symlinks within the path.
+// It returns the actual relative path for the given path.
+// TODO(yifan): This is a temporary fix for systemd-nspawn not handling symlink mounts well.
Is there an upstream systemd issue we can reference?
@alban @jonboulle Added the reference to the issue.
appRootfs := common.AppRootfsPath(absRoot, appName)
mntPath, err := evaluateAppMountPath(appRootfs, m.Path)
if err != nil {
	return nil, fmt.Errorf("could not evaluate path %v: %v", m.Path, err)
errwrap?
How about we fix the …

That's ok for me.

OK. The rest LGTM so let's merge this.
stage1: Temporary fix for symlink mount issue.
Follow-up: #2298
This is an issue again. On my host:

In the container:

Investigating what happened.

Yeah, something in 1.3.0 (possibly docker volume support?) broke this, as yifan's 1.2.1 build with his fix works fine, but 1.3.0 pulled off github does not.

@sjpotter what …
{
"acVersion": "0.7.4+git",
"acKind": "PodManifest",
"apps": [{
"name": "default-http-backend",
"image": {
"id": "sha512-d8bdd604d73976278dd46436602388d6edde3ceb6309e9cf88c28d486ba6617b"
},
"app": {
"exec": ["/server"],
"user": "0",
"group": "0",
"environment": [{
"name": "HEAPSTER_PORT_80_TCP_PORT",
"value": "80"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PORT",
"value": "80"
}, {
"name": "PATH",
"value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
}, {
"name": "KUBERNETES_PORT_443_TCP_PORT",
"value": "443"
}, {
"name": "HEAPSTER_SERVICE_HOST",
"value": "10.0.24.212"
}, {
"name": "KUBERNETES_PORT",
"value": "tcp://10.0.0.1:443"
}, {
"name": "KUBE_DNS_SERVICE_PORT_DNS_TCP",
"value": "53"
}, {
"name": "KUBE_DNS_PORT_53_UDP_PROTO",
"value": "udp"
}, {
"name": "KUBERNETES_SERVICE_HOST",
"value": "10.0.0.1"
}, {
"name": "KUBERNETES_SERVICE_PORT",
"value": "443"
}, {
"name": "KUBERNETES_PORT_443_TCP_ADDR",
"value": "10.0.0.1"
}, {
"name": "KUBERNETES_DASHBOARD_SERVICE_PORT",
"value": "80"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP",
"value": "tcp://10.0.217.18:80"
}, {
"name": "KUBE_DNS_SERVICE_PORT",
"value": "53"
}, {
"name": "HEAPSTER_PORT_80_TCP_PROTO",
"value": "tcp"
}, {
"name": "KUBE_DNS_PORT_53_TCP_ADDR",
"value": "10.0.0.10"
}, {
"name": "DEFAULT_HTTP_BACKEND_SERVICE_HOST",
"value": "10.0.107.21"
}, {
"name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT",
"value": "80"
}, {
"name": "KUBERNETES_PORT_443_TCP_PROTO",
"value": "tcp"
}, {
"name": "KUBE_DNS_PORT_53_UDP_PORT",
"value": "53"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP_ADDR",
"value": "10.0.217.18"
}, {
"name": "KUBE_DNS_PORT_53_TCP_PORT",
"value": "53"
}, {
"name": "HEAPSTER_PORT_80_TCP_ADDR",
"value": "10.0.24.212"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PROTO",
"value": "tcp"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT",
"value": "tcp://10.0.107.21:80"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP",
"value": "tcp://10.0.107.21:80"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PORT",
"value": "80"
}, {
"name": "KUBE_DNS_PORT_53_TCP_PROTO",
"value": "tcp"
}, {
"name": "HEAPSTER_PORT_80_TCP",
"value": "tcp://10.0.24.212:80"
}, {
"name": "KUBERNETES_SERVICE_PORT_HTTPS",
"value": "443"
}, {
"name": "KUBE_DNS_PORT_53_TCP",
"value": "tcp://10.0.0.10:53"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PROTO",
"value": "tcp"
}, {
"name": "KUBERNETES_PORT_443_TCP",
"value": "tcp://10.0.0.1:443"
}, {
"name": "KUBE_DNS_PORT_53_UDP_ADDR",
"value": "10.0.0.10"
}, {
"name": "KUBERNETES_DASHBOARD_SERVICE_HOST",
"value": "10.0.217.18"
}, {
"name": "KUBE_DNS_SERVICE_HOST",
"value": "10.0.0.10"
}, {
"name": "KUBE_DNS_PORT",
"value": "udp://10.0.0.10:53"
}, {
"name": "HEAPSTER_SERVICE_PORT",
"value": "80"
}, {
"name": "KUBE_DNS_PORT_53_UDP",
"value": "udp://10.0.0.10:53"
}, {
"name": "HEAPSTER_PORT",
"value": "tcp://10.0.24.212:80"
}, {
"name": "KUBERNETES_DASHBOARD_PORT",
"value": "tcp://10.0.217.18:80"
}, {
"name": "KUBE_DNS_SERVICE_PORT_DNS",
"value": "53"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_ADDR",
"value": "10.0.107.21"
}, {
"name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT_HTTP",
"value": "80"
}],
"mountPoints": [{
"name": "default-token-601vm",
"path": "/var/run/secrets/kubernetes.io/serviceaccount",
"readOnly": true
}, {
"name": "termination-message-96427d8b-fc29-11e5-9686-42010af00005",
"path": "/dev/termination-log"
}],
"ports": [{
"name": "default-http-backend-tcp-8080",
"protocol": "TCP",
"port": 8080,
"count": 1,
"socketActivated": false
}],
"isolators": [{
"name": "resource/cpu",
"value": {
"default": false,
"request": "10m",
"limit": "10m"
}
}, {
"name": "resource/memory",
"value": {
"default": false,
"request": "20Mi",
"limit": "20Mi"
}
}]
},
"annotations": [{
"name": "io.kubernetes.container.hash",
"value": "2473234419"
}, {
"name": "io.kubernetes.container.termination-message-path",
"value": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/default-http-backend/96427d8b-fc29-11e5-9686-42010af00005"
}]
}, {
"name": "l7-lb-controller",
"image": {
"id": "sha512-598357eb4a7d964760c92043e4daaab8765327d5f9a4b6cfc8f62b692d9dcbcf"
},
"app": {
"exec": ["/glbc", "--default-backend-service=kube-system/default-http-backend", "--sync-period=300s"],
"user": "0",
"group": "0",
"environment": [{
"name": "KUBE_DNS_SERVICE_PORT",
"value": "53"
}, {
"name": "KUBE_DNS_PORT_53_TCP_ADDR",
"value": "10.0.0.10"
}, {
"name": "KUBERNETES_PORT_443_TCP",
"value": "tcp://10.0.0.1:443"
}, {
"name": "KUBERNETES_DASHBOARD_PORT",
"value": "tcp://10.0.217.18:80"
}, {
"name": "DEFAULT_HTTP_BACKEND_SERVICE_HOST",
"value": "10.0.107.21"
}, {
"name": "HEAPSTER_SERVICE_PORT",
"value": "80"
}, {
"name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT",
"value": "80"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PROTO",
"value": "tcp"
}, {
"name": "KUBERNETES_PORT_443_TCP_PROTO",
"value": "tcp"
}, {
"name": "KUBE_DNS_PORT_53_TCP_PROTO",
"value": "tcp"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PORT",
"value": "80"
}, {
"name": "KUBE_DNS_SERVICE_PORT_DNS_TCP",
"value": "53"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT",
"value": "tcp://10.0.107.21:80"
}, {
"name": "KUBERNETES_SERVICE_HOST",
"value": "10.0.0.1"
}, {
"name": "DEBIAN_FRONTEND",
"value": "noninteractive"
}, {
"name": "KUBERNETES_SERVICE_PORT_HTTPS",
"value": "443"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_ADDR",
"value": "10.0.107.21"
}, {
"name": "HEAPSTER_PORT",
"value": "tcp://10.0.24.212:80"
}, {
"name": "KUBERNETES_PORT",
"value": "tcp://10.0.0.1:443"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP",
"value": "tcp://10.0.217.18:80"
}, {
"name": "KUBE_DNS_PORT_53_TCP_PORT",
"value": "53"
}, {
"name": "HEAPSTER_PORT_80_TCP_PORT",
"value": "80"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP_PROTO",
"value": "tcp"
}, {
"name": "HEAPSTER_PORT_80_TCP_PROTO",
"value": "tcp"
}, {
"name": "DEFAULT_HTTP_BACKEND_SERVICE_PORT_HTTP",
"value": "80"
}, {
"name": "HEAPSTER_PORT_80_TCP_ADDR",
"value": "10.0.24.212"
}, {
"name": "KUBERNETES_PORT_443_TCP_ADDR",
"value": "10.0.0.1"
}, {
"name": "KUBE_DNS_PORT_53_UDP_ADDR",
"value": "10.0.0.10"
}, {
"name": "KUBE_DNS_PORT_53_TCP",
"value": "tcp://10.0.0.10:53"
}, {
"name": "KUBE_DNS_SERVICE_PORT_DNS",
"value": "53"
}, {
"name": "KUBERNETES_DASHBOARD_SERVICE_HOST",
"value": "10.0.217.18"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP_ADDR",
"value": "10.0.217.18"
}, {
"name": "KUBE_DNS_PORT_53_UDP_PORT",
"value": "53"
}, {
"name": "KUBE_DNS_PORT_53_UDP",
"value": "udp://10.0.0.10:53"
}, {
"name": "DEFAULT_HTTP_BACKEND_PORT_80_TCP",
"value": "tcp://10.0.107.21:80"
}, {
"name": "KUBE_DNS_SERVICE_HOST",
"value": "10.0.0.10"
}, {
"name": "KUBE_DNS_PORT_53_UDP_PROTO",
"value": "udp"
}, {
"name": "PATH",
"value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
}, {
"name": "HEAPSTER_PORT_80_TCP",
"value": "tcp://10.0.24.212:80"
}, {
"name": "KUBERNETES_DASHBOARD_PORT_80_TCP_PORT",
"value": "80"
}, {
"name": "KUBERNETES_SERVICE_PORT",
"value": "443"
}, {
"name": "KUBERNETES_DASHBOARD_SERVICE_PORT",
"value": "80"
}, {
"name": "KUBE_DNS_PORT",
"value": "udp://10.0.0.10:53"
}, {
"name": "KUBERNETES_PORT_443_TCP_PORT",
"value": "443"
}, {
"name": "HEAPSTER_SERVICE_HOST",
"value": "10.0.24.212"
}],
"mountPoints": [{
"name": "default-token-601vm",
"path": "/var/run/secrets/kubernetes.io/serviceaccount",
"readOnly": true
}, {
"name": "termination-message-96447821-fc29-11e5-9686-42010af00005",
"path": "/dev/termination-log"
}],
"isolators": [{
"name": "resource/cpu",
"value": {
"default": false,
"request": "100m",
"limit": "100m"
}
}, {
"name": "resource/memory",
"value": {
"default": false,
"request": "50Mi",
"limit": "100Mi"
}
}]
},
"annotations": [{
"name": "io.kubernetes.container.hash",
"value": "1429177749"
}, {
"name": "io.kubernetes.container.termination-message-path",
"value": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/l7-lb-controller/96447821-fc29-11e5-9686-42010af00005"
}]
}],
"volumes": [{
"name": "termination-message-96427d8b-fc29-11e5-9686-42010af00005",
"kind": "host",
"source": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/default-http-backend/96427d8b-fc29-11e5-9686-42010af00005"
}, {
"name": "termination-message-96447821-fc29-11e5-9686-42010af00005",
"kind": "host",
"source": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/containers/l7-lb-controller/96447821-fc29-11e5-9686-42010af00005"
}, {
"name": "default-token-601vm",
"kind": "host",
"source": "/var/lib/kubelet/pods/3c281b63-fc29-11e5-adc0-42010af00002/volumes/kubernetes.io~secret/default-token-601vm"
}],
"isolators": null,
"annotations": [{
"name": "io.kubernetes.pod.managed-by-kubelet",
"value": "true"
}, {
"name": "io.kubernetes.pod.uid",
"value": "3c281b63-fc29-11e5-adc0-42010af00002"
}, {
"name": "io.kubernetes.pod.name",
"value": "l7-lb-controller-v0.5.2-i1jjh"
}, {
"name": "io.kubernetes.pod.namespace",
"value": "kube-system"
}, {
"name": "io.kubernetes.container.created",
"value": "1459969124"
}, {
"name": "io.kubernetes.container.restart-count",
"value": "1"
}],
"ports": [{
"name": "default-http-backend-tcp-8080",
"hostPort": 0
}]
}

Duplicating it with yifan's 1.2.1 patched rkt, it works; with 1.3.0 from github it doesn't work.
When I duplicate it on my host with 1.3.0 I get the same error, but I don't see it creating dirs in my host's /run/, so it doesn't seem like it's the same thing, but something new.
I tried to reproduce the issue mentioned in rkt#2290 (comment); however, the tests pass fine for me. Nevertheless, this test is worth adding.
Will reproduce better once I finish seeing what happens with my current tests.
I'm looking into this as well; I was only able to reproduce on a coreos alpha image, not my local machine. I'm kicking off a git bisect so I can better understand when this changed.
My reproduction no longer works; I realized that because I copied one of the templates for @sjpotter's instances to mimic his environment, it was joining his kubernetes master and running additional pods. I think the dirty state I ended up in is required to reproduce this, and I have no clue how to get back into that state now that I've cleaned it.
I built 1.3 from scratch from the latest git (along with the db locking patch) and can't duplicate it anymore.
nspawn doesn't handle a mount operation correctly if the container path contains a symlink.
This tries to fix the problem in rkt by evaluating the mount path and resolving any symlinks before passing the mount path to nspawn.
cc @alban @iaguis @jonboulle @sjpotter