
minikube mount fails with Bad Address #16016

Closed
sshort opened this issue Mar 10, 2023 · 14 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sshort

sshort commented Mar 10, 2023

What Happened?

Attempting to mount a local user folder from my Fedora 37 VM into the minikube container with:

minikube --alsologtostderr mount /home/steve/dev/k/localstack/ready.d:/ready.d

Results in

mount: /ready.d: mount(2) system call failed: Bad address.

logs.txt output attached below

stderr output:

I0310 10:15:56.905854 16134 out.go:296] Setting OutFile to fd 1 ...
I0310 10:15:56.906014 16134 out.go:348] isatty.IsTerminal(1) = true
I0310 10:15:56.906030 16134 out.go:309] Setting ErrFile to fd 2...
I0310 10:15:56.906036 16134 out.go:348] isatty.IsTerminal(2) = true
I0310 10:15:56.906106 16134 root.go:334] Updating PATH: /home/steve/.minikube/bin
I0310 10:15:56.906278 16134 mustload.go:65] Loading cluster: minikube
I0310 10:15:56.906531 16134 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0310 10:15:56.906885 16134 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0310 10:15:56.972775 16134 host.go:66] Checking if "minikube" exists ...
I0310 10:15:56.973309 16134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0310 10:15:57.076339 16134 info.go:266] docker info: {ID:CCCC:NQKR:YDEH:R2KG:W6NP:D4PZ:RZX4:MWVR:PFMI:RZZX:MO5M:SZAN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:801 Driver:btrfs DriverStatus:[[Btrfs ]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:false NGoroutines:40 SystemTime:2023-03-10 10:15:57.068017369 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.1.14-200.fc37.x86_64 OperatingSystem:Fedora Linux 37 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://devnexus.engineering.clearswift.org:8091/] Secure:true Official:true}} Mirrors:[https://devnexus.engineering.clearswift.org:8091/]} NCPU:4 MemTotal:16340000768 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:fedora Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:}}
I0310 10:15:57.076531 16134 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0310 10:15:57.133380 16134 out.go:177]  Mounting host path /home/steve/dev/k/localstack/ready.d into VM as /ready.d ...
 Mounting host path /home/steve/dev/k/localstack/ready.d into VM as /ready.d ...
I0310 10:15:57.133475 16134 out.go:177] ▪ Mount type:
▪ Mount type:
I0310 10:15:57.133514 16134 out.go:177] ▪ User ID: docker
▪ User ID: docker
I0310 10:15:57.133591 16134 out.go:177] ▪ Group ID: docker
▪ Group ID: docker
I0310 10:15:57.133657 16134 out.go:177] ▪ Version: 9p2000.L
▪ Version: 9p2000.L
I0310 10:15:57.133707 16134 out.go:177] ▪ Message Size: 262144
▪ Message Size: 262144
I0310 10:15:57.133752 16134 out.go:177] ▪ Options: map[]
▪ Options: map[]
I0310 10:15:57.133797 16134 out.go:177] ▪ Bind Address: 192.168.49.1:43041
▪ Bind Address: 192.168.49.1:43041
I0310 10:15:57.133864 16134 out.go:177]  Userspace file server:
 Userspace file server: I0310 10:15:57.133904 16134 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /ready.d | grep /ready.d)" != "x" ] && sudo umount -f /ready.d || echo "
ufs starting
I0310 10:15:57.133979 16134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0310 10:15:57.206056 16134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/steve/.minikube/machines/minikube/id_rsa Username:docker}
I0310 10:15:57.296932 16134 mount.go:168] unmount for /ready.d ran successfully
I0310 10:15:57.296980 16134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /ready.d"
I0310 10:15:57.304618 16134 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43041,trans=tcp,version=9p2000.L 192.168.49.1 /ready.d"
I0310 10:15:59.320991 16134 ssh_runner.go:235] Completed: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43041,trans=tcp,version=9p2000.L 192.168.49.1 /ready.d": (2.016306594s)
I0310 10:15:59.321684 16134 out.go:177]

W0310 10:15:59.321934 16134 out.go:239] ❌ Exiting due to GUEST_MOUNT: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43041,trans=tcp,version=9p2000.L 192.168.49.1 /ready.d" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43041,trans=tcp,version=9p2000.L 192.168.49.1 /ready.d": Process exited with status 32
stdout:

stderr:
mount: /ready.d: mount(2) system call failed: Bad address.

❌ Exiting due to GUEST_MOUNT: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43041,trans=tcp,version=9p2000.L 192.168.49.1 /ready.d" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43041,trans=tcp,version=9p2000.L 192.168.49.1 /ready.d": Process exited with status 32
stdout:

stderr:
mount: /ready.d: mount(2) system call failed: Bad address.

W0310 10:15:59.321991 16134 out.go:239]

W0310 10:15:59.325004 16134 out.go:239]
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│  If the above advice does not help, please let us know: │
│  https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ Please also attach the following file to the GitHub issue: │
│ - /tmp/minikube_mount_365fe90b8ff07052dfd9bceadf95c07c3266b437_0.log │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Attach the log file

logs.txt

Operating System

Redhat/Fedora

Driver

Docker

@pjnssn

pjnssn commented Mar 10, 2023

Having the exact same issue on Fedora 37. It doesn't matter which driver etc. I use.

@afbjorklund
Collaborator

afbjorklund commented Mar 11, 2023

It seems something is different between Fedora and Ubuntu.

@afbjorklund
Collaborator

afbjorklund commented Mar 11, 2023

One thing to try would be a different --mount-9p-version

Like 9p2000.u (it was upgraded to 9p2000.L in #3796)
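
For anyone who wants to try that, a rough sketch of the invocation, using the path from the original report (flag name as listed by minikube mount --help in recent releases, where it is spelled --9p-version; the minikube start equivalent is --mount-9p-version):

minikube mount --9p-version=9p2000.u /home/steve/dev/k/localstack/ready.d:/ready.d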

@afbjorklund afbjorklund added the kind/bug Categorizes issue or PR as related to a bug. label Mar 11, 2023
@pjnssn

pjnssn commented Mar 11, 2023

One thing to try would be a different --mount-9p-version

Like 9p2000.u (it was upgraded to 9p2000.L in #3796)

Hi, I tried with version 9p2000.u but got the same result

@pjnssn

pjnssn commented Mar 11, 2023

I just tried running Fedora 36 in a VM, and mounting works there, so something that changed between Fedora 36 and 37 seems to be causing the issue.

@afbjorklund
Collaborator

afbjorklund commented Mar 11, 2023

Probably some kernel change in Fedora then, one that affected the 9p filesystem... Since the docker-in-docker driver inherits the host kernel, it is that kernel doing the mount.

EDIT: I think Fedora did a massive kernel upgrade there, from Linux 5 to Linux 6.
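
A quick way to check that assumption is to compare the host kernel with what the minikube node sees; with the docker driver the two should match (plain standard commands, nothing minikube-specific assumed beyond minikube ssh):

uname -r                   # kernel on the Fedora host
minikube ssh -- uname -r   # kernel visible inside the minikube node container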

@afbjorklund
Collaborator

All protocol errors (read/write) seem to be returning EFAULT, so I guess the underlying error could be anything regarding the 9p transport?
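
If anyone hits this again, the node's kernel log may show the actual 9p/transport error behind the EFAULT; a possible check (assuming dmesg is accessible inside the node):

minikube ssh -- sudo dmesg | grep -i 9p   # the grep filters on the host side, which is fine here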

@pjnssn

pjnssn commented Mar 11, 2023

Probably some kernel change in Fedora then, that affected the 9p filesystem... Since the docker-in-docker driver inherits the kernel, it is doing the mount.

EDIT: I think it did a massive upgrade, from Linux 5 to Linux 6.

Ended up going back to Fedora 36 as a short-term solution for myself.

Even with the 6.1.15 kernel (6.1.15-100.fc36.x86_64) it works on F36. I don't see what else it could be other than something related to 9p: the troubleshooting I did while still on F37 involved using different minikube drivers (docker and kvm2) and even testing the underlying filesystems (different docker drivers as well as completely different filesystems, both BTRFS and EXT4, with and without LUKS), and none of them worked on F37.

Edit: Oh, and when running 'minikube start --mount' it works. Correct me if I'm wrong, but that uses a different method than the 9p mount done by 'minikube mount'.
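
For reference, a sketch of that workaround as a one-liner, reusing the path from the original report (flag names as listed by minikube start --help):

minikube start --mount --mount-string="/home/steve/dev/k/localstack/ready.d:/ready.d"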

@pjnssn

pjnssn commented Apr 8, 2023

Seems to be working now with minikube 1.30.0 and 1.30.1 on Fedora 37.

@sshort
Author

sshort commented Apr 11, 2023

I can confirm that this works for me with minikube 1.30.1 on Fedora 37.
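
For anyone checking whether they have picked up a fixed release, a quick way to verify (standard CLI, nothing assumed beyond minikube being on PATH):

minikube version   # should report v1.30.1 or newer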

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Feb 18, 2024