unable to instantiate LXD container ver 5.8.1 #3283
Comments
Hello, we fixed a bug related to your specific setup, which is what happens here; however, that happens after the container fails to start. The log says LXD complains about missing the
I'm closing this since the fix has already been submitted. If you still experience the issue with the code at https://github.com/OpenNebula/one/tree/one-5.8, we can reopen it.
Hi, thank you for the fix. I have found the problem with '/sbin/init': the image from 'linuxcontainers.org' is in raw format, and I downloaded it into a qcow2 datastore, which messed up the internals of the file. I switched it back to shared mode and set LXD_SECURITY_PRIVILEGED to "false". For some reason, I am still experiencing problems deploying the container. Do you think there might be some problem with my LXD configuration?
Template:
Sunstone log:
Mon Apr 29 16:53:23 2019 [Z0][VMM][I]: ExitCode: 1
Mon Apr 29 16:53:23 2019 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Mon Apr 29 16:53:23 2019 [Z0][VMM][E]: Error deploying virtual machine
Mon Apr 29 16:53:23 2019 [Z0][VM][I]: New LCM state is BOOT_FAILURE
lxc log
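A quick way to double-check the format mismatch described in this comment is to inspect the image with qemu-img. This is only a minimal sketch: the path is taken from the deployment log further down in this report, not from a known-good setup.

# Inspect the disk as seen by the node; qemu-img prints the detected format.
# A rootfs image from linuxcontainers.org should report "file format: raw".
qemu-img info /var/lib/one/datastores/108/19/disk.0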
Well, the marketplace images are meant to be run with
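For reference, privileged mode is controlled per VM by the LXD_SECURITY_PRIVILEGED template attribute (the same attribute mentioned in the previous comment). A minimal sketch of switching it on from the CLI; the template ID and file name are placeholders, not values from this issue:

# Append the attribute to an existing VM template (template ID 0 is a placeholder).
cat > lxd_privileged.txt <<'EOF'
LXD_SECURITY_PRIVILEGED = "true"
EOF
onetemplate update --append 0 lxd_privileged.txt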
Hmm, could you please advise what I should do if I need to run an unprivileged container?
Well, right now the only choice would be to create your own image, since we are facing this issue.
Thank you very much. Looking forward to the next release :)
Description
I have followed the official documentation step by step to install the front-end and a compute node for LXD, but I am not able to instantiate an LXD container.
To Reproduce
Front end:
ubuntu 18.04
opennebula 5.8.1
host file system: btrfs
Compute node:
ubuntu 18.04
opennebula 5.8.1
LXD 3.0 apt package
host file system: btrfs
storage backend:
NFS
datastores :
file datastore type: shared mode
image datastore type: qcow2
system datastore type: qcow2
Each datastore under '/var/lib/one//datastores/' is symbolically linked to a directory created under '/mnt/NFS/', and the ownership of each directory is set to oneadmin (see the sketch below).
In Sunstone, the capacities of the datastores are displayed correctly.
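A minimal sketch of the symlinked datastore layout described above; the datastore IDs are illustrative (108 appears in the logs below, 109 is made up), and only the symlink-plus-chown pattern comes from this report:

# Back each datastore with a directory on the NFS mount, owned by oneadmin,
# and link it into the default OpenNebula datastores location.
mkdir -p /mnt/NFS/108 /mnt/NFS/109
chown -R oneadmin:oneadmin /mnt/NFS/108 /mnt/NFS/109
ln -s /mnt/NFS/108 /var/lib/one/datastores/108
ln -s /mnt/NFS/109 /var/lib/one/datastores/109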
LXD image:
'ubuntu_bionic - LXD' downloaded from 'linux containers'
I tried to instantiate the container from the 'ubuntu_bionic - LXD' image with all default values and without a NIC (CLI sketch below).
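The same image pull and instantiation can be reproduced from the CLI; a hedged sketch, where the marketplace app name is taken from this report but the datastore ID and template name are placeholders:

# Export the LXD app from the marketplace into the image datastore (ID 1 is a placeholder).
onemarketapp export 'ubuntu_bionic - LXD' ubuntu_bionic --datastore 1
# Instantiate it with default values and no NIC, as described above.
onetemplate instantiate ubuntu_bionic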
I am not sure if this is a bug or an incorrect configuration of directory permissions.
Expected behavior
A successful instantiation of the LXD container.
Details
Additional context
Add any other context about the problem here.
deployment log:
Sun Apr 28 21:54:04 2019 [Z0][VM][I]: New state is ACTIVE
Sun Apr 28 21:54:04 2019 [Z0][VM][I]: New LCM state is PROLOG
Sun Apr 28 21:54:07 2019 [Z0][VM][I]: New LCM state is BOOT
Sun Apr 28 21:54:07 2019 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/19/deployment.0
Sun Apr 28 21:54:09 2019 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Sun Apr 28 21:54:09 2019 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/108/19/deployment.0' 'compute2' 19 compute2
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/108/19/disk.0
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/lxd/storage-pools/default/containers/one-19/rootfs using device /dev/nbd0
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Mounting /dev/nbd0 at /var/lib/lxd/storage-pools/default/containers/one-19/rootfs
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Mapping disk at /mnt/NFS/108/19/mapper/disk.1 using device /dev/loop3
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Mounting /dev/loop3 at /mnt/NFS/108/19/mapper/disk.1
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/108/19/disk.0
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: deploy: Unmapping disk at /var/lib/lxd/storage-pools/default/containers/one-19/rootfs
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/mapper.rb:203:in `realpath': Permission denied @ realpath_rec - /var/lib/lxd/storage-pools/default/containers/one-19/rootfs (Errno::EACCES)
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/mapper.rb:203:in `unmap'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:359:in `public_send'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:359:in `setup_disk'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:241:in `block in setup_storage'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:234:in `each'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/container.rb:234:in `setup_storage'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/deploy:78:in `rescue in '
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: from /var/tmp/one/vmm/lxd/deploy:74:in `'
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: ExitCode: 1
Sun Apr 28 21:54:13 2019 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Sun Apr 28 21:54:13 2019 [Z0][VMM][E]: Error deploying virtual machine
Sun Apr 28 21:54:13 2019 [Z0][VM][I]: New LCM state is BOOT_FAILURE
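For anyone debugging a similar failure, the mapper steps shown in this log can be retraced by hand. This is only a rough sketch under the assumption that the qcow2 mapper wraps qemu-nbd; the paths are copied from the log, the commands are not the driver's actual code:

# Map and mount the qcow2 system disk the way the log describes.
sudo modprobe nbd
sudo qemu-nbd --connect=/dev/nbd0 /var/lib/one/datastores/108/19/disk.0
sudo mount /dev/nbd0 /var/lib/lxd/storage-pools/default/containers/one-19/rootfs
# The Errno::EACCES above is raised by realpath on the rootfs path; if the driver
# runs as oneadmin (the usual VMM driver user), every path component must be
# traversable by that user, which namei makes easy to verify.
sudo -u oneadmin namei -l /var/lib/lxd/storage-pools/default/containers/one-19/rootfs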
lxc.log:
lxc one-19 20190428195412.755 WARN conf - conf.c:lxc_setup_devpts:1616 - Invalid argument - Failed to unmount old devpts instance
lxc one-19 20190428195412.784 ERROR start - start.c:start:2028 - No such file or directory - Failed to exec "/sbin/init"
lxc one-19 20190428195412.784 ERROR sync - sync.c:__sync_wait:62 - An error occurred in another process (expected sequence number 7)
lxc one-19 20190428195412.784 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:842 - Received container state "ABORTING" instead of "RUNNING"
lxc one-19 20190428195412.785 ERROR start - start.c:__lxc_start:1939 - Failed to spawn container "one-19"
lxc 20190428195412.786 WARN commands - commands.c:lxc_cmd_rsp_recv:132 - Connection reset by peer - Failed to receive response for command "get_state"
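A quick follow-up check on the "Failed to exec /sbin/init" error; the path is taken from the log and this assumes the rootfs is still mounted (e.g. via the manual mapping sketch above):

# If the disk really contained a usable raw rootfs, the guest init should exist here.
ls -l /var/lib/lxd/storage-pools/default/containers/one-19/rootfs/sbin/init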