
9P and/or virtio-fs support #26

Open
glance- opened this issue Oct 24, 2022 · 8 comments

glance- commented Oct 24, 2022

I'd love to see support for either https://wiki.qemu.org/Documentation/9psetup or https://virtio-fs.gitlab.io/

Virtio-fs is a bit more modern than 9p, but it's a little more complex because it uses an external daemon for its filesystem operations rather than having QEMU do that part directly.
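Roughly, a minimal virtio-fs setup outside of Vagrant looks like this (a sketch based on the linked docs; the shared path and tag are illustrative, and the exact virtiofsd flags differ between its C and Rust implementations):

# Host: start the virtio-fs daemon for the directory to share (Rust virtiofsd syntax)
virtiofsd --socket-path=/tmp/vhostqemu --shared-dir=/host/share &

# Host: attach QEMU to the daemon's socket; virtio-fs requires a shared memory backend
qemu-system-x86_64 \
  -m 4G \
  -chardev socket,id=char0,path=/tmp/vhostqemu \
  -device vhost-user-fs-pci,chardev=char0,tag=hostshare \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -numa node,memdev=mem \
  ...

# Guest: mount the export by its tag
mount -t virtiofs hostshare /mnt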

ppggff (Owner) commented Oct 25, 2022

I will try them later.

glance- (Author) commented Oct 27, 2022

Even the user-mode SMB support might be interesting for sharing files from the host to the VM.
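For reference, QEMU's user-mode networking can export a host directory over SMB when Samba's smbd is installed on the host; a rough sketch (the share path is illustrative):

# Host: export /host/share via the built-in user-net SMB support (requires smbd)
qemu-system-x86_64 -nic user,smb=/host/share ...

# Guest: the share appears on the user-net gateway as //10.0.2.4/qemu (needs cifs-utils)
mount -t cifs //10.0.2.4/qemu /mnt -o guest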


leifliddy commented Apr 23, 2023

You can easily provide virtio-9p volumes via extra_qemu_args:

  cur_dir = __dir__

  config.vm.synced_folder '.', '/vagrant', disabled: true

  qe.extra_qemu_args = "-fsdev local,id=vagrant_dev,path=#{cur_dir},security_model=mapped-xattr
                        -device virtio-9p-pci,fsdev=vagrant_dev,mount_tag=vagrant_mount".split

You do need to write the logic to mount the volumes yourself, though.
It's probably best to deploy a script during the provisioning process that edits /etc/fstab.
I'm working on that part now; a quick manual test is sketched below.
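For a quick manual test from inside the guest (using the mount_tag from the snippet above; trans=virtio selects the virtio transport):

sudo mkdir -p /vagrant
sudo mount -t 9p -o trans=virtio,version=9p2000.L,msize=104857600 vagrant_mount /vagrant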


leifliddy commented Apr 25, 2023

Here's how I sorted this out.

This config will create the following mounts within the VM:

 vagrant_mount  /vagrant   9p  version=9p2000.L,posixacl,msize=104857600,cache=none  0  0  
 salt_mount     /srv/salt  9p  version=9p2000.L,posixacl,msize=104857600,cache=none  0  0 
cur_dir = __dir__
fstab_add_script = File.join(cur_dir, 'fstab_add.py')

# directory on host system
salt_dir  = '/somedir/srv/salt'
# mount point within vm
salt_mount = '/srv/salt'


  qe.extra_qemu_args = "-fsdev local,id=vagrant_dev,path=#{cur_dir},security_model=mapped-xattr
                        -device virtio-9p-pci,fsdev=vagrant_dev,mount_tag=vagrant_mount
                        -fsdev local,id=salt_dev,path=#{salt_dir},security_model=mapped-xattr
                        -device virtio-9p-pci,fsdev=salt_dev,mount_tag=salt_mount".split

  config.vm.synced_folder cur_dir, '/vagrant', disabled: true
  config.vm.provision 'shell', path: fstab_add_script, args: ['vagrant_mount', '/vagrant']
  config.vm.provision 'shell', path: fstab_add_script, args: ['salt_mount', salt_mount]

The fstab_add.py script only requires the device and mountpoint args; if the others aren't given, the default values are used for fstype, options, dump, and passno.
Ideally, you should modify this script to suit your needs:

  parser.add_argument('device')
  parser.add_argument('mountpoint')
  parser.add_argument('fstype',  nargs='?', default='9p')    
  parser.add_argument('options', nargs='?', default='version=9p2000.L,posixacl,msize=104857600,cache=none')
  parser.add_argument('dump',    nargs='?', default='0')
  parser.add_argument('passno',  nargs='?', default='0') 

fstab_add.py will also create the mountpoint directory if it doesn't exist.


leifliddy commented Apr 25, 2023

Here are the contents of the fstab_add.py script.
This could be improved -- it's just a first draft that I hacked together. Note that it rebuilds /etc/fstab from the parsed entries, so comments and blank lines are dropped when the file is rewritten.

#!/usr/bin/python3

from typing import (
    NamedTuple, List
)

import argparse
import logging
import os
import subprocess
import sys


etc_fstab = '/etc/fstab'

log = logging.getLogger(__name__)

fstab_entry_type = NamedTuple(
    'fstab_entry_type', [
        ('fstype', str),
        ('mountpoint', str),
        ('device_spec', str),
        ('device_path', str),
        ('options', str),
        ('dump', str),
        ('fs_passno', str)
    ]
)


class Fstab:
    """
    **Managing fstab values**
    """
    def __init__(self):
        self.fstab = []


    def read(self, filename: str) -> None:
        """
        Import specified fstab file

        Read the given fstab file and initialize a new entry list

        :param string filename: path to a fstab file
        """
        self.fstab = []
        with open(filename) as fstab:
            for line in fstab.readlines():
                fstab_entry = line.split()
                self.add_entry(fstab_entry)


    def add_entry(self, fstab_entry: list, add_new_entry=False):
        new_entry = self._parse_entry(fstab_entry, add_new_entry=add_new_entry)
        if new_entry:
            for entry in self.fstab:
                if entry.mountpoint == new_entry.mountpoint:
                    log.warning(
                        'Mountpoint for "{0}" in use by "{1}", skipped'.format(
                            self._file_entry(new_entry),
                            self._file_entry(entry)
                        )
                    )
                    return

            self.fstab.append(new_entry)
            
            if add_new_entry:
                mountpoint_new = fstab_entry[1]
                return mountpoint_new


    def get_devices(self) -> List[fstab_entry_type]:
        return self.fstab


    def export(self, filename: str) -> None:
        """
        Export entries

        :param string filename: path to file name
        """

        with open(filename, 'w') as fstab:        
            for entry in self.fstab:
                fstab.write(
                    self._file_entry(entry) + os.linesep
                )        


    def export_pretty(self, filename: str) -> None:
        fstab_contents = []
        output = []

        for entry in self.fstab:
            row = [entry.device_spec, entry.mountpoint, entry.fstype, entry.options, entry.dump, entry.fs_passno, '\n']
            fstab_contents.append(row)

        col_width = [max(map(len, col)) for col in zip(*fstab_contents)]
        formatted_output = []

        for row in fstab_contents:
            formatted_output.append("  ".join((val.ljust(width) for val, width in zip(row, col_width))))

        with open(filename, 'w') as fstab:
            fstab.write(''.join(formatted_output))


    def _file_entry(self, entry):
        return '{0} {1} {2} {3} {4} {5}'.format(
            entry.device_spec, entry.mountpoint,
            entry.fstype, entry.options, entry.dump,
            entry.fs_passno
        )


    def _parse_entry(self, data_record, add_new_entry=False):
        data_length = len(data_record)
        if data_record and data_length >= 2 \
           and not data_record[0].startswith('#'):
            device = data_record[0]
            mountpoint = data_record[1]

            # fstab entries may omit the trailing fields; fall back to
            # defaults instead of raising IndexError on short entries
            fstype = data_record[2] if data_length > 2 else 'auto'
            options = data_record[3] if data_length > 3 else 'defaults'
            dump = data_record[4] if data_length > 4 else '0'
            fs_passno = data_record[5] if data_length > 5 else '0'

            if device.startswith('UUID'):
                device_path = ''.join(
                    ['/dev/disk/by-uuid/', device.split('=')[1]]
                )
            elif device.startswith('LABEL'):
                device_path = ''.join(
                    ['/dev/disk/by-label/', device.split('=')[1]]
                )
            elif device.startswith('PARTUUID'):
                device_path = ''.join(
                    ['/dev/disk/by-partuuid/', device.split('=')[1]]
                )
            else:
                device_path = device

            return fstab_entry_type(
                fstype=fstype,
                mountpoint=mountpoint,
                device_path=device_path,
                device_spec=device,
                options=options,
                dump=dump,
                fs_passno=fs_passno
            )


def create_dir(dir_path):
    if not os.path.isdir(dir_path):
        try:
            os.makedirs(dir_path)
            return True
        except Exception as e:
            log.error(f'Failed to create directory: {dir_path}\n{e}')
            sys.exit(1)


def mount(mountpoint):
    cmd_str = f'mount {mountpoint}'
    cmd = cmd_str.split()

    cmd_output = subprocess.run(cmd, universal_newlines=True)

    if cmd_output.returncode != 0:
        log.error(f'Error mounting {mountpoint}')
        sys.exit(2)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('device')
    parser.add_argument('mountpoint')
    parser.add_argument('fstype', nargs='?', default='9p')    
    parser.add_argument('options', nargs='?', default='version=9p2000.L,posixacl,msize=104857600,cache=none')
    parser.add_argument('dump', nargs='?', default='0')
    parser.add_argument('passno', nargs='?', default='0')        

    args = parser.parse_args()
    
    fstab_entry = [args.device, args.mountpoint, args.fstype, args.options, args.dump, args.passno]

    fstab = Fstab()
    fstab.read(etc_fstab)
    mountpoint_new = fstab.add_entry(fstab_entry, add_new_entry=True)
    if mountpoint_new:
        fstab.export_pretty(etc_fstab)
        create_dir(mountpoint_new)
        mount(mountpoint_new)
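As invoked by the shell provisioners above, a typical run inside the guest looks like this (fstype and options fall back to the 9p defaults; the script needs root to edit /etc/fstab and mount):

sudo python3 fstab_add.py vagrant_mount /vagrant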


zentavr commented Oct 4, 2023

@leifliddy The problem comes when you need something from that mounted folder later, e.g.:

Vagrant.configure("2") do |config|
   # some stuff here
   config.vm.provider 'qemu' do |qe, override|
     override.vm.box = $qe_box
     qe.arch = "x86_64"
     qe.machine = "q35"
     qe.net_device = "virtio-net-pci"
     qe.memory = $vm_mem
     qe.cpu = "Skylake-Server,+aes"
     qe.smp = "cpus=#{$vm_cpus},sockets=1,cores=#{$vm_cpus},threads=1"
     qe.no_daemonize = $vm_gui
     qe.qemu_dir = qemuSharedDir
     #qe.extra_qemu_args = %w(-accel hvf)

     # Inspired by: https://github.com/ppggff/vagrant-qemu/issues/26
     cur_dir = __dir__
     fstab_add_script = File.join(cur_dir, 'fstab_add.py')

     # Map host directory to /opt/build
     config.vm.synced_folder cur_dir, '/opt/build', disabled: true

     qe.extra_qemu_args= "-fsdev local,id=virtfs0,path=#{cur_dir},security_model=mapped-xattr
                          -device virtio-9p-pci,fsdev=virtfs0,mount_tag=vagrant_share".split

     # Invoke mount (use "mount_tag" value here) <-- Would be executed at the very latest
     config.vm.provision "fstab_vagrant_share", type: "shell",
       name: "fstab__vagrant_share",
       path: fstab_add_script,
       args: ["vagrant_share", '/opt/build']
  end

  #
  # Run Ansible from the Vagrant VM <-- this fails because there is no /opt/build so far
  #
  config.vm.provision "create_image", type: "ansible_local", run: "always" do |ansible|
    ansible.provisioning_path = "/opt/build"
    ansible.playbook          = "playbook.yml"
    #ansible.tags              = ""
    #ansible.skip_tags         = ""
    ansible.verbose           = "-vv"
    ansible.install           = true
    ansible.install_mode      = "pip"
    ansible.pip_install_cmd   = "curl -s https://bootstrap.pypa.io/get-pip.py | sudo python"
    ansible.version           = "2.9.27"
    # Drive type could be "mbr" or "gpt"
    ansible.extra_vars        = {
        image_path: "/opt/build/livecd",
        drive_type: "hybrid",
        burn_iso: true,
        burn_img: false
    }
  end

end

Probably the experimental dependency provisioners feature should be used: https://developer.hashicorp.com/vagrant/docs/provisioning/basic_usage#dependency-provisioners

leifliddy commented

@zentavr It should create the /opt/build directory during the provisioning process.
I would test that out with just vagrant itself before involving ansible.
If you run vagrant up --provision, does it create /opt/build?

def create_dir(dir_path):
    if not os.path.isdir(dir_path):
        try:
            os.makedirs(dir_path)
            return True
        except Exception as e:
            log.error(f'Failed to create directory: {dir_path}\n{e}')
            sys.exit(1)
.....
    if mountpoint_new:
        fstab.export_pretty(etc_fstab)
        create_dir(mountpoint_new)
        mount(mountpoint_new)            


zentavr commented Oct 5, 2023

@leifliddy I did a vagrant destroy first. Then:

export VAGRANT_EXPERIMENTAL="dependency_provisioners"
vagrant --qemu-shared-dir=/usr/local/Cellar/qemu/8.1.1/share/qemu up --provider qemu --provision

My whole Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'getoptlong'

$vm_mem = '5120'
$vm_cpus = '2'
$vm_gui = false
$vb_box = 'ubuntu/jammy64'
$qe_box = 'generic/ubuntu2204'
$docker_box = 'ubuntu:jammy'
$vm_name = 'resecs-livecd'
#qemuSharedDir='/opt/homebrew/share/qemu'
qemuSharedDir=''

# https://stackoverflow.com/a/35777091/315044
# https://ruby-doc.org/stdlib-2.1.0/libdoc/getoptlong/rdoc/GetoptLong.html
opts = GetoptLong.new(
  [ '--qemu-shared-dir', '-S', GetoptLong::REQUIRED_ARGUMENT ], # With required parameter.
)
opts.ordering = GetoptLong::REQUIRE_ORDER

opts.each do |opt, arg|
  case opt
    when '--qemu-shared-dir'
      puts '--qemu-shared-dir accepted. Setting up QEMU shared dir.'
      qemuSharedDir=arg
  end
end

$script = <<-'SCRIPT'
#!/usr/bin/env bash

sudo rm -rf /usr/bin/python
sudo ln -s /usr/bin/python3 /usr/bin/python
sudo /usr/bin/python -V

echo "Installing python3-distutils"
sudo apt-get install -y python3-distutils

echo "Downloading pip"
curl -s https://bootstrap.pypa.io/get-pip.py | sudo python

echo "Done."
SCRIPT

Vagrant.configure("2") do |config|
   config.vm.box = $vb_box
   config.vm.hostname = $vm_name

   config.vm.provision "fstab_vagrant_share", type: "shell",
     preserve_order: true,
     path: "dummy.sh"

   config.vm.provider 'qemu' do |qe, override|
     override.vm.box = $qe_box
     qe.arch = "x86_64"
     qe.machine = "q35"
     qe.net_device = "virtio-net-pci"
     qe.memory = $vm_mem
     qe.cpu = "Skylake-Server,+aes"
     qe.smp = "cpus=#{$vm_cpus},sockets=1,cores=#{$vm_cpus},threads=1"
     qe.no_daemonize = $vm_gui
     qe.qemu_dir = qemuSharedDir
     #qe.extra_qemu_args = %w(-accel hvf)

     # Inspired by: https://github.com/ppggff/vagrant-qemu/issues/26
     cur_dir = __dir__
     fstab_add_script = File.join(cur_dir, 'fstab_add.py')

     # Map host directory to /opt/build
     config.vm.synced_folder cur_dir, '/opt/build', disabled: true

     qe.extra_qemu_args= "-fsdev local,id=virtfs0,path=#{cur_dir},security_model=mapped-xattr
                          -device virtio-9p-pci,fsdev=virtfs0,mount_tag=vagrant_share".split

     # Invoke mount (use "mount_tag" value here)
     override.vm.provision "fstab_vagrant_share", type: "shell",
       name: "fstab__vagrant_share",
       path: fstab_add_script,
       args: ["vagrant_share", '/opt/build']

  end

   config.vm.provider 'virtualbox' do |vb|
    vb.memory = $vm_mem
    vb.cpus = $vm_cpus
    vb.gui = $vm_gui
    vb.name = $vm_name

    # Map host directory to /opt/build
    #config.vm.synced_folder '.', '/opt/build', create: true, type: 'virtualbox', disabled: true
    config.vm.synced_folder '.', '/opt/build', create: true, type: 'virtualbox'
  end

  config.vm.provider "docker" do |d|
    d.image = $docker_box
    d.has_ssh = true
    d.volumes = [
      "./:/opt/build,rw"
    ]
  end

  config.vm.provision "shell", name: 'python__install', inline: $script

  #
  # Run Ansible from the Vagrant VM
  #
  config.vm.provision "create_image", type: "ansible_local", after: "fstab_vagrant_share", run: "always" do |ansible|
    ansible.provisioning_path = "/opt/build"
    ansible.playbook          = "playbook.yml"
    #ansible.tags              = ""
    #ansible.skip_tags         = ""
    ansible.verbose           = "-vv"
    ansible.install           = true
    ansible.install_mode      = "pip"
    ansible.pip_install_cmd   = "curl -s https://bootstrap.pypa.io/get-pip.py | sudo python"
    ansible.version           = "2.9.27"
    # Drive type could be "mbr" or "gpt"
    ansible.extra_vars        = {
        image_path: "/opt/build/livecd",
        drive_type: "hybrid",
        burn_iso: true,
        burn_img: false
    }
  end

end

Output:

...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Setting hostname...
==> default: Running provisioner: python__install (shell)...
    default: Running: script: python__install
...
...
==> default: Running provisioner: fstab_vagrant_share (shell)...
    default: Running: script: fstab__vagrant_share
==> default: Running provisioner: create_image (ansible_local)...
    default: Installing Ansible...
    default: Installing pip... (for Ansible installation)
....

As you can see, it works only because of these experimental workarounds. The docs say:

If you define provisioners at multiple "scope" levels (such as globally in the configuration block, then in a multi-machine definition, then maybe in a provider-specific override), then the outer scopes will always run before any inner scopes.

It seems that in my scenario it only works with the hacks I put in (and I've probably broken the VirtualBox provider here).
Just FYI: we have logic that builds a LiveCD based on Ubuntu 22.04 with custom software. The developers have M1/M2 Macs, some are on Intel, and the CI/CD machines are amd64 as well.
