improve documentation on testing with qemu (#797)
Pull out the qemu testing documentation into a separate file, and expand
it with step-by-step instructions on how to use the gpio example.

In addition, switch to the jlevon/master.vfio-user branch, which is much
more up to date than the previous Oracle series.

Signed-off-by: John Levon <john.levon@nutanix.com>
jlevon authored May 15, 2024
1 parent 324f74c commit bedaf99
Showing 2 changed files with 122 additions and 53 deletions.
63 changes: 10 additions & 53 deletions README.md
@@ -202,34 +202,6 @@ After a couple of seconds the client will start live migration. The source
server will exit and the destination server will start, watch the client
terminal for destination server messages.

gpio
----

A [gpio](./samples/gpio-pci-idio-16.c) server implements a very simple GPIO
device that can be used with a Linux VM.

Start the `gpio` server process:

```
rm /tmp/vfio-user.sock
./build/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
```

Next, build `qemu` and start a VM, as described below.

Log in to your guest VM. You'll probably need to build the `gpio-pci-idio-16`
kernel module yourself - it's part of the standard Linux kernel, but not usually
built and shipped on x86.

Once built, you should be able to load the module and observe the emulated GPIO
device's pins:

```
insmod gpio-pci-idio-16.ko
cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
```

shadow_ioeventfd_server
-----------------------

@@ -241,34 +213,11 @@ demonstrate the benefits of shadow ioeventfd, see
Other usage notes
=================

Live migration
--------------

The `master` branch of `libvfio-user` implements live migration with a protocol
based on vfio's v2 protocol. Currently, there is no support for this in any qemu
client. For current use cases that support live migration, such as SPDK, you
should refer to the [migration-v1 branch](https://github.com/nutanix/libvfio-user/tree/migration-v1).

qemu
----

`vfio-user` client support is not yet merged into `qemu`. Instead, download and
build [this branch of qemu](https://github.com/oracle/qemu/tree/vfio-user-6.2).

Create a Linux install image, or use a pre-made one.

Then, presuming you have a `libvfio-user` server listening on the UNIX socket
`/tmp/vfio-user.sock`, you can start your guest VM with something like this:

```
./x86_64-softmmu/qemu-system-x86_64 -mem-prealloc -m 256 \
-object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/gpio,share=yes,size=256M \
-numa node,memdev=ram-node0 \
-kernel ~/vmlinuz -initrd ~/initrd -nographic \
-append "console=ttyS0 root=/dev/sda1 single" \
-hda ~/bionic-server-cloudimg-amd64-0.raw \
-device vfio-user-pci,socket=/tmp/vfio-user.sock
```
Step-by-step instructions for using `libvfio-user` with `qemu` can be [found
here](docs/qemu.md).

SPDK
----
@@ -302,6 +251,14 @@ You can configure `vfio-user` devices in a `libvirt` domain configuration:
</qemu:commandline>
```

Live migration
--------------

The `master` branch of `libvfio-user` implements live migration with a protocol
based on vfio's v2 protocol. Currently, there is no support for this in any qemu
client. For current use cases that support live migration, such as SPDK, you
should refer to the [migration-v1 branch](https://github.com/nutanix/libvfio-user/tree/migration-v1).

History
=======

112 changes: 112 additions & 0 deletions docs/qemu.md
@@ -0,0 +1,112 @@
qemu usage walkthrough
======================

In this walk-through, we'll use an Ubuntu cloudimg along with the
[gpio sample server](../samples/gpio-pci-idio-16.c) to emulate a very simple GPIO
device.

Building qemu
-------------

`vfio-user` client support is not yet merged into `qemu`. Instead, download and
build [jlevon's master.vfio-user branch of
qemu](https://github.com/jlevon/qemu/tree/master.vfio-user); for example:

```
git clone -b master.vfio-user git@github.com:jlevon/qemu.git
cd qemu
./configure --prefix=/usr --enable-kvm --enable-vnc --target-list=x86_64-softmmu --enable-debug --enable-vfio-user-client
make -j
```
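
As a quick sanity check that `vfio-user` client support actually got compiled
in, you can ask the resulting binary to list the `vfio-user-pci` device's
properties (this assumes the build output landed in `./build/`, as recent qemu
configure scripts arrange):

```
$ ./build/qemu-system-x86_64 -device vfio-user-pci,help
```

If the device is reported as unknown, revisit the `--enable-vfio-user-client`
configure flag.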

Configuring the cloudimg
------------------------

Set up the necessary metadata files:

```
$ sudo apt install cloud-image-utils
$ cat metadata.yaml
instance-id: iid-local01
local-hostname: cloudimg
$ cat user-data.yaml
#cloud-config
ssh_import_id:
- gh:jlevon
$ cloud-localds seed.img user-data.yaml metadata.yaml
```

Don't forget to replace `jlevon` with *your* GitHub user name.
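
If you don't already have a suitable image, the Ubuntu cloud image used in
this walk-through can be fetched first (URL assumed from Ubuntu's published
cloud-images layout):

```
$ wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
```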

Starting the server
-------------------

Start the `gpio` server process:

```
rm -f /tmp/vfio-user.sock
./build/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
```
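
Before moving on, it's worth checking that the server came up and created its
listening socket; for example:

```
$ ls -l /tmp/vfio-user.sock
```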

Booting the guest OS
--------------------

Make sure your system has hugepages available:

```
$ cat /proc/sys/vm/nr_hugepages
1024
```
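
If that reports `0`, you can reserve pages at runtime; assuming the default
2MiB hugepage size, 1024 pages cover the 2G of guest memory used below:

```
$ echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
```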

Now you should be able to start qemu:

```
$ imgpath=/path/to/bionic-server-cloudimg-amd64.img
$ sudo ~/src/build/qemu-system-x86_64 \
-machine accel=kvm,type=q35 -cpu host -m 2G \
-mem-prealloc -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/gpio,share=yes,size=2G \
-nographic \
-device virtio-net-pci,netdev=net0 \
-netdev user,id=net0,hostfwd=tcp::2222-:22 \
-drive if=virtio,format=qcow2,file=$imgpath \
-drive if=virtio,format=raw,file=seed.img \
-device vfio-user-pci,socket=/tmp/vfio-user.sock
```

Log in to your VM and load the kernel driver:

```
$ ssh -p 2222 ubuntu@localhost
...
$ sudo apt install linux-modules-extra-$(uname -r)
$ sudo modprobe gpio-pci-idio-16
```
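
To confirm the guest actually sees the emulated device and that the driver has
bound to it, you can list PCI devices along with their kernel drivers; look
for a `gpio-pci-idio-16` driver line (the device name shown depends on your
PCI ID database):

```
$ lspci -nnk
```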

Now we should be able to observe the emulated GPIO device's pins:

```
$ sudo su -
# cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
# for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
```

and the server should output something like:

```
gpio: region2: read 0 from (0:1)
gpio: region2: read 0 from (0:1)
gpio: region2: read 0 from (0:1)
gpio: region2: read 0x1 from (0:1)
gpio: region2: read 0x1 from (0:1)
gpio: region2: read 0x1 from (0:1)
gpio: region2: read 0x2 from (0:1)
gpio: region2: read 0x2 from (0:1)
gpio: region2: read 0x2 from (0:1)
gpio: region2: read 0x3 from (0:1)
gpio: region2: read 0x3 from (0:1)
gpio: region2: read 0x3 from (0:1)
```
