vhost_user: Add support for SHMEM_MAP/UNMAP backend requests #251

Draft · wants to merge 4 commits into base: main (showing changes from 2 commits)
1 change: 1 addition & 0 deletions vhost/CHANGELOG.md
@@ -2,6 +2,7 @@
## [Unreleased]

### Added
- [[#251]](https://github.com/rust-vmm/vhost/pull/251) Add `SHMEM_MAP` and `SHMEM_UNMAP` support
- [[#241]](https://github.com/rust-vmm/vhost/pull/241) Add shared objects support
- [[#239]](https://github.com/rust-vmm/vhost/pull/239) Add support for `VHOST_USER_GPU_SET_SOCKET`

73 changes: 66 additions & 7 deletions vhost/src/vhost_user/backend_req.rs
@@ -183,6 +183,16 @@ impl VhostUserFrontendReqHandler for Backend {
Some(&[fd.as_raw_fd()]),
)
}

/// Forward vhost-user memory map file request to the frontend.
fn shmem_map(&self, req: &VhostUserMMap, fd: &dyn AsRawFd) -> HandlerResult<u64> {
self.send_message(BackendReq::SHMEM_MAP, req, Some(&[fd.as_raw_fd()]))
}

/// Forward vhost-user memory unmap file request to the frontend.
fn shmem_unmap(&self, req: &VhostUserMMap) -> HandlerResult<u64> {
self.send_message(BackendReq::SHMEM_UNMAP, req, None)
}
}

#[cfg(test)]
@@ -191,10 +201,16 @@ mod tests {

use super::*;

fn frontend_backend_pair() -> (Endpoint<VhostUserMsgHeader<BackendReq>>, Backend) {
let (p1, p2) = UnixStream::pair().unwrap();
let backend = Backend::from_stream(p1);
let frontend = Endpoint::<VhostUserMsgHeader<BackendReq>>::from_stream(p2);
(frontend, backend)
}

#[test]
fn test_backend_req_set_failed() {
let (p1, _p2) = UnixStream::pair().unwrap();
let backend = Backend::from_stream(p1);
let (_, backend) = frontend_backend_pair();

assert!(backend.node().error.is_none());
backend.set_failed(libc::EAGAIN);
@@ -203,8 +219,7 @@

#[test]
fn test_backend_req_send_failure() {
let (p1, _) = UnixStream::pair().unwrap();
let backend = Backend::from_stream(p1);
let (_, backend) = frontend_backend_pair();

backend.set_failed(libc::ECONNRESET);
backend
@@ -218,9 +233,7 @@

#[test]
fn test_backend_req_recv_negative() {
let (p1, p2) = UnixStream::pair().unwrap();
let backend = Backend::from_stream(p1);
let mut frontend = Endpoint::<VhostUserMsgHeader<BackendReq>>::from_stream(p2);
let (mut frontend, backend) = frontend_backend_pair();

let len = mem::size_of::<VhostUserSharedMsg>();
let mut hdr = VhostUserMsgHeader::new(
@@ -266,4 +279,50 @@
.shared_object_add(&VhostUserSharedMsg::default())
.unwrap();
}

#[test]
fn test_shmem_map() {
let (mut frontend, backend) = frontend_backend_pair();

let (_, some_fd_to_send) = UnixStream::pair().unwrap();
let map_request = VhostUserMMap {
shmid: 0,
padding: Default::default(),
fd_offset: 0,
shm_offset: 1028,
len: 4096,
flags: (VhostUserMMapFlags::MAP_R | VhostUserMMapFlags::MAP_W).bits(),
};

backend.shmem_map(&map_request, &some_fd_to_send).unwrap();

let (hdr, request, fd) = frontend.recv_body::<VhostUserMMap>().unwrap();
assert_eq!(hdr.get_code().unwrap(), BackendReq::SHMEM_MAP);
assert!(fd.is_some());
// Braces force a copy of the unaligned packed fields before comparing.
assert_eq!({ request.shm_offset }, { map_request.shm_offset });
assert_eq!({ request.len }, { map_request.len });
assert_eq!({ request.flags }, { map_request.flags });
}

#[test]
fn test_shmem_unmap() {
let (mut frontend, backend) = frontend_backend_pair();

let unmap_request = VhostUserMMap {
shmid: 0,
padding: Default::default(),
fd_offset: 0,
shm_offset: 1028,
len: 4096,
flags: 0,
};

backend.shmem_unmap(&unmap_request).unwrap();

let (hdr, request, fd) = frontend.recv_body::<VhostUserMMap>().unwrap();
assert_eq!(hdr.get_code().unwrap(), BackendReq::SHMEM_UNMAP);
assert!(fd.is_none());
assert_eq!({ request.shm_offset }, { unmap_request.shm_offset });
assert_eq!({ request.len }, { unmap_request.len });
}
}
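The tests above push a `#[repr(C, packed)]` request struct across a Unix socket byte-for-byte (what the crate's `ByteValued` trait enables). The sketch below is a self-contained illustration of why that round-trip is safe for plain-old-data structs; `MMapMsg`, `to_bytes`, and `from_bytes` are illustrative stand-ins written for this sketch, not APIs of the vhost crate.

```rust
use std::mem::size_of;

// Field layout mirrors the VhostUserMMap struct added in message.rs.
#[repr(C, packed)]
#[derive(Debug, Copy, Clone, Default, PartialEq)]
struct MMapMsg {
    shmid: u8,
    padding: [u8; 7],
    fd_offset: u64,
    shm_offset: u64,
    len: u64,
    flags: u64,
}

fn to_bytes(msg: &MMapMsg) -> Vec<u8> {
    // Safe for this demo: the struct is plain-old-data, packed (no padding
    // holes beyond the explicit `padding` field), and we read exactly
    // `size_of` bytes.
    let ptr = msg as *const MMapMsg as *const u8;
    unsafe { std::slice::from_raw_parts(ptr, size_of::<MMapMsg>()) }.to_vec()
}

fn from_bytes(buf: &[u8]) -> MMapMsg {
    assert_eq!(buf.len(), size_of::<MMapMsg>());
    let mut msg = MMapMsg::default();
    // Byte copy into the packed struct; no alignment requirement on u8.
    unsafe {
        std::ptr::copy_nonoverlapping(buf.as_ptr(), &mut msg as *mut MMapMsg as *mut u8, buf.len());
    }
    msg
}

fn main() {
    let req = MMapMsg { shmid: 0, shm_offset: 1028, len: 4096, flags: 0x3, ..Default::default() };
    let wire = to_bytes(&req);
    assert_eq!(wire.len(), 40); // 1 + 7 padding + 4 * 8
    assert_eq!(from_bytes(&wire), req);
    println!("roundtrip ok");
}
```

Because every field is POD and the layout is fixed at 40 bytes, both sides of the socket agree on the wire format without any serialization framework.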
104 changes: 103 additions & 1 deletion vhost/src/vhost_user/frontend_req_handler.rs
@@ -52,6 +52,16 @@ pub trait VhostUserFrontendReqHandler {
Err(std::io::Error::from_raw_os_error(libc::ENOSYS))
}

/// Handle shared memory region mapping requests.
fn shmem_map(&self, _req: &VhostUserMMap, _fd: &dyn AsRawFd) -> HandlerResult<u64> {
Err(std::io::Error::from_raw_os_error(libc::ENOSYS))
}

/// Handle shared memory region unmapping requests.
fn shmem_unmap(&self, _req: &VhostUserMMap) -> HandlerResult<u64> {
Err(std::io::Error::from_raw_os_error(libc::ENOSYS))
}

// fn handle_iotlb_msg(&mut self, iotlb: VhostUserIotlb);
// fn handle_vring_host_notifier(&mut self, area: VhostUserVringArea, fd: &dyn AsRawFd);
}
@@ -84,6 +94,16 @@ pub trait VhostUserFrontendReqHandlerMut {
Err(std::io::Error::from_raw_os_error(libc::ENOSYS))
}

/// Handle shared memory region mapping requests.
fn shmem_map(&mut self, _req: &VhostUserMMap, _fd: &dyn AsRawFd) -> HandlerResult<u64> {
Err(std::io::Error::from_raw_os_error(libc::ENOSYS))
}

/// Handle shared memory region unmapping requests.
fn shmem_unmap(&mut self, _req: &VhostUserMMap) -> HandlerResult<u64> {
Err(std::io::Error::from_raw_os_error(libc::ENOSYS))
}

// fn handle_iotlb_msg(&mut self, iotlb: VhostUserIotlb);
// fn handle_vring_host_notifier(&mut self, area: VhostUserVringArea, fd: RawFd);
}
@@ -111,6 +131,14 @@ impl<S: VhostUserFrontendReqHandlerMut> VhostUserFrontendReqHandler for Mutex<S>
) -> HandlerResult<u64> {
self.lock().unwrap().shared_object_lookup(uuid, fd)
}

fn shmem_map(&self, req: &VhostUserMMap, fd: &dyn AsRawFd) -> HandlerResult<u64> {
self.lock().unwrap().shmem_map(req, fd)
}

fn shmem_unmap(&self, req: &VhostUserMMap) -> HandlerResult<u64> {
self.lock().unwrap().shmem_unmap(req)
}
}

/// Server to handle service requests from backends from the backend communication channel.
@@ -241,6 +269,18 @@ impl<S: VhostUserFrontendReqHandler> FrontendReqHandler<S> {
.shared_object_lookup(&msg, &files.unwrap()[0])
.map_err(Error::ReqHandlerError)
}
Ok(BackendReq::SHMEM_MAP) => {
let msg = self.extract_msg_body::<VhostUserMMap>(&hdr, size, &buf)?;
self.backend
.shmem_map(&msg, &files.unwrap()[0])
.map_err(Error::ReqHandlerError)
}
Ok(BackendReq::SHMEM_UNMAP) => {
let msg = self.extract_msg_body::<VhostUserMMap>(&hdr, size, &buf)?;
self.backend
.shmem_unmap(&msg)
.map_err(Error::ReqHandlerError)
}
_ => Err(Error::InvalidMessage),
};

@@ -278,7 +318,7 @@ impl<S: VhostUserFrontendReqHandler> FrontendReqHandler<S> {
files: &Option<Vec<File>>,
) -> Result<()> {
match hdr.get_code() {
Ok(BackendReq::SHARED_OBJECT_LOOKUP) => {
Ok(BackendReq::SHARED_OBJECT_LOOKUP | BackendReq::SHMEM_MAP) => {
// Expect a single file is passed.
match files {
Some(files) if files.len() == 1 => Ok(()),
@@ -366,12 +406,14 @@ mod tests {

struct MockFrontendReqHandler {
shared_objects: HashSet<Uuid>,
shmem_mappings: HashSet<(u64, u64)>,
}

impl MockFrontendReqHandler {
fn new() -> Self {
Self {
shared_objects: HashSet::new(),
shmem_mappings: HashSet::new(),
}
}
}
@@ -395,6 +437,16 @@
}
Ok(1)
}

fn shmem_map(&mut self, req: &VhostUserMMap, _fd: &dyn AsRawFd) -> HandlerResult<u64> {
assert_eq!({ req.shmid }, 0);
Ok(!self.shmem_mappings.insert((req.shm_offset, req.len)) as u64)

Review comment (reviewer): i think it's better to remove the negation (!) and use a direct check to return meaningful values, e.g. `if self.shmem_mappings.insert(req.shm_offset, req.len) { Ok(1) } else { Ok(0) }` — and why the double braces? missing `&`?

Reply (@mtjhrc, Contributor Author, Sep 3, 2024): By braces do you mean ( ) or { }? The (req.shm_offset, req.len) is a tuple. I considered writing it with an explicit if, but kept this form because it is symmetric with the existing shared_object_add/shared_object_remove above:

fn shared_object_add(&mut self, uuid: &VhostUserSharedMsg) -> HandlerResult<u64> {
    Ok(!self.shared_objects.insert(uuid.uuid) as u64)
}
fn shared_object_remove(&mut self, uuid: &VhostUserSharedMsg) -> HandlerResult<u64> {
    Ok(!self.shared_objects.remove(&uuid.uuid) as u64)
}

I am not sure if I should change it.

Reviewer: ah right! Though symmetry with the existing code doesn't necessarily mean it's the correct approach.

}

fn shmem_unmap(&mut self, req: &VhostUserMMap) -> HandlerResult<u64> {
assert_eq!({ req.shmid }, 0);
Ok(!self.shmem_mappings.remove(&(req.shm_offset, req.len)) as u64)

Review comment (reviewer): same here.

}
}

#[test]
@@ -436,6 +488,13 @@
assert_eq!(handler.handle_request().unwrap(), 1);
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 1);

// Testing shmem map/unmap messages.
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 1);
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 0);
});

backend.set_shared_object_flag(true);
@@ -456,6 +515,24 @@
.is_ok());
assert!(backend.shared_object_remove(&shobj_msg).is_ok());
assert!(backend.shared_object_remove(&shobj_msg).is_ok());

let (_, some_fd_to_map) = UnixStream::pair().unwrap();
let map_request1 = VhostUserMMap {
shm_offset: 0,
len: 4096,
..Default::default()
};
let map_request2 = VhostUserMMap {
shm_offset: 4096,
len: 8192,
..Default::default()
};
backend.shmem_map(&map_request1, &some_fd_to_map).unwrap();
backend.shmem_unmap(&map_request2).unwrap();
backend.shmem_map(&map_request2, &some_fd_to_map).unwrap();
backend.shmem_unmap(&map_request2).unwrap();
backend.shmem_unmap(&map_request1).unwrap();

// Ensure that the handler thread did not panic.
assert!(frontend_handler.join().is_ok());
}
@@ -485,6 +562,13 @@
assert_eq!(handler.handle_request().unwrap(), 1);
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 1);

// Testing shmem map/unmap messages.
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 1);
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 0);
assert_eq!(handler.handle_request().unwrap(), 0);
});

backend.set_reply_ack_flag(true);
@@ -506,6 +590,24 @@
.is_err());
assert!(backend.shared_object_remove(&shobj_msg).is_ok());
assert!(backend.shared_object_remove(&shobj_msg).is_err());

let (_, some_fd_to_map) = UnixStream::pair().unwrap();
let map_request1 = VhostUserMMap {
shm_offset: 0,
len: 4096,
..Default::default()
};
let map_request2 = VhostUserMMap {
shm_offset: 4096,
len: 8192,
..Default::default()
};
backend.shmem_map(&map_request1, &some_fd_to_map).unwrap();
backend.shmem_unmap(&map_request2).unwrap_err();
backend.shmem_map(&map_request2, &some_fd_to_map).unwrap();
backend.shmem_unmap(&map_request2).unwrap();
backend.shmem_unmap(&map_request1).unwrap();

// Ensure that the handler thread did not panic.
assert!(frontend_handler.join().is_ok());
}
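The frontend_req_handler.rs changes follow a three-part pattern: a mutable handler trait whose default methods reject unimplemented requests, an immutable twin used by the dispatch loop, and a blanket impl over `Mutex<S>` that bridges the two. The sketch below reproduces that pattern standalone; all names are illustrative (not the crate's), the signatures are simplified to plain integers, and `ErrorKind::Unsupported` stands in for the crate's ENOSYS errno.

```rust
use std::collections::HashSet;
use std::io;
use std::sync::Mutex;

type HandlerResult<T> = io::Result<T>;

// Mutable-handler trait: defaults reject requests the handler doesn't support.
trait FrontendReqHandlerMut {
    fn shmem_map(&mut self, _shm_offset: u64, _len: u64) -> HandlerResult<u64> {
        Err(io::Error::from(io::ErrorKind::Unsupported))
    }
    fn shmem_unmap(&mut self, _shm_offset: u64, _len: u64) -> HandlerResult<u64> {
        Err(io::Error::from(io::ErrorKind::Unsupported))
    }
}

// Shared-handler trait used by the request-dispatch loop (&self methods).
trait FrontendReqHandler {
    fn shmem_map(&self, shm_offset: u64, len: u64) -> HandlerResult<u64>;
    fn shmem_unmap(&self, shm_offset: u64, len: u64) -> HandlerResult<u64>;
}

// Blanket impl: wrapping any mutable handler in a Mutex yields a shared one.
impl<S: FrontendReqHandlerMut> FrontendReqHandler for Mutex<S> {
    fn shmem_map(&self, shm_offset: u64, len: u64) -> HandlerResult<u64> {
        self.lock().unwrap().shmem_map(shm_offset, len)
    }
    fn shmem_unmap(&self, shm_offset: u64, len: u64) -> HandlerResult<u64> {
        self.lock().unwrap().shmem_unmap(shm_offset, len)
    }
}

// Toy handler mirroring MockFrontendReqHandler: track live mappings in a set.
#[derive(Default)]
struct Mock {
    mappings: HashSet<(u64, u64)>,
}

impl FrontendReqHandlerMut for Mock {
    fn shmem_map(&mut self, shm_offset: u64, len: u64) -> HandlerResult<u64> {
        // insert() returns true if newly added, so success maps to 0.
        Ok(!self.mappings.insert((shm_offset, len)) as u64)
    }
    fn shmem_unmap(&mut self, shm_offset: u64, len: u64) -> HandlerResult<u64> {
        Ok(!self.mappings.remove(&(shm_offset, len)) as u64)
    }
}

fn main() {
    let handler = Mutex::new(Mock::default());
    assert_eq!(handler.shmem_map(0, 4096).unwrap(), 0);   // new mapping
    assert_eq!(handler.shmem_map(0, 4096).unwrap(), 1);   // duplicate
    assert_eq!(handler.shmem_unmap(0, 4096).unwrap(), 0); // removed
    assert_eq!(handler.shmem_unmap(0, 4096).unwrap(), 1); // already gone
    println!("ok");
}
```

This is why the test threads above expect the 0/1 sequence they assert: 0 signals success (REPLY_ACK ok) and non-zero signals failure, which with the reply-ack flag set surfaces as an error on the backend side.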
46 changes: 46 additions & 0 deletions vhost/src/vhost_user/message.rs
@@ -194,6 +194,10 @@ enum_value! {
SHARED_OBJECT_REMOVE = 7,
/// Lookup for a virtio shared object.
SHARED_OBJECT_LOOKUP = 8,
/// Map memory into guest address space
SHMEM_MAP = 9,
/// Unmap memory from guest address space
SHMEM_UNMAP = 10,
}
}

@@ -990,6 +994,48 @@ impl VhostUserMsgValidator for VhostUserTransferDeviceState {
}
}

// Bit mask for flags in VhostUserMMap struct
bitflags! {
#[derive(Copy, Clone, Debug, Eq, PartialEq, Default)]
/// Flags specifying access permissions of memory mapping of a file
pub struct VhostUserMMapFlags: u64 {
/// Empty permission.
const EMPTY = 0x0;
/// Read permission.
const MAP_R = 0x1;
/// Write permission.
const MAP_W = 0x2;
}
}

/// Backend request to mmap a file-backed buffer into guest memory
#[repr(C, packed)]
#[derive(Debug, Copy, Clone, Default)]
pub struct VhostUserMMap {
/// Shared memory region ID.
pub shmid: u8,
/// Struct padding.
pub padding: [u8; 7],
/// File offset.
pub fd_offset: u64,
/// Offset into the shared memory region.
pub shm_offset: u64,
/// Size of region to map.
pub len: u64,
/// Flags for the mmap operation
pub flags: u64,
}

// SAFETY: Safe because all fields of VhostUserMMap are POD.
unsafe impl ByteValued for VhostUserMMap {}

impl VhostUserMsgValidator for VhostUserMMap {
fn is_valid(&self) -> bool {
(self.flags & !VhostUserMMapFlags::all().bits()) == 0
&& self.fd_offset.checked_add(self.len).is_some()
&& self.shm_offset.checked_add(self.len).is_some()
}
Review comment on lines +1034 to +1037 (reviewer): do you want to add more checks relevant to the struct? e.g. a check based on valid values for shmid?

Reply (@mtjhrc, Contributor Author, Sep 3, 2024): From the virtio spec: "A device may have multiple shared memory regions associated with it. Each region has a shmid to identify it, the meaning of which is device-specific." According to the virtio (not vhost) spec the valid values of shmid depend on the device type (and probably the advertised device features), so I don't think you could check it here generally.

Reviewer: ok, fair enough for shmid, you can leave it as it is.

}
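The `is_valid` checks above can be exercised in isolation: unknown flag bits are rejected, and both `fd_offset + len` and `shm_offset + len` must fit in a u64. A minimal standalone sketch (the free function `mmap_req_is_valid` is illustrative, not crate API; the constants mirror `VhostUserMMapFlags::MAP_R`/`MAP_W`):

```rust
const MAP_R: u64 = 0x1;
const MAP_W: u64 = 0x2;

// Same predicate as VhostUserMMap::is_valid, written over bare fields.
fn mmap_req_is_valid(fd_offset: u64, shm_offset: u64, len: u64, flags: u64) -> bool {
    (flags & !(MAP_R | MAP_W)) == 0            // no unknown flag bits
        && fd_offset.checked_add(len).is_some() // file range must not wrap
        && shm_offset.checked_add(len).is_some() // shmem range must not wrap
}

fn main() {
    assert!(mmap_req_is_valid(0, 1028, 4096, MAP_R | MAP_W));
    assert!(!mmap_req_is_valid(0, 0, 1, 0x4));          // unknown flag bit
    assert!(!mmap_req_is_valid(u64::MAX, 0, 1, MAP_R)); // fd_offset + len overflows
    println!("ok");
}
```

The overflow checks matter because both offsets and the length arrive from an untrusted peer over the socket; a wrapped addition could otherwise pass later bounds checks.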
/// Inflight I/O descriptor state for split virtqueues
#[repr(C, packed)]
#[derive(Clone, Copy, Default)]
Expand Down