devices: virtio: Always inject interrupts if hypervisor can handle

The Gunyah hypervisor supports the virtio-mmio transport only in some
products that also use CrosVM. Qualcomm has developed a wrapper
program that uses CrosVM's libdevices crate.

The Gunyah hypervisor has some optimizations for how virtio-mmio
register accesses are handled. Normally, accesses are synchronous -
meaning any guest access of a virtio-mmio register causes a fault,
which blocks the guest VCPU until the access can be served by CrosVM
running in a different VM. Since that could induce long delays for the
guest (especially since the OS hosting CrosVM is considered
untrusted), the Gunyah hypervisor caches all virtio-mmio registers in
its address space.
A read access is then handled without requiring any intervention from
CrosVM. A write access is handled asynchronously - i.e. a write causes
the hypervisor to update its copy of the register, unblock the guest
VCPU, and simultaneously notify CrosVM about the update.
By the time CrosVM notices the update, the guest could have
progressed much further.
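The asynchronous write path described above can be modeled as a small
standalone sketch (this is illustrative code, not Gunyah or CrosVM
source; the `CachedRegister` type and channel-based notification are
assumptions made for the example, though the 0x064 register offset
matches the virtio-mmio spec):

```rust
use std::sync::mpsc;

// Offset of the InterruptACK register in the virtio-mmio layout.
const VIRTIO_MMIO_INTERRUPT_ACK: u64 = 0x064;

// Hypothetical model of the hypervisor's cached copy of one register,
// plus a notification queue toward CrosVM.
struct CachedRegister {
    value: u64,
    notify_tx: mpsc::Sender<(u64, u64)>, // (offset, new value)
}

impl CachedRegister {
    // An asynchronous write: update the local copy and queue a
    // notification. The guest vCPU is unblocked immediately rather
    // than waiting for CrosVM to serve the access.
    fn write(&mut self, offset: u64, value: u64) {
        self.value = value;
        let _ = self.notify_tx.send((offset, value));
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut reg = CachedRegister { value: 0, notify_tx: tx };
    reg.write(VIRTIO_MMIO_INTERRUPT_ACK, 1);
    // By the time CrosVM drains the queue, the guest may have run on.
    let (offset, value) = rx.recv().unwrap();
    println!("CrosVM sees write: offset={offset:#x} value={value}");
}
```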

This works reasonably well for many devices and registers.
One problem case is writes to VIRTIO_MMIO_INTERRUPT_ACK, which signal
that the guest has seen an interrupt. CrosVM relies on synchronous
handling of this register write: the signal method of 'struct
Interrupt' (devices/src/virtio/interrupt.rs) skips sending a kick to
the guest if a prior kick is "not yet" acknowledged, which breaks
when writes to VIRTIO_MMIO_INTERRUPT_ACK are asynchronous.

This patch introduces an 'async_intr_status' flag in the signal method
which allows the kick to always be delivered to the guest, independent
of whether prior kicks have been acknowledged. The hypervisor queues
kick requests and delivers another kick after the current one is
processed.
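The decision this flag changes can be sketched as a standalone model
(the `InterruptModel` type and `should_signal` name are illustrative,
not crosvm code; only the `fetch_or`-based pending check mirrors the
actual signal method):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Minimal model of the kick-delivery decision: inject only when no
// prior interrupt is pending, unless acknowledgements may arrive late
// (async_intr_status), in which case always inject.
struct InterruptModel {
    interrupt_status: AtomicUsize,
    async_intr_status: bool,
}

impl InterruptModel {
    // Returns true when a kick should be delivered to the guest.
    // fetch_or returns the *previous* status, so 0 means nothing was
    // pending before this signal.
    fn should_signal(&self, mask: usize) -> bool {
        self.interrupt_status.fetch_or(mask, Ordering::SeqCst) == 0
            || self.async_intr_status
    }
}

fn main() {
    // Synchronous ACK handling: a second kick is skipped while the
    // first is still pending.
    let sync = InterruptModel {
        interrupt_status: AtomicUsize::new(0),
        async_intr_status: false,
    };
    assert!(sync.should_signal(0x1));
    assert!(!sync.should_signal(0x1));

    // Asynchronous ACK handling: kicks are always delivered; the
    // hypervisor queues and delivers them in turn.
    let asyn = InterruptModel {
        interrupt_status: AtomicUsize::new(0),
        async_intr_status: true,
    };
    assert!(asyn.should_signal(0x1));
    assert!(asyn.should_signal(0x1));
    println!("ok");
}
```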

It is expected that only the Qualcomm wrapper program will initialize
the interrupt object with the 'async_intr_status' flag set, so
upstream CrosVM should not be affected by this change.

BUG=b:243368499

Change-Id: I76568d9d8bc3be00c75c52d4a51d39410c5c35b3
Reviewed-on: https://chromium-review.googlesource.com/c/crosvm/crosvm/+/3965686
Reviewed-by: Daniel Verkamp <dverkamp@chromium.org>
Sreenad Menon 2022-09-27 21:39:58 +05:30 committed by crosvm LUCI
parent aad6a554dc
commit 5ad8b5afc5
3 changed files with 16 additions and 4 deletions


@@ -50,6 +50,7 @@ enum Transport {
 struct InterruptInner {
     interrupt_status: AtomicUsize,
     transport: Transport,
+    async_intr_status: bool,
 }
 
 #[derive(Clone)]
@@ -78,12 +79,15 @@ impl SignalableInterrupt for Interrupt {
         // Set bit in ISR and inject the interrupt if it was not already pending.
         // Don't need to inject the interrupt if the guest hasn't processed it.
+        // In hypervisors where interrupt_status is updated asynchronously, inject the
+        // interrupt even if the previous interrupt appears to be already pending.
         if self
             .inner
             .as_ref()
             .interrupt_status
             .fetch_or(interrupt_status_mask as usize, Ordering::SeqCst)
             == 0
+            || self.inner.as_ref().async_intr_status
         {
             // Write to irqfd to inject PCI INTx or MMIO interrupt
             match &self.inner.as_ref().transport {
@@ -127,6 +131,7 @@ impl Interrupt {
         Interrupt {
             inner: Arc::new(InterruptInner {
                 interrupt_status: AtomicUsize::new(0),
+                async_intr_status: false,
                 transport: Transport::Pci {
                     pci: TransportPci {
                         irq_evt_lvl,
@@ -138,11 +143,12 @@ impl Interrupt {
         }
     }
 
-    pub fn new_mmio(irq_evt_edge: IrqEdgeEvent) -> Interrupt {
+    pub fn new_mmio(irq_evt_edge: IrqEdgeEvent, async_intr_status: bool) -> Interrupt {
         Interrupt {
             inner: Arc::new(InterruptInner {
                 interrupt_status: AtomicUsize::new(0),
                 transport: Transport::Mmio { irq_evt_edge },
+                async_intr_status,
             }),
         }
     }


@@ -54,6 +54,7 @@ pub struct VirtioMmioDevice {
     interrupt: Option<Interrupt>,
     interrupt_evt: Option<IrqEdgeEvent>,
+    async_intr_status: bool,
     queues: Vec<Queue>,
     queue_evts: Vec<Event>,
     mem: GuestMemory,
@@ -70,7 +71,11 @@ pub struct VirtioMmioDevice {
 impl VirtioMmioDevice {
     /// Constructs a new MMIO transport for the given virtio device.
-    pub fn new(mem: GuestMemory, device: Box<dyn VirtioDevice>) -> Result<Self> {
+    pub fn new(
+        mem: GuestMemory,
+        device: Box<dyn VirtioDevice>,
+        async_intr_status: bool,
+    ) -> Result<Self> {
         let mut queue_evts = Vec::new();
         for _ in device.queue_max_sizes() {
             queue_evts.push(Event::new()?)
@@ -86,6 +91,7 @@ impl VirtioMmioDevice {
             device_activated: false,
             interrupt: None,
             interrupt_evt: None,
+            async_intr_status,
             queues,
             queue_evts,
             mem,
@@ -152,7 +158,7 @@ impl VirtioMmioDevice {
         };
         let mem = self.mem.clone();
-        let interrupt = Interrupt::new_mmio(interrupt_evt);
+        let interrupt = Interrupt::new_mmio(interrupt_evt, self.async_intr_status);
         self.interrupt = Some(interrupt.clone());
         match self.clone_queue_evts() {


@@ -905,7 +905,7 @@ fn create_devices(
                 devices.push((Box::new(dev) as Box<dyn BusDeviceObj>, stub.jail));
             }
             VirtioTransportType::Mmio => {
-                let dev = VirtioMmioDevice::new(vm.get_memory().clone(), stub.dev)
+                let dev = VirtioMmioDevice::new(vm.get_memory().clone(), stub.dev, false)
                     .context("failed to create virtio mmio dev")?;
                 devices.push((Box::new(dev) as Box<dyn BusDeviceObj>, stub.jail));
             }