crosvm - The Chrome OS Virtual Machine Monitor

This component, known as crosvm, runs untrusted operating systems along with virtualized devices. It runs VMs only through Linux's KVM interface. What makes crosvm unique is its focus on safety: both within the programming language and through a sandbox around the virtual devices that protects the kernel from attack in case of an exploit in a device.

[TOC]

Building for Linux

Crosvm uses submodules to manage external dependencies. Initialize them via:

git submodule update --init

It is recommended to enable automatic recursive operations to keep the submodules in sync with the main repository (but do not push them recursively, as that can conflict with repo):

git config --global submodule.recurse true
git config push.recurseSubmodules no

Crosvm requires a number of dependencies. On Debian derivatives they can be installed with the command below (depending on which feature flags are used, not all of them will actually be required):

sudo apt install \
    bindgen \
    build-essential \
    clang \
    libasound2-dev \
    libcap-dev \
    libdbus-1-dev \
    libdrm-dev \
    libepoxy-dev \
    libssl-dev \
    libwayland-bin \
    libwayland-dev \
    pkg-config \
    protobuf-compiler \
    python3 \
    wayland-protocols

And that's it! You should now be able to run cargo build/run/test.

Known issues

  • By default, crosvm runs devices in sandboxed mode, which requires seccomp policy files to be set up. For local testing it is often easier to pass --disable-sandbox and run everything in a single process.
  • If your Linux header files are too old, you may find minijail rejecting seccomp filters for containing unknown syscalls. You can try removing the offending lines from the filter file, or add --seccomp-log-failures to the crosvm command line to turn these into warnings. Note that this option will also stop minijail from killing processes that violate a seccomp rule, making the sandboxing much less aggressive.
  • Seccomp policy files have hardcoded absolute paths. You can either fix up the paths locally, or set up an awesome hacky symlink: sudo mkdir /usr/share/policy && sudo ln -s /path/to/crosvm/seccomp/x86_64 /usr/share/policy/crosvm. We'll eventually build the precompiled policies into the crosvm binary.
  • Devices can't be jailed if /var/empty doesn't exist. sudo mkdir -p /var/empty to work around this for now.
  • You need read/write permissions for /dev/kvm to run tests or other crosvm instances. Usually it's owned by the kvm group, so sudo usermod -a -G kvm $USER and then log out and back in again to fix this.
  • Some other features (networking) require CAP_NET_ADMIN so those usually need to be run as root.

Running crosvm tests on Linux

Installing Podman (or Docker)

See Podman Installation for instructions on how to install podman.

For Googlers, see go/dont-install-docker for special instructions on how to set up podman.

If you already have docker installed, that will do as well. However, podman is recommended, as it will not run containers with root privileges.

Running all tests

To run all tests for all platforms, just run:

./test_all

This will run all tests using the x86 and aarch64 builder containers. What does this do?

  1. It will start ./ci/[aarch64_]builder --vm.

    This will start the builder container and launch a VM for running tests in the background. The VM is booting while the next step is running.

  2. It will call ./run_tests inside the builder

    The script will pick which tests to execute and where. Simple tests can be executed directly, other tests require privileged access to devices and will be loaded into the VM to execute.

    Each test will in the end be executed by a call to cargo test -p crate_name.

Intermediate build data is stored in a scratch directory at ./target/ci/ to allow for faster subsequent calls (Note: If running with docker, these files will be owned by root).

Fast, iterative test runs

For faster iteration time, you can invoke some of these steps directly:

To only run x86 tests: ./ci/[aarch64_]builder --vm ./run_tests.

To run a simple test (e.g. the tempfile crate) that does not need the vm: ./ci/[aarch64_]builder cargo test -p tempfile.

Or run a single test (e.g. kvm_sys) inside the vm: ./ci/[aarch64_]builder --vm cargo test -p kvm_sys.

Since the VM (especially the fully emulated aarch64 VM) can be slow to boot, you can start an interactive shell and run commands from there as usual. All cargo tests will be executed inside the VM, without the need to restart the VM between test calls.

host$ ./ci/aarch64_builder --vm
crosvm-aarch64$ ./run_tests
crosvm-aarch64$ cargo test -p kvm_sys
...

Running tests without Docker

Specific crates can be tested as usual with cargo test without the need for Docker. However, some of them have special requirements and will not work, which means that cargo test --workspace also cannot be used to run all tests.

For this reason, we have a separate test runner ./run_tests which documents the requirements of each crate and picks the tests to run. It is used by the Docker container to run tests, but can also be run outside of the container to run a subset of tests.

See ./run_tests --help for more information.

Reproducing Kokoro locally

Kokoro runs presubmit tests on all crosvm changes. It uses the same builders and the same run_tests script to run tests. This should match the results of the ./test_all script, but if it does not, the kokoro build scripts can be simulated locally using: ./ci/kokoro/simulate_all.

Building for ChromeOS

crosvm is included in the ChromeOS source tree at src/platform/crosvm. Crosvm can be built with ChromeOS features using Portage or cargo.

If ChromeOS-specific features are not needed, or you want to run the full test suite of crosvm, the Building for Linux and Running crosvm tests workflows can be used from the crosvm repository of ChromeOS as well.

Using Portage

crosvm on ChromeOS is usually built with Portage, so it follows the same general workflow as any cros_workon package. The full package name is chromeos-base/crosvm.

See the Chromium OS developer guide for more on how to build and deploy with Portage.

NOTE: cros_workon_make modifies crosvm's Cargo.toml and Cargo.lock. Please be careful not to commit the changes. Moreover, with those changes in place, cargo will fail to build and the clippy preupload check will fail.

Using Cargo

Since development using portage can be slow, it's possible to build crosvm for ChromeOS using cargo for faster iteration times. To do so, the Cargo.toml file needs to be updated to point to dependencies provided by ChromeOS using ./setup_cros_cargo.sh.

Usage

To see the usage information for your version of crosvm, run crosvm or crosvm run --help.

Boot a Kernel

To run a very basic VM with just a kernel and default devices:

$ crosvm run "${KERNEL_PATH}"

The uncompressed kernel image, also known as vmlinux, can be found in your kernel build directory in the case of x86 at arch/x86/boot/compressed/vmlinux.

Rootfs

With a disk image

In most cases, you will want to give the VM a virtual block device to use as a root file system:

$ crosvm run -r "${ROOT_IMAGE}" "${KERNEL_PATH}"

The root image must be a path to a disk image formatted in a way that the kernel can read. Typically this is a squashfs image made with mksquashfs or an ext4 image made with mkfs.ext4. The -r argument automatically tells the kernel to use that image as the root, so it can be given only once. More disks can be added with -d, or with --rwdisk if a writable disk is desired.

To run crosvm with a writable rootfs:

WARNING: Writable disks are at risk of corruption by a malicious or malfunctioning guest OS.

crosvm run --rwdisk "${ROOT_IMAGE}" -p "root=/dev/vda" vmlinux

NOTE: If more disk arguments are added before the desired rootfs image, root=/dev/vda must be adjusted to the appropriate letter.
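As a sketch of the adjustment described in the note above (the image paths and the extra data disk are hypothetical, not from this document): if a read-only disk is passed before the writable rootfs, the rootfs becomes the second virtio-block device, so the kernel argument changes to /dev/vdb.

```shell
# Hypothetical example: DATA_IMAGE and ROOT_IMAGE are placeholder paths.
# The data disk is listed first, so it becomes /dev/vda and the rootfs /dev/vdb.
crosvm run \
    -d "${DATA_IMAGE}" \
    --rwdisk "${ROOT_IMAGE}" \
    -p "root=/dev/vdb" \
    vmlinux
```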

With virtiofs

Linux kernel 5.4+ is required for using virtiofs. This setup is convenient for testing. The file system must be named "mtd*" or "ubi*".

crosvm run --shared-dir "/:mtdfake:type=fs:cache=always" \
    -p "rootfstype=virtiofs root=mtdfake" vmlinux

Control Socket

If the control socket was enabled with -s, the main process can be controlled while crosvm is running. To tell crosvm to stop and exit, for example:

NOTE: If the socket path given is for a directory, a socket name underneath that path will be generated based on crosvm's PID.

$ crosvm run -s /run/crosvm.sock ${USUAL_CROSVM_ARGS}
    <in another shell>
$ crosvm stop /run/crosvm.sock

WARNING: The guest OS will not be notified or gracefully shut down.

This will cause the original crosvm process to exit in an orderly fashion, allowing it to clean up any OS resources that might have stuck around if crosvm were terminated early.

Multiprocess Mode

By default crosvm runs in multiprocess mode. Each device that supports running inside of a sandbox will run in a jailed child process of crosvm. The appropriate minijail seccomp policy files must be present either in /usr/share/policy/crosvm or in the path specified by the --seccomp-policy-dir argument. The sandbox can be disabled for testing with the --disable-sandbox option.
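As a sketch of the two options mentioned above (both flags appear in this document; the policy directory path is an example, matching the symlink layout described under Known issues):

```shell
# Point crosvm at a custom seccomp policy directory (example path):
crosvm run --seccomp-policy-dir /path/to/crosvm/seccomp/x86_64 vmlinux

# Or, for local testing only, run every device in the main process:
crosvm run --disable-sandbox vmlinux
```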

Virtio Wayland

Virtio Wayland support requires special support on the part of the guest and as such is unlikely to work out of the box unless you are using a Chrome OS kernel along with a termina rootfs.

To use it, ensure that the XDG_RUNTIME_DIR environment variable is set and that the path $XDG_RUNTIME_DIR/wayland-0 points to the socket of the Wayland compositor you would like the guest to use.

GDB Support

crosvm supports the GDB Remote Serial Protocol, which allows developers to debug the guest kernel via GDB.

You can enable the feature with the --gdb flag:

# Use uncompressed vmlinux
$ crosvm run --gdb <port> ${USUAL_CROSVM_ARGS} vmlinux

Then, you can start GDB in another shell.

$ gdb vmlinux
(gdb) target remote :<port>
(gdb) hbreak start_kernel
(gdb) c
<start booting in the other shell>

For general techniques for debugging the Linux kernel via GDB, see this kernel documentation.

Defaults

The following are crosvm's default arguments and how to override them.

  • 256MB of memory (set with -m)
  • 1 virtual CPU (set with -c)
  • no block devices (set with -r, -d, or --rwdisk)
  • no network (set with --host_ip, --netmask, and --mac)
  • virtio wayland support if the XDG_RUNTIME_DIR environment variable is set (disable with --no-wl)
  • only the kernel arguments necessary to run with the supported devices (add more with -p)
  • run in multiprocess mode (run in single process mode with --disable-sandbox)
  • no control socket (set with -s)
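Putting several of these overrides together (the specific values here are illustrative, not defaults):

```shell
# 4096 MB of memory, 4 vCPUs, an extra kernel argument, and a control socket.
crosvm run \
    -m 4096 \
    -c 4 \
    -p "loglevel=7" \
    -s /run/crosvm.sock \
    vmlinux
```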

System Requirements

A Linux kernel with KVM support (check for /dev/kvm) is required to run crosvm. In order to run certain devices, there are additional system requirements:

  • virtio-wayland - The memfd_create syscall, introduced in Linux 3.17, and a Wayland compositor.
  • vsock - Host Linux kernel with vhost-vsock support, introduced in Linux 4.8.
  • multiprocess - Host Linux kernel with seccomp-bpf and Linux namespacing support.
  • virtio-net - Host Linux kernel with TUN/TAP support (check for /dev/net/tun) and running with CAP_NET_ADMIN privileges.
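The device-node checks above can be scripted. The helper below is a hypothetical sketch, not part of crosvm; it only probes the paths named in this section (vhost-vsock support is assumed to be exposed as /dev/vhost-vsock, as on typical kernels).

```shell
#!/bin/sh
# Report which of the crosvm device requirements this host appears to satisfy.
check() {
    # $1 = device node to look for, $2 = human-readable feature name
    if [ -e "$1" ]; then
        echo "ok:      $2 ($1 present)"
    else
        echo "missing: $2 ($1 not found)"
    fi
}
check /dev/kvm         "KVM (required to run crosvm)"
check /dev/net/tun     "virtio-net (TUN/TAP support)"
check /dev/vhost-vsock "vsock (vhost-vsock support)"
```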

Emulated Devices

Device          Description
CMOS/RTC        Used to get the current calendar time.
i8042           Used by the guest kernel to exit crosvm.
serial          x86 I/O port driven serial devices that print to stdout and take input from stdin.
virtio-block    Basic read/write block device.
virtio-net      Device to interface the host and guest networks.
virtio-rng      Entropy source used to seed the guest OS's entropy pool.
virtio-vsock    Enables vsock for the guest.
virtio-wayland  Allows the guest to use the host Wayland socket.

Contributing

Code Health

rustfmt

All code should be formatted with rustfmt. We have a script that applies rustfmt to all Rust code in the crosvm repo: please run bin/fmt before checking in a change. This is different from cargo fmt --all, which formats multiple crates in a single workspace only; crosvm consists of multiple workspaces.

clippy

The clippy linter is used to check for common Rust problems. The crosvm project uses a specific set of clippy checks; please run bin/clippy before checking in a change.

Dependencies

ChromeOS and Android both have a review process for third party dependencies to ensure that code included in the product is safe. Since crosvm needs to build on both, this means we are restricted in our usage of third party crates. When in doubt, do not add new dependencies.

Code Overview

The crosvm source code is written in Rust and C. To build, crosvm generally requires the most recent stable version of rustc.

Source code is organized into crates, each with their own unit tests. These crates are:

  • crosvm - The top-level binary front-end for using crosvm.
  • devices - Virtual devices exposed to the guest OS.
  • kernel_loader - Loads elf64 kernel files to a slice of memory.
  • kvm_sys - Low-level (mostly) auto-generated structures and constants for using KVM.
  • kvm - Unsafe, low-level wrapper code for using kvm_sys.
  • net_sys - Low-level (mostly) auto-generated structures and constants for creating TUN/TAP devices.
  • net_util - Wrapper for creating TUN/TAP devices.
  • sys_util - Mostly safe wrappers for small system facilities such as eventfd or syslog.
  • syscall_defines - Lists of syscall numbers in each architecture used to make syscalls not supported in libc.
  • vhost - Wrappers for creating vhost based devices.
  • virtio_sys - Low-level (mostly) auto-generated structures and constants for interfacing with kernel vhost support.
  • vm_control - IPC for the VM.
  • x86_64 - Support code specific to 64-bit Intel machines.

The seccomp folder contains minijail seccomp policy files for each sandboxed device. Because some syscalls vary by architecture, the seccomp policies are split by architecture.