crosvm/cros_async
Dylan Reid 503c5abef6 devices: Add an asynchronous block device
This enables the use of basic disk images with async IO. A new
block_async.rs is added that mostly mirrors block, except that all
IO operations are asynchronous, allowing multiple virtqueues to be
used.
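
As a rough illustration of the per-queue pattern this enables (this is
not the actual block_async code, which uses the cros_async executor and
virtio queue types; the names Request and handle_queue below are
invented, and futures::channel::mpsc stands in for a virtqueue):

// One async task per queue stand-in; each `await` is a point where the
// executor can switch to another queue's task.
use futures::channel::mpsc;
use futures::executor::block_on;
use futures::future::join_all;
use futures::StreamExt;

// A stand-in for a guest block request.
struct Request {
    offset: u64,
    len: usize,
}

async fn handle_queue(id: usize, mut queue: mpsc::UnboundedReceiver<Request>) {
    while let Some(req) = queue.next().await {
        // In block_async this would be an awaited disk read or write.
        println!("queue {}: {} bytes at offset {}", id, req.len, req.offset);
    }
}

fn main() {
    let (tx0, rx0) = mpsc::unbounded();
    let (tx1, rx1) = mpsc::unbounded();
    tx0.unbounded_send(Request { offset: 0, len: 4096 }).unwrap();
    tx1.unbounded_send(Request { offset: 8192, len: 4096 }).unwrap();
    drop(tx0); // close the channels so the handlers finish
    drop(tx1);
    // Drive both queue handlers concurrently on a single thread.
    block_on(join_all(vec![handle_queue(0, rx0), handle_queue(1, rx1)]));
}

Because each handler is a future, awaiting one queue's disk IO yields
to the executor and the other queues keep making progress, instead of a
single blocking thread serializing every request.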

The old block remains unchanged and is still used for qcow, Android
sparse, and composite disks. Those should be converted to the async
path as time allows, but this dual approach will have to do for now so
ARCVM disk performance can be properly evaluated.
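
A hedged sketch of how the image format could select between the two
devices; the DiskFormat enum and device_for function are illustrative
stand-ins, not crosvm's actual API:

// Illustrative routing only: raw images get the new async device;
// formats not yet converted keep the existing synchronous device.
#[derive(Debug)]
enum DiskFormat {
    Raw,
    Qcow2,
    AndroidSparse,
    Composite,
}

fn device_for(format: &DiskFormat) -> &'static str {
    match format {
        DiskFormat::Raw => "block_async",
        DiskFormat::Qcow2 | DiskFormat::AndroidSparse | DiskFormat::Composite => "block",
    }
}

fn main() {
    for f in [DiskFormat::Raw, DiskFormat::Qcow2, DiskFormat::Composite] {
        println!("{:?} -> {}", f, device_for(&f));
    }
}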

The following fio workload (4 KiB blocks, 75/25 random read/write mix,
iodepth 64, direct IO) was run on each machine:

fio --ioengine=libaio --randrepeat=1 --direct=1 --gtod_reduce=1
--name=test --filename=test --bs=4k --iodepth=64 --size=4G
--readwrite=randrw --rwmixread=75

desktop with nvme:

before:
READ: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s),
io=3070MiB (3219MB), run=84871-84871msec
WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s),
io=1026MiB (1076MB), run=84871-84871msec
after:
READ: bw=257MiB/s (269MB/s), 257MiB/s-257MiB/s (269MB/s-269MB/s),
io=3070MiB (3219MB), run=11964-11964msec
WRITE: bw=85.8MiB/s (89.9MB/s), 85.8MiB/s-85.8MiB/s (89.9MB/s-89.9MB/s),
io=1026MiB (1076MB), run=11964-11964msec

samus with 5.6 kernel:
before:
READ: bw=55.3MiB/s (57.9MB/s), 55.3MiB/s-55.3MiB/s (57.9MB/s-57.9MB/s),
io=768MiB (805MB), run=13890-13890msec
WRITE: bw=18.5MiB/s (19.4MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s),
io=256MiB (269MB), run=13890-13890msec
after:
READ: bw=71.2MiB/s (74.7MB/s), 71.2MiB/s-71.2MiB/s (74.7MB/s-74.7MB/s),
io=3070MiB (3219MB), run=43096-43096msec
WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s),
io=1026MiB (1076MB), run=43096-43096msec

kevin with 5.6 kernel:
before:
READ: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s),
io=1534MiB (1609MB), run=118963-118963msec
WRITE: bw=4424KiB/s (4530kB/s), 4424KiB/s-4424KiB/s (4530kB/s-4530kB/s),
io=514MiB (539MB), run=118963-118963msec
after:
READ: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s),
io=1534MiB (1609MB), run=119364-119364msec
WRITE: bw=4409KiB/s (4515kB/s), 4409KiB/s-4409KiB/s (4515kB/s-4515kB/s),
io=514MiB (539MB), run=119364-119364msec

eve with nvme and 5.7 kernel:
before:
READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s),
io=3070MiB (3219MB), run=62195-62195msec
WRITE: bw=16.5MiB/s (17.3MB/s), 16.5MiB/s-16.5MiB/s (17.3MB/s-17.3MB/s),
io=1026MiB (1076MB), run=62195-62195msec
after:
READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s),
io=3070MiB (3219MB), run=24593-24593msec
WRITE: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s),
io=1026MiB (1076MB), run=24593-24593msec

rammus with 5.10 kernel:
before:
READ: bw=6927KiB/s (7093kB/s), 6927KiB/s-6927KiB/s (7093kB/s-7093kB/s),
io=3070MiB (3219MB), run=453822-453822msec
WRITE: bw=2315KiB/s (2371kB/s), 2315KiB/s-2315KiB/s (2371kB/s-2371kB/s),
io=1026MiB (1076MB), run=453822-453822msec
after:
READ: bw=11.0MiB/s (11.5MB/s), 11.0MiB/s-11.0MiB/s (11.5MB/s-11.5MB/s),
io=3070MiB (3219MB), run=279111-279111msec
WRITE: bw=3764KiB/s (3855kB/s), 3764KiB/s-3764KiB/s (3855kB/s-3855kB/s),
io=1026MiB (1076MB), run=279111-279111msec

BUG=chromium:901139
TEST=unit tests
TEST=boot a test image and run fio tests from the guest to measure speed.
TEST=start ARCVM
TEST=tast run $DUT crostini.ResizeOk.dlc_stretch_stable

Change-Id: Idb63628871d0352bd18501a69d9c1c887c37607b
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/2306786
Tested-by: Keiichi Watanabe <keiichiw@chromium.org>
Tested-by: kokoro <noreply+kokoro@google.com>
Reviewed-by: Dylan Reid <dgreid@chromium.org>
Reviewed-by: Chirantan Ekbote <chirantan@chromium.org>
Reviewed-by: Daniel Verkamp <dverkamp@chromium.org>
Commit-Queue: Keiichi Watanabe <keiichiw@chromium.org>
2021-02-17 04:11:55 +00:00