Mirror of https://github.com/zed-industries/zed.git (synced 2025-01-13 05:42:59 +00:00)
Commit 4431ef1870
This pull request introduces an index of Unicode code points, newlines, and UTF-16 code units (a rough sketch of the idea follows the benchmark results below). Benchmarks are worth a thousand words:

```
push/4096               time:   [467.06 µs 470.07 µs 473.24 µs]
                        thrpt:  [8.2543 MiB/s 8.3100 MiB/s 8.3635 MiB/s]
                 change:
                        time:   [-4.1462% -3.0990% -2.0527%] (p = 0.00 < 0.05)
                        thrpt:  [+2.0957% +3.1981% +4.3255%]
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild

push/65536              time:   [1.4650 ms 1.4796 ms 1.4922 ms]
                        thrpt:  [41.885 MiB/s 42.242 MiB/s 42.664 MiB/s]
                 change:
                        time:   [-3.2871% -2.3489% -1.4555%] (p = 0.00 < 0.05)
                        thrpt:  [+1.4770% +2.4054% +3.3988%]
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  3 (3.00%) low severe
  3 (3.00%) low mild

append/4096             time:   [729.00 ns 730.57 ns 732.14 ns]
                        thrpt:  [5.2103 GiB/s 5.2215 GiB/s 5.2327 GiB/s]
                 change:
                        time:   [-81.884% -81.836% -81.790%] (p = 0.00 < 0.05)
                        thrpt:  [+449.16% +450.53% +452.01%]
                        Performance has improved.
Found 11 outliers among 100 measurements (11.00%)
  3 (3.00%) low mild
  6 (6.00%) high mild
  2 (2.00%) high severe

append/65536            time:   [504.44 ns 505.58 ns 506.77 ns]
                        thrpt:  [120.44 GiB/s 120.72 GiB/s 121.00 GiB/s]
                 change:
                        time:   [-94.833% -94.807% -94.782%] (p = 0.00 < 0.05)
                        thrpt:  [+1816.3% +1825.8% +1835.5%]
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

slice/4096              time:   [29.661 µs 29.733 µs 29.816 µs]
                        thrpt:  [131.01 MiB/s 131.38 MiB/s 131.70 MiB/s]
                 change:
                        time:   [-48.833% -48.533% -48.230%] (p = 0.00 < 0.05)
                        thrpt:  [+93.161% +94.298% +95.440%]
                        Performance has improved.

slice/65536             time:   [588.00 µs 590.22 µs 592.17 µs]
                        thrpt:  [105.54 MiB/s 105.89 MiB/s 106.29 MiB/s]
                 change:
                        time:   [-45.599% -45.347% -45.099%] (p = 0.00 < 0.05)
                        thrpt:  [+82.147% +82.971% +83.821%]
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) low severe
  1 (1.00%) high mild

bytes_in_range/4096     time:   [3.8630 µs 3.8811 µs 3.8994 µs]
                        thrpt:  [1001.8 MiB/s 1006.5 MiB/s 1011.2 MiB/s]
                 change:
                        time:   [+0.0600% +0.6000% +1.1833%] (p = 0.03 < 0.05)
                        thrpt:  [-1.1695% -0.5964% -0.0600%]
                        Change within noise threshold.

bytes_in_range/65536    time:   [98.178 µs 98.545 µs 98.931 µs]
                        thrpt:  [631.75 MiB/s 634.23 MiB/s 636.60 MiB/s]
                 change:
                        time:   [-0.6513% +0.7537% +2.2265%] (p = 0.30 > 0.05)
                        thrpt:  [-2.1780% -0.7481% +0.6555%]
                        No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
  8 (8.00%) high mild
  3 (3.00%) high severe

chars/4096              time:   [878.91 ns 879.45 ns 880.06 ns]
                        thrpt:  [4.3346 GiB/s 4.3376 GiB/s 4.3403 GiB/s]
                 change:
                        time:   [+9.1679% +9.4000% +9.6304%] (p = 0.00 < 0.05)
                        thrpt:  [-8.7844% -8.5923% -8.3979%]
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  1 (1.00%) low severe
  1 (1.00%) low mild
  3 (3.00%) high mild
  3 (3.00%) high severe

chars/65536             time:   [15.615 µs 15.691 µs 15.757 µs]
                        thrpt:  [3.8735 GiB/s 3.8899 GiB/s 3.9087 GiB/s]
                 change:
                        time:   [+5.4902% +5.9345% +6.4044%] (p = 0.00 < 0.05)
                        thrpt:  [-6.0190% -5.6021% -5.2045%]
                        Performance has regressed.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) low mild

clip_point/4096         time:   [29.677 µs 29.835 µs 30.019 µs]
                        thrpt:  [130.13 MiB/s 130.93 MiB/s 131.63 MiB/s]
                 change:
                        time:   [-46.306% -45.866% -45.436%] (p = 0.00 < 0.05)
                        thrpt:  [+83.272% +84.728% +86.240%]
                        Performance has improved.
Found 11 outliers among 100 measurements (11.00%)
  3 (3.00%) high mild
  8 (8.00%) high severe

clip_point/65536        time:   [1.5933 ms 1.6116 ms 1.6311 ms]
                        thrpt:  [38.318 MiB/s 38.782 MiB/s 39.226 MiB/s]
                 change:
                        time:   [-30.388% -29.598% -28.717%] (p = 0.00 < 0.05)
                        thrpt:  [+40.286% +42.040% +43.653%]
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  3 (3.00%) high mild

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 7 filtered out; finished in 0.00s

point_to_offset/4096    time:   [14.493 µs 14.591 µs 14.707 µs]
                        thrpt:  [265.61 MiB/s 267.72 MiB/s 269.52 MiB/s]
                 change:
                        time:   [-71.990% -71.787% -71.588%] (p = 0.00 < 0.05)
                        thrpt:  [+251.96% +254.45% +257.01%]
                        Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
  5 (5.00%) high mild
  4 (4.00%) high severe

point_to_offset/65536   time:   [700.72 µs 713.75 µs 727.26 µs]
                        thrpt:  [85.939 MiB/s 87.566 MiB/s 89.194 MiB/s]
                 change:
                        time:   [-61.778% -61.015% -60.256%] (p = 0.00 < 0.05)
                        thrpt:  [+151.61% +156.51% +161.63%]
                        Performance has improved.
```

Calling `Rope::chars` got slightly slower, but I don't think it's a big issue (we don't really call `chars` on an entire `Rope`). In a future pull request, I want to use the tab index (which we're not using yet) and the char index to make `TabMap` a lot faster.

Release Notes:

- N/A
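To make the description above concrete, here is a minimal, self-contained sketch of the idea behind such an index. It is not the implementation in this pull request: `ChunkSummary`, the free-standing `point_to_offset`, and the flat `Vec` of chunks are hypothetical stand-ins. The point it illustrates is that caching per-chunk counts of bytes, chars, newlines, and UTF-16 code units lets a coordinate conversion skip whole chunks in O(1) and scan at most one chunk of text, which is the kind of win the `point_to_offset` and `clip_point` numbers above reflect.

```rust
/// Hypothetical per-chunk summary (not Zed's actual type): each chunk caches
/// how many bytes, chars, newlines, and UTF-16 code units it contains.
#[derive(Clone, Copy, Default)]
struct ChunkSummary {
    bytes: usize,
    chars: usize,
    newlines: usize,
    utf16_units: usize,
}

impl ChunkSummary {
    fn from_text(text: &str) -> Self {
        let mut summary = Self {
            bytes: text.len(),
            ..Self::default()
        };
        for ch in text.chars() {
            summary.chars += 1;
            summary.utf16_units += ch.len_utf16();
            if ch == '\n' {
                summary.newlines += 1;
            }
        }
        summary
    }
}

/// Convert a (row, column-in-bytes) point to a byte offset. Chunks whose
/// cached newline count shows they end before the target row are skipped
/// without looking at their text; only the chunk containing the row is scanned.
fn point_to_offset(chunks: &[(String, ChunkSummary)], row: usize, column: usize) -> usize {
    let mut offset = 0;
    let mut rows_remaining = row;
    for (text, summary) in chunks {
        if rows_remaining > summary.newlines {
            // The target row starts after this chunk: skip it in O(1).
            rows_remaining -= summary.newlines;
            offset += summary.bytes;
        } else {
            // The target row starts inside this chunk: scan just this chunk.
            let mut chunk_offset = 0;
            for _ in 0..rows_remaining {
                chunk_offset += text[chunk_offset..].find('\n').unwrap() + 1;
            }
            return offset + chunk_offset + column;
        }
    }
    offset
}

fn main() {
    let chunks: Vec<(String, ChunkSummary)> = ["alpha\nbeta\n", "gamma\ndelta"]
        .into_iter()
        .map(|text| (text.to_string(), ChunkSummary::from_text(text)))
        .collect();

    // Row 2 ("gamma") starts at byte offset 11; column 3 lands on its second 'm'.
    assert_eq!(point_to_offset(&chunks, 2, 3), 14);

    let (_, first) = &chunks[0];
    println!(
        "chunk 0: {} bytes, {} chars, {} newlines, {} UTF-16 units",
        first.bytes, first.chars, first.newlines, first.utf16_units
    );
}
```

The real rope keeps its chunks in a sum tree rather than a flat `Vec` (note the `sum_tree::Bias` usage in the benchmark below), so the skipping is logarithmic rather than linear in the number of chunks.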
196 lines · 6.1 KiB · Rust
use std::ops::Range;

use criterion::{
    black_box, criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion, Throughput,
};
use rand::prelude::*;
use rand::rngs::StdRng;
use rope::{Point, Rope};
use sum_tree::Bias;
use util::RandomCharIter;

/// Generates `text_len` random characters from the seeded RNG.
fn generate_random_text(mut rng: StdRng, text_len: usize) -> String {
    RandomCharIter::new(&mut rng).take(text_len).collect()
}

/// Builds a rope containing `text_len` random characters.
fn generate_random_rope(rng: StdRng, text_len: usize) -> Rope {
    let text = generate_random_text(rng, text_len);
    let mut rope = Rope::new();
    rope.push(&text);
    rope
}

/// Produces short, non-empty, clipped byte ranges spread across the rope.
fn generate_random_rope_ranges(mut rng: StdRng, rope: &Rope) -> Vec<Range<usize>> {
    let range_max_len = 50;
    let num_ranges = rope.len() / range_max_len;

    let mut ranges = Vec::new();
    let mut start = 0;
    for _ in 0..num_ranges {
        let range_start = rope.clip_offset(
            rng.gen_range(start..=(start + range_max_len)),
            sum_tree::Bias::Left,
        );
        let range_end = rope.clip_offset(
            rng.gen_range(range_start..(range_start + range_max_len)),
            sum_tree::Bias::Right,
        );

        let range = range_start..range_end;
        if !range.is_empty() {
            ranges.push(range);
        }

        start = range_end + 1;
    }

    ranges
}

/// Samples random offsets in the rope and converts them to `Point`s.
fn generate_random_rope_points(mut rng: StdRng, rope: &Rope) -> Vec<Point> {
    let num_points = rope.len() / 10;

    let mut points = Vec::new();
    for _ in 0..num_points {
        points.push(rope.offset_to_point(rng.gen_range(0..rope.len())));
    }
    points
}

fn rope_benchmarks(c: &mut Criterion) {
    static SEED: u64 = 9999;
    static KB: usize = 1024;

    let rng = StdRng::seed_from_u64(SEED);
    let sizes = [4 * KB, 64 * KB];

    // Pushing text into a fresh rope.
    let mut group = c.benchmark_group("push");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let text = generate_random_text(rng.clone(), *size);

            b.iter(|| {
                let mut rope = Rope::new();
                for _ in 0..10 {
                    rope.push(&text);
                }
            });
        });
    }
    group.finish();

    // Appending whole ropes to one another.
    let mut group = c.benchmark_group("append");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let mut random_ropes = Vec::new();
            for _ in 0..5 {
                random_ropes.push(generate_random_rope(rng.clone(), *size));
            }

            b.iter(|| {
                let mut rope_b = Rope::new();
                for rope in &random_ropes {
                    rope_b.append(rope.clone())
                }
            });
        });
    }
    group.finish();

    // Slicing random ranges out of a rope.
    let mut group = c.benchmark_group("slice");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let rope = generate_random_rope(rng.clone(), *size);

            b.iter_batched(
                || generate_random_rope_ranges(rng.clone(), &rope),
                |ranges| {
                    for range in ranges.iter() {
                        rope.slice(range.clone());
                    }
                },
                BatchSize::SmallInput,
            );
        });
    }
    group.finish();

    // Iterating over the bytes in random ranges.
    let mut group = c.benchmark_group("bytes_in_range");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let rope = generate_random_rope(rng.clone(), *size);

            b.iter_batched(
                || generate_random_rope_ranges(rng.clone(), &rope),
                |ranges| {
                    for range in ranges.iter() {
                        let bytes = rope.bytes_in_range(range.clone());
                        assert!(bytes.into_iter().count() > 0);
                    }
                },
                BatchSize::SmallInput,
            );
        });
    }
    group.finish();

    // Iterating over every char in the rope.
    let mut group = c.benchmark_group("chars");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let rope = generate_random_rope(rng.clone(), *size);

            b.iter_with_large_drop(|| {
                let chars = rope.chars().count();
                assert!(chars > 0);
            });
        });
    }
    group.finish();

    // Clipping random points against the rope in both directions.
    let mut group = c.benchmark_group("clip_point");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let rope = generate_random_rope(rng.clone(), *size);

            b.iter_batched(
                || generate_random_rope_points(rng.clone(), &rope),
                |points| {
                    for point in points.iter() {
                        black_box(rope.clip_point(*point, Bias::Left));
                        black_box(rope.clip_point(*point, Bias::Right));
                    }
                },
                BatchSize::SmallInput,
            );
        });
    }
    group.finish();

    // Converting random points back into byte offsets.
    let mut group = c.benchmark_group("point_to_offset");
    for size in sizes.iter() {
        group.throughput(Throughput::Bytes(*size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &size| {
            let rope = generate_random_rope(rng.clone(), *size);

            b.iter_batched(
                || generate_random_rope_points(rng.clone(), &rope),
                |points| {
                    for point in points.iter() {
                        black_box(rope.point_to_offset(*point));
                    }
                },
                BatchSize::SmallInput,
            );
        });
    }
    group.finish();
}

criterion_group!(benches, rope_benchmarks);
criterion_main!(benches);
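For reference, Criterion benchmarks like these are normally driven through Cargo, e.g. `cargo bench -p rope` from the repository root (assuming the bench target is registered in the `rope` crate's `Cargo.toml`), optionally with a filter such as `cargo bench -p rope -- point_to_offset` to run a single group.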