[package]
name = "testutils"
description = "Integration test utils for the jj-lib crate"
publish = false
version = { workspace = true }
edition = { workspace = true }
rust-version = { workspace = true }
license = { workspace = true }
homepage = { workspace = true }
repository = { workspace = true }
documentation = { workspace = true }
readme = { workspace = true }
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# `async-trait` came in with the commit "backend: make read functions async",
# whose message explains the design:
#
# The commit backend at Google is cloud-based (and so are the other backends);
# it reads and writes commits from/to a server, which stores them in a
# database. That makes latency much higher than for disk-based backends. To
# reduce the latency, we have a local daemon process that caches and
# prefetches objects. There are still many cases where latency is high, such
# as when diffing two uncached commits. We can improve that by changing some
# of our (jj's) algorithms to read many objects concurrently from the backend.
# In the case of tree-diffing, we can fetch one level (depth) of the tree at a
# time. There are several ways of doing that:
#
# * Make the backend methods `async`
# * Use many threads for reading from the backend
# * Add backend methods for batch reading
#
# I don't think we typically need CPU parallelism, so it's wasteful to have
# hundreds of threads running just to fetch hundreds of objects in parallel
# (especially when using a synchronous backend like the Git backend). Batching
# would work well for the tree-diffing case, but it's not as composable as
# `async`: if we wanted to fetch some commits at the same time as we were
# doing a diff, it's hard to see how to do that with batching. Using async
# seems like our best bet.
#
# I didn't make the backend interface's write functions async because writes
# are already async with the daemon we have at Google: that daemon hashes the
# object and returns immediately, then sends the object to the server in the
# background. I think any cloud-based solution will need a similar daemon
# process. However, we may need to reconsider this if/when jj gets used on a
# server with a custom backend that writes directly to a database (i.e. no
# async daemon in between).
#
# The largest performance difference I've been able to measure was on
# `jj diff --ignore-working-copy -s --from v5.0 --to v6.0` in the Linux repo,
# which increased from 749 ms to 773 ms (3.3%). In most cases I've tested,
# there's no measurable difference; I also tried diffing from the root commit
# and `jj --ignore-working-copy log --no-graph -r '::v3.0 & author(torvalds)'
# -T 'commit_id ++ "\n"'` (a commit-heavy load).
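#
# As a rough illustration only (hypothetical names such as `read_tree`,
# `read_level`, `TreeId`, and the `futures` crate's `try_join_all`; not
# jj-lib's exact API), the pattern this dependency enables is an async read
# interface whose calls can be awaited concurrently, e.g. fetching all
# subtrees of one level in parallel:
#
#     use async_trait::async_trait;
#     use futures::future::try_join_all;
#
#     #[async_trait]
#     trait Backend: Send + Sync {
#         async fn read_tree(&self, id: &TreeId) -> Result<Tree, BackendError>;
#     }
#
#     // Issue one read per id; try_join_all polls the futures concurrently,
#     // so a networked backend can overlap the requests.
#     async fn read_level(
#         backend: &dyn Backend,
#         ids: &[TreeId],
#     ) -> Result<Vec<Tree>, BackendError> {
#         try_join_all(ids.iter().map(|id| backend.read_tree(id))).await
#     }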
async-trait = { workspace = true }
config = { workspace = true }
git2 = { workspace = true }
hex = { workspace = true }
itertools = { workspace = true }
jj-lib = { workspace = true }
rand = { workspace = true }
tempfile = { workspace = true }