With this change, we start writing the incremental index to disk, so
the next reader won't have to re-read the commits and create the
index.
As of this change, we simply write a new index file for each
transaction. That will clearly mean that the stack of files gets deep
pretty quickly. For now, the user will have to do `jj debug reindex`
when things get slow. I plan to change it so that instead of writing an
incremental index file every time, we first check whether the new index
file would have at least as many commits as the parent file, and if it
would, we write a combined one instead. That should apply recursively,
so we'd have O(log n) index files.
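For illustration, the planned squashing rule could look roughly like
this sketch (the `IndexSegment` type and `squash_depth` helper are
hypothetical, not jj's actual code):
```rust
// Hypothetical sketch: each incremental index file records how many
// commits it adds on top of its parent file.
struct IndexSegment {
    num_local_commits: usize,
    parent: Option<Box<IndexSegment>>,
}

/// How many ancestor segments the new commits should be combined with:
/// keep absorbing parents while the data about to be written is at
/// least as large as the parent segment. Applying this on every write
/// keeps the chain of files at O(log n) length.
fn squash_depth(new_commits: usize, mut parent: Option<&IndexSegment>) -> usize {
    let mut accumulated = new_commits;
    let mut depth = 0;
    while let Some(segment) = parent {
        if accumulated < segment.num_local_commits {
            break;
        }
        accumulated += segment.num_local_commits;
        depth += 1;
        parent = segment.parent.as_deref();
    }
    depth
}
```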
I don't know why I made the walk stop at heads instead of at
already-indexed commits before. Perhaps I did it because it's cheap to
check membership in the set of heads. However, it gets very expensive
to walk all the way back to the root if the parents are not in the set
of heads.
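In other words, the walk should stop as soon as it reaches commits the
index already has. A minimal sketch of that, with closures standing in
for the real index and commit-store lookups:
```rust
use std::collections::HashSet;

type CommitId = String;

/// Collects the commits reachable from `start` that are not yet in the
/// index, stopping the walk at already-indexed commits instead of only
/// at heads. `is_indexed` and `parents` are placeholders for the real
/// lookups.
fn unindexed_ancestors(
    start: &[CommitId],
    is_indexed: impl Fn(&CommitId) -> bool,
    parents: impl Fn(&CommitId) -> Vec<CommitId>,
) -> Vec<CommitId> {
    let mut visited = HashSet::new();
    let mut result = Vec::new();
    let mut work: Vec<CommitId> = start.to_vec();
    while let Some(id) = work.pop() {
        if is_indexed(&id) || !visited.insert(id.clone()) {
            continue;
        }
        work.extend(parents(&id));
        result.push(id);
    }
    result
}
```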
With tons of groundwork done, we can now finally keep the index up to
date within a transaction! That means that we can start relying on the
index to always be valid, so we can use it e.g. for finding common
ancestors within a transaction. That should help speed up `jj evolve`
immensely on large repos.
We still don't write the updated index to disk when the transaction
closes. That will come later.
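As an illustration of what that unlocks (names here are illustrative,
not jj's API): with the index valid mid-transaction, common ancestors
can be computed from indexed ancestry alone rather than by walking the
commit store.
```rust
use std::collections::HashSet;

type CommitId = String;

/// Illustrative sketch: given an index-backed ancestor query, the
/// common ancestors of two commits are the intersection of their
/// ancestor sets (a real implementation would then keep only the heads
/// of that set).
fn common_ancestors(
    a: &CommitId,
    b: &CommitId,
    ancestors: impl Fn(&CommitId) -> HashSet<CommitId>,
) -> HashSet<CommitId> {
    ancestors(a).intersection(&ancestors(b)).cloned().collect()
}
```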
`Transaction::add_head()` currently invalidates the whole evolution
state. We've had support for incrementally updating evolution since
4619942a57. We should start taking advantage of that. Let's add a
fast-path in `Transaction::add_head()` for the common case where we
add a single commit on top of an existing head. That's cheap and simple
to check for. However, it won't cover the case of adding a child off
of a non-head. It's still a good start.
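A sketch of the kind of check the fast path needs (the types and names
here are placeholders, not the actual `Transaction` code):
```rust
use std::collections::HashSet;

type CommitId = String;

struct Commit {
    parents: Vec<CommitId>,
}

/// The cheap check for the common case: the commit being added has a
/// single parent and that parent is currently a head, i.e. we are just
/// extending an existing head by one commit.
fn is_single_child_of_head(new_head: &Commit, heads: &HashSet<CommitId>) -> bool {
    new_head.parents.len() == 1 && heads.contains(&new_head.parents[0])
}
```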
I want to keep the index updated within the transaction. I tried doing
that by adding a `trait Index`, implemented by `ReadonlyIndex` and
`MutableIndex`. However, `ReadonlyRepo::index` is of type
`Mutex<Option<Arc<IndexFile>>>` (because it is lazily initialized),
and from that we cannot get a `&dyn Index` that lives long enough to
be returned from `Repo::index()`. It seems the best solution is to
instead create an `Index` enum (instead of a trait), with one readonly
and one mutable variant. This commit starts the migration to that
design by replacing the `Repo` trait by an enum. I never intended for
there to be more implementations of `Repo` than `ReadonlyRepo`
and `MutableRepo` anyway.
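The planned enum could look roughly like this sketch (placeholder
types; the real `ReadonlyIndex` and `MutableIndex` have much more to
them):
```rust
use std::sync::Arc;

// Placeholders for the real index types.
struct ReadonlyIndex;
struct MutableIndex;

/// Sketch of the enum-based design: a single concrete type that is
/// either a shared readonly index or an owned mutable one, so callers
/// can match on the variant instead of needing a `&dyn Index` borrowed
/// out of the mutex.
enum Index {
    Readonly(Arc<ReadonlyIndex>),
    Mutable(MutableIndex),
}

impl Index {
    // Example of dispatching on the variant; real methods would
    // delegate to the underlying readonly file or in-memory index.
    fn is_mutable(&self) -> bool {
        matches!(self, Index::Mutable(_))
    }
}
```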
When a transaction gets dropped without being committed or explicitly
discarded, we currently raise an assertion error. I added that check
because I kept forgetting to commit transactions. However, it's quite
normal to want to drop transactions in error cases. The current
assertion means that we panic and don't report the actual error to the
user in such cases. We should probably audit the code paths where we
commit transactions and decide for each if we simply want to
discard the transaction or not. In some cases, we may want to commit
the transaction without integrating it in the operation log
(i.e. without creating a file entry in .jj/views/op_heads/). However,
we can do that later. For now, let's just make sure we don't panic
when dropping the transaction in release builds.
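One way to do that (a sketch, not necessarily the exact change) is to
demote the check to a debug assertion in `Drop`:
```rust
// Sketch: keep the "did you forget to commit?" check for development,
// but don't panic in release builds or while already unwinding from
// another panic.
struct Transaction {
    closed: bool,
}

impl Drop for Transaction {
    fn drop(&mut self) {
        if !std::thread::panicking() {
            debug_assert!(
                self.closed,
                "Transaction was dropped without being committed or discarded"
            );
        }
    }
}
```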
I'm about to move `MutableRepo` to the `repo` module so it becomes
more important to encapsulate access. Besides, the new functions
introduced in this commit reduce some duplication.
There's still one access of `MutableRepo::evolution` in
`Transaction::new()`. I'll address that next by adding a factory
function to `MutableRepo`.
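The factory function would be something like this sketch (types and
fields are placeholders):
```rust
// Placeholders for the real types.
struct ReadonlyRepo;
struct MutableEvolution;

struct MutableRepo {
    evolution: MutableEvolution,
}

impl MutableRepo {
    // Sketch of the planned factory: build the mutable repo and its
    // evolution state in one place, so `Transaction::new()` no longer
    // needs to touch `MutableRepo::evolution` directly.
    fn new(_base: &ReadonlyRepo) -> MutableRepo {
        MutableRepo {
            evolution: MutableEvolution,
        }
    }
}
```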
I've been confused twice that rebasing an open commit such that it
results in conflicts doesn't show the conflicts in the log output.
That's because, if the commit with conflicts is open, we create a
successor instead of a child commit. I guess I thought it would be
expected that a child commit was not created. Since it seems
surprising in practice, let's change it and
we'll see if the new behavior is more or less surprising.
Mercurial's "phase" concept is important for evolution, and it's also
useful for filtering out uninteresting commits from log
output. Commits are typically marked "public" when they are pushed to
a remote. The CLI prevents public commits from being rewritten. Public
commits cannot be obsolete (even if they have a successor, they won't
be considered obsolete like non-public commits would).
This commit just makes space for tracking the public heads in the
View.
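A sketch of what that space looks like (field names are illustrative):
```rust
use std::collections::HashSet;

type CommitId = String;

// Sketch: the view tracks public heads alongside the regular heads.
// This commit only adds the storage; nothing marks commits public yet.
struct View {
    head_ids: HashSet<CommitId>,
    public_head_ids: HashSet<CommitId>,
}

impl View {
    fn add_public_head(&mut self, id: &CommitId) {
        self.public_head_ids.insert(id.clone());
    }

    fn remove_public_head(&mut self, id: &CommitId) {
        self.public_head_ids.remove(id);
    }
}
```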
Git refs are important at least for understanding where the remote
branches are. This commit adds support for tracking them in the view
and makes `git::import_refs()` update them.
When merging views (either because of concurrent operations or when
undoing an earlier operation), there can be conflicts between git ref
changes. I ignored that for now and let the later operation win. That
will probably be good enough for a while. It's not hard to detect the
conflicts, but I haven't yet decided how to handle them. I'm leaning
towards representing the conflicting refs in the view just like how we
represent conflicting files in the tree.
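For illustration, the "later operation wins" resolution could look
like this sketch (a three-way merge over maps from ref name to commit
id; the shape and names are assumptions):
```rust
use std::collections::BTreeMap;

type CommitId = String;

/// Sketch: merge git refs three-way, but wherever both sides changed
/// the same ref, silently take the later operation's value instead of
/// recording a conflict.
fn merge_git_refs(
    base: &BTreeMap<String, CommitId>,
    earlier: &BTreeMap<String, CommitId>,
    later: &BTreeMap<String, CommitId>,
) -> BTreeMap<String, CommitId> {
    let mut result = earlier.clone();
    // Apply every change the later operation made relative to the
    // base, overwriting whatever the earlier operation did to the same
    // ref.
    for (name, target) in later {
        if base.get(name) != Some(target) {
            result.insert(name.clone(), target.clone());
        }
    }
    // Refs the later operation deleted relative to the base also win.
    for name in base.keys() {
        if !later.contains_key(name) {
            result.remove(name);
        }
    }
    result
}
```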
I've forgotten to close a transaction a few times, and while the
message ('assertion failed: self.closed') is clear to me now, it
probably won't be clear to others or to me in the future.
This extracts a bit from `Transaction::check_out()` for taking a
Conflict, materializing it, and writing the resulting plain file to
the store. It will soon be reused.
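The extracted helper boils down to something like this sketch (the
conflict and store types are placeholders for the real ones):
```rust
use std::io::Write;

// Placeholders for the real conflict/store types.
struct Conflict;
struct FileId(Vec<u8>);

/// Sketch of the extracted step: render the conflict with conflict
/// markers into a buffer, write that buffer to the store as an
/// ordinary file, and return the new file's id. `materialize` and
/// `write_file` stand in for the real materialization and store-write
/// functions.
fn write_conflict_to_store(
    conflict: &Conflict,
    materialize: impl Fn(&Conflict, &mut dyn Write),
    write_file: impl Fn(&[u8]) -> FileId,
) -> FileId {
    let mut buf: Vec<u8> = Vec::new();
    materialize(conflict, &mut buf);
    write_file(&buf)
}
```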
We currently recalculate the entire evolution state whenever a new
commit is added within a transaction. That's clearly wasteful. This
commit makes the state-update incremental.
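Conceptually, the incremental update only has to fold in what the new
commit contributes, roughly like this sketch (names are illustrative,
not the real evolution code):
```rust
use std::collections::{HashMap, HashSet};

type CommitId = String;

// Sketch: instead of rebuilding the whole state from scratch, record
// just the edges the new commit adds (which predecessors it has, and
// therefore which commits it obsoletes).
#[derive(Default)]
struct EvolutionState {
    successors: HashMap<CommitId, HashSet<CommitId>>,
}

impl EvolutionState {
    fn add_commit(&mut self, commit_id: &CommitId, predecessor_ids: &[CommitId]) {
        for predecessor_id in predecessor_ids {
            self.successors
                .entry(predecessor_id.clone())
                .or_default()
                .insert(commit_id.clone());
        }
    }
}
```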