The idea is to (ab)use the pest::error::Error type to pretty-print error
messages with a code span. pest::error::Error will be constructed upfront
to erase the lifetime from Span<'i>/Position<'i>.
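A minimal sketch of that conversion (the helper name and generic bound are
illustrative, not jj's actual code):

```rust
use pest::error::{Error, ErrorVariant};
use pest::Span;

// Hypothetical helper: the returned Error owns its line/column snippet,
// so the 'i lifetime of the Span doesn't leak into our error type.
fn syntax_error<R: pest::RuleType>(message: &str, span: Span<'_>) -> Error<R> {
    Error::new_from_span(
        ErrorVariant::CustomError {
            message: message.to_owned(),
        },
        span,
    )
}
```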
Baby step towards embedding a matcher in RevsetExpression. If we had a fileset
language or regex pattern, we would probably want to parse it at this stage
so that syntax errors can be reported without evaluation.
If you remove all refs from the backing Git repo and then run `jj git
import`, we see that all commits have disappeared from the Git repo,
so we remove them from the jj repo too. However, we do that by doing
a history walk from the old heads to the new heads, which includes
the root commit when the new heads are an empty set. That means that
we mark the root commit as abandoned, which leads to a crash in
`rewrite.rs` (when we try to pick the root commit's first parent to
use as the parent for rebased commits).
I was trying to create a reproduction script for #412, but the script
ran into another bug first. The script removed all the local and
remote branches from the backing Git repo. I noticed that we would
then try to abandon all commits. We should still count Git HEAD's
target as visible and not try to abandon it. This patch fixes that.
Sometimes a tagged commit is not in any branch on the remote, but we
should still consider it upstream and not include it in the default
log template.
This was reported by @colemickens but now that I think about it, I
remember seeing such commits in the Git core repo (v1.4.4.5 and a few
commits before it were never merged into main).
We don't have a good way of testing this because we don't have a
command for creating tags.
Closes #681
The function has only one caller since 25b922cd0b and it's pretty
small. Inlining also means we can reuse the joined paths created in
it, so I did that by extracting variables for them.
Since 'merges()' just filters the candidates set per item, it doesn't need
a candidates argument. Perhaps 'merges(x)' could be a predicate to select
merge commits within a subgraph 'x', but I don't know if that would be
useful.
Since 'filter(expr)' is identical to 'expr & filter()', it can be rewritten
in order to minimize the candidates set to be scanned.
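A rough illustration of that rewrite, with a hypothetical, much-simplified
expression type (jj's real RevsetExpression has many more variants):

```rust
#[derive(Clone, Debug, PartialEq)]
enum Expr {
    All,
    Symbol(String),
    // A filter predicate (e.g. author(...)) scanned over a candidates set.
    Filter { candidates: Box<Expr>, predicate: String },
    Intersection(Box<Expr>, Box<Expr>),
}

// Rewrite 'filter(expr)' into 'expr & filter()' so that later steps can
// shrink the set the filter actually has to scan.
fn hoist_filter(expr: Expr) -> Expr {
    match expr {
        Expr::Filter { candidates, predicate } if *candidates != Expr::All => {
            Expr::Intersection(
                Box::new(hoist_filter(*candidates)),
                Box::new(Expr::Filter {
                    candidates: Box::new(Expr::All),
                    predicate,
                }),
            )
        }
        Expr::Intersection(lhs, rhs) => Expr::Intersection(
            Box::new(hoist_filter(*lhs)),
            Box::new(hoist_filter(*rhs)),
        ),
        other => other,
    }
}
```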
The implementation is somewhat similar to revsetlang.optimize() of Mercurial,
but I've split the tree rewriting logic into several steps. I think that's
good for maintainability and should help us conclude that the recursion will
eventually terminate.
This helps match '(filter, _) | (_, filter)' to rewrite the expression
tree. Only one predicate is allowed for now, but I think it can be extended
to internalize 'f(c) & g(c)' as '(g*f)(c)' to eliminate redundant lookups
of commit objects.
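Continuing the hypothetical Expr sketch above, the '(filter, _) | (_, filter)'
shape can be matched with an or-pattern roughly like this (illustrative only;
it assumes the filter's own candidates set is already the full set after the
previous rewrite step):

```rust
// If either side of an intersection is a bare filter, fold the other side
// in as the filter's candidates set; return None if neither side matches.
fn fold_filter_intersection(lhs: Expr, rhs: Expr) -> Option<Expr> {
    match (lhs, rhs) {
        (Expr::Filter { predicate, .. }, other)
        | (other, Expr::Filter { predicate, .. }) => Some(Expr::Filter {
            candidates: Box::new(other),
            predicate,
        }),
        _ => None,
    }
}
```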
I had `init.defaultBranch = main` in my global config (just being
rolled out internally at Google, it seems), which made
`test_import_refs_reimport_head_removed()` and
`test_fetch_initial_commit()` fail. This fixes it.
Since d56ae79d3f, `WorkingCopy` no longer reads `.gitignore` files
directly from `$HOME/.gitignore`, so we don't need the workaround to
prevent that in the tests.
More workspace-derived parameters will be added, and I don't think wrapping
each in an Option makes sense because all parameters should be available
if the workspace exists.
It seems a bit invasive that the RepoPath constructor processes environment
state like the cwd, but we need an unmodified input string to build a
readable error. The error could be rewrapped at the CLI boundary, but I don't
think it would be worth inserting indirection just for that.
I made the s/file_path/fs_path/ change because there's already a to_fs_path()
function, and "file path" in the RepoPath context may be ambiguous.
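A rough sketch of the shape of that constructor (hypothetical names and
deliberately simplified normalization, just to show why the unmodified input
string is kept around for the error):

```rust
use std::path::{Path, PathBuf};

// Hypothetical error type that keeps the user's original input so the
// message stays readable even though the path was resolved against the cwd.
#[derive(Debug)]
struct FsPathParseError {
    input: String,
}

// Resolve a user-supplied path against the cwd and make it relative to the
// workspace root. Real normalization (e.g. "..") is more involved.
fn parse_fs_path(
    cwd: &Path,
    workspace_root: &Path,
    input: &str,
) -> Result<PathBuf, FsPathParseError> {
    let abs_path = cwd.join(input);
    let relative = abs_path
        .strip_prefix(workspace_root)
        .map_err(|_| FsPathParseError {
            input: input.to_owned(),
        })?;
    Ok(relative.to_path_buf())
}
```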
The native backend is just a proof of concept and there's no real
reason to use it other than for testing, so let's reduce the risk of
accidentally creating repos using it.
Previously, an expression 'foo-bar-' failed to parse because
1. the first rule is tried: 'foo-bar-' matches (identifier_part+ ~ '-')+, but
   the trailing '' doesn't match identifier_part+
2. the parser falls back to the second rule: 'foo' matches identifier_part+
   => (identifier 'foo')
Instead, we need to consume as much (identifier_part ~ '-' ~ ...) as possible
before falling back to the identifier_part rule.
I think the trailing + of identifier_part+ is redundant, so I removed it as
well.
I had intended for the `features = ["toml"]` to enable *only* the
`toml` feature, but I forgot to disable the default features so it had
no effect. This shrinks the binary from 17.4 MiB to 16.8 MiB.
Let WorkspaceCommandHelper clone it. WorkspaceCommandHelper could return
workspace_id by reference, but doing that would introduce noisy .clone()
calls and a lifetime mess.
For consistency with the tree_state handling. This isn't as simple and
concise as the tree_state one, so I'm fine with dropping the series.
I've extracted the {operation_id, workspace_id} pair so these values can be
safely initialized by OnceCell. The extracted struct is named after the
"checkout" file.
Here OnceCell<T> serves as RefCell<Option<T>>, but it doesn't require a
runtime Ref/RefMut wrapper. This allows us to get rid of some .clone() calls
needed to hide Ref<_> from the public interface.
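A small sketch of the pattern, assuming the once_cell crate and illustrative
struct/field names rather than the actual jj types:

```rust
use once_cell::unsync::OnceCell;

// Illustrative stand-in for the state stored in the "checkout" file.
struct Checkout {
    operation_id: String,
    workspace_id: String,
}

struct WorkingCopyState {
    checkout: OnceCell<Checkout>,
}

impl WorkingCopyState {
    // Lazily initialize once, then hand out a plain &Checkout; there is no
    // Ref<'_> guard to hide from callers and no .clone() needed to do so.
    fn checkout(&self) -> &Checkout {
        self.checkout.get_or_init(|| Checkout {
            // In the real code these would be read from the "checkout" file.
            operation_id: "dummy-op-id".to_owned(),
            workspace_id: "default".to_owned(),
        })
    }
}
```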
I'll make TreeState::snapshot() return a boolean denoting whether the
tree_state is updated or not. The new tree_id can be obtained from TreeState,
so let's remove it from the return value.
While making tree_state() return RefMut<TreeState> instead of RefMut<Option<_>>,
I felt uncomfortable that tree_state(&self) returned a mutable reference. So
this patch splits it into tree_state() and tree_state_mut().
I was reading a draft of "Git Rev News: Edition 91" [1] where Peff
mentions some unfinished patches to allow negative timestamps in
Git. So I figured I should add support for that before I forget. I
haven't checked if libgit2 supports it, so it might be that our Git
backend still doesn't support it after this patch.
[1] https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-91.md
When the remote rejects a push, we now say something like this:
```
Remote rejected the update of some refs (do you have permission to push to ["refs/heads/main"]?)
```
That message could be formatted better, but this seems good enough for
now. We should probably have some more custom conversion from
`GitPushError` to `CommandError` in the CLI layer.
This changes `RepoLoader` to take a map of functions that load a
specific type of backend, keyed by the backend type. The backend type
is read from `.jj/repo/store/backend`.
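Roughly, the idea is a map from backend name to a factory function, something
like this sketch (hypothetical trait and signatures, not jj's actual API):

```rust
use std::collections::HashMap;
use std::io;
use std::path::Path;

// Stand-in for jj's backend trait.
trait Backend {}

// A factory creates a backend instance given the store path.
type BackendFactory = Box<dyn Fn(&Path) -> Box<dyn Backend>>;

fn load_backend(
    store_path: &Path,
    factories: &HashMap<String, BackendFactory>,
) -> io::Result<Box<dyn Backend>> {
    // The backend type is recorded in `.jj/repo/store/backend`.
    let name = std::fs::read_to_string(store_path.join("backend"))?;
    let factory = factories.get(name.trim()).ok_or_else(|| {
        io::Error::new(
            io::ErrorKind::Other,
            format!("unknown backend type: {}", name.trim()),
        )
    })?;
    Ok(factory(store_path))
}
```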
We currently determine if the repo uses the Git backend or the local
backend by checking for the presence of a `.jj/repo/store/git_target`
file. To make it easier to add out-of-tree backends, let's instead add
a file that indicates which backend to use.
The `ReadonlyRepo::init_*()` functions were unused or used only in
tests. Let's remove them, thereby making the repo less aware of
specific backend implementations.
By calling `repo.loader()` on the existing repo, we reuse the same
commit store etc. That should make it easier to add out-of-tree
backends by removing one place where we'd otherwise need to pass in a
way of creating a backend (now we reuse the instance that was already
created for the existing `repo`).
In many of these places, we don't need an owned value, so using a
reference means we don't force the caller to clone the value. I really
doubt it will have any noticeable impact on performance (I think these
are all once-per-repo paths); it's just a little simpler this way.
I feel it doesn't make sense for a simple getter function to create an
owned vec after 0108673087 "backend: let each backend handle root commit
on write."
This moves the logic for handling the root commit when writing commits
from `CommitBuilder` into the individual backends. It always bothered
me a bit that the `commit::Commit` wrapper had a different idea of the
number of parents than the wrapped `backend::Commit` had.
With this change, the `LocalBackend` will now write the root commit in
the list of parents if it's there in the argument to
`write_commit()`. Note that the root commit itself won't be written. The
main argument for not writing it is that we can then keep the fake
all-zeros hash for it. One argument for writing it, if we were to do
so, is that it would make the set of written objects consistent, so
any future processing of them (such as GC) doesn't have to know to
ignore the root commit in the list of parents.
We still treat the two backends the same, so the user won't be allowed
to create merges including the root commit even when using the
`LocalBackend`.
I had made the backends unaware of the virtual root commit because
they don't need to know about it, and we could avoid some duplicated
code by putting that in `Store` instead. However, as we saw in
b21a123bc8, the root commit being virtual has some user-visible
effects (they can't create a merge with the root and some other
commit). So I'm thinking that we may want to make the root commit an
actual commit, depending on which backend is used. Specifically, when
using the Git backend, we cannot record the root commit as an actual
parent since Git would fail when trying to look it up. Backends that
don't need compatibility can make the root commit an actual commit,
however.
This commit therefore makes the backends aware of the root commit. It
makes it remain a virtual commit in the Git backend, and makes it an
actual commit in the `LocalBackend`.
This commit breaks any existing repos using the `LocalBackend`, but
there shouldn't be any such repos other than for testing.
I feel the original -------/+++++++ pair is slightly confusing because
each half can be a separator by itself. I don't know what character other
than '-'/'+' is preferred, but let's pick '%' (for "mod") per @martinvonz's
suggestion.
`wc_commit` seems clearer than `checkout` and not too much longer. I
considered `working_copy` but it was less clear (could be the path to
the working copy, or an instance of `WorkingCopy`). I also considered
`working_copy_commit`, but that seems a bit too long.
This will be a basic building block of 'jj log PATH'. The implementation
is naive, but works fine for small repos like jj. For mid-size repos,
there are various areas that would need to be optimized.