We were seeing non-deterministic behavior in randomized tests: when
generating backtraces took long enough, transactions would group in
some runs but not in others.
Tests that want grouping now need to opt in by setting the group
interval explicitly. We have tests in the text module that exercise
history grouping directly, but I'm not sure it's needed elsewhere.
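To make the failure mode concrete, here is a toy model of interval-based
grouping (simplified types, not the actual ones in the text crate):
whether two transactions group depends on the wall-clock gap between
them, so slow work in between, like capturing a backtrace, can flip the
outcome.

```rust
use std::time::{Duration, Instant};

// Toy model: two transactions group when they are committed within
// `group_interval` of each other.
struct History {
    group_interval: Duration,
    last_transaction_at: Option<Instant>,
}

impl History {
    fn should_group(&mut self, now: Instant) -> bool {
        let group = self
            .last_transaction_at
            .map_or(false, |last| now - last <= self.group_interval);
        self.last_transaction_at = Some(now);
        group
    }
}
```

A test that wants grouping sets `group_interval` generously; one that
doesn't leaves it at `Duration::ZERO`, so the result no longer depends
on how long intermediate work happens to take.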
This method returns the anchor range associated with each edit. These
ranges let you determine how each edit interacts with any existing
anchor range the edit touched.
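As a sketch of the bookkeeping this enables, here is one way to classify
the interaction once both ranges are resolved to offsets (all names here
are illustrative, not the real API):

```rust
use std::ops::Range;

/// How an edit's resolved range relates to an existing tracked range,
/// e.g. a selection or a diagnostic.
enum Interaction {
    Before,   // edit ends at or before the tracked range
    After,    // edit starts at or after the tracked range
    Inside,   // edit is fully contained in the tracked range
    Overlaps, // edit straddles one of the tracked range's endpoints
}

fn classify(edit: &Range<usize>, tracked: &Range<usize>) -> Interaction {
    if edit.end <= tracked.start {
        Interaction::Before
    } else if edit.start >= tracked.end {
        Interaction::After
    } else if tracked.start <= edit.start && edit.end <= tracked.end {
        Interaction::Inside
    } else {
        Interaction::Overlaps
    }
}
```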
As part of #1405, we changed the way we performed undo and redo to
support combining transactions that were not temporally adjacent for
IME purposes.
That release introduced a bug that caused divergence when performing
undo: we were only changing the visibility of fragments whose insertion
id was contained in the undo operation. However, an undo operation also
affects deletions, which we were mistakenly not considering. Randomized
tests caught this, but I guess we didn't run enough of them.
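In fragment terms, the fix looks roughly like this (a sketch with
simplified types and a plain set of undone edit ids, rather than the
buffer's actual undo bookkeeping):

```rust
use std::collections::HashSet;

type EditId = usize; // stand-in for the real edit id type

struct Fragment {
    insertion_id: EditId,      // the edit that inserted this text
    deletion_ids: Vec<EditId>, // edits that deleted (part of) it, if any
}

/// The buggy version consulted only `insertion_id`. The fix also accounts
/// for deletions: a fragment is visible when its insertion has not been
/// undone and every deletion touching it has been.
fn is_visible(fragment: &Fragment, undone: &HashSet<EditId>) -> bool {
    !undone.contains(&fragment.insertion_id)
        && fragment.deletion_ids.iter().all(|id| undone.contains(id))
}
```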
Now, instead of locating the fragments associated with a transaction
via versioned offset ranges, we locate them using the transaction's
edit ids. To make this possible, buffers now store a new map called
`insertion_slices`, which lets you look up the ranges of insertions
that a given edit affected.
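The shape of that map, sketched with simplified types (the buffer's real
id and collection types differ):

```rust
use std::collections::BTreeMap;
use std::ops::Range;

type EditId = usize; // stand-in for the real edit id type

/// A contiguous piece of a single insertion that an edit touched.
struct InsertionSlice {
    insertion_id: EditId, // which insertion the slice belongs to
    range: Range<usize>,  // offsets within that insertion's original text
}

/// Edit id -> slices of insertions affected by that edit. Given a
/// transaction's edit ids, these slices locate the exact fragments to
/// toggle when undoing or redoing.
type InsertionSlices = BTreeMap<EditId, Vec<InsertionSlice>>;
```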
Co-authored-by: Antonio Scandurra <antonio@zed.dev>
* Moving the logic from Rope to text::Buffer makes it easier
to keep the Rope in sync with the fragment tree.
* Removing carriage return characters is lossier, but is much
simpler than incrementally maintaining the invariant that
there are no carriage returns followed by newlines (see the
sketch after this list). We may want to do something smarter
in the future.
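For the second point, the normalization amounts to something like this
sketch (where exactly it runs in the ingestion path isn't shown):

```rust
/// Dropping every carriage return when text enters the buffer is lossy
/// for bare '\r' characters, but it trivially guarantees the rope never
/// contains a "\r\n" pair, instead of maintaining that invariant across
/// every edit.
fn normalize_line_endings(text: &str) -> String {
    text.replace('\r', "")
}
```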
Co-authored-by: Keith Simmons <keith@zed.dev>
The fingerprint is calculated in `Rope` using the `bromberg_sl2`
homomorphic hash function: it determines the fingerprint of each chunk
and composes the chunk fingerprints into a single fingerprint for the
entire rope that is equivalent to hashing all of the rope's bytes at
once.
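A sketch of the composition (the `hash_strict` entry point and the `Mul`
impl on `HashMatrix` are my reading of the crate's API, so treat the
exact names as assumptions):

```rust
use bromberg_sl2::{hash_strict, HashMatrix};

/// Fold per-chunk fingerprints into one fingerprint for the whole rope.
/// The hash is homomorphic: hash(a ++ b) == hash(a) * hash(b), so
/// composing chunk hashes with matrix multiplication is equivalent to
/// hashing all of the bytes at once. The hash of the empty string is
/// the identity element.
fn rope_fingerprint<'a>(chunks: impl IntoIterator<Item = &'a str>) -> HashMatrix {
    chunks
        .into_iter()
        .fold(hash_strict(b""), |acc, chunk| {
            acc * hash_strict(chunk.as_bytes())
        })
}
```

The homomorphic property is also what lets the fingerprint be maintained
incrementally in the rope's summary tree: each node can cache the product
of its children's fingerprints instead of rehashing on every edit.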